ISSN: 3049-2335

Self-Attesting Intelligence: A Framework for Inherently Verifiable AI Systems

Original Research (Published On: 20-Aug-2025)
DOI: https://dx.doi.org/10.54364/cybersecurityjournal.2025.1115

Ritam Rajak, Zareen Hossain, Tansuhree Das and Abhinandan Pal

Adv. Know. Base. Syst. Data Sci. Cyber., 2(2): 291-309

Ritam Rajak : Dept. of CSE (AIML), Moodlakatte Institute of Technology, Kundapura, India

Zareen Hossain : Dept. of Law, Brainware University, India

Tansuhree Das : Dept. of Law, Brainware University, India

Abhinandan Pal : Dept. of Law, Brainware University, India



Article History: Received on: 19-Jun-25, Accepted on: 13-Aug-25, Published on: 20-Aug-25

Corresponding Author: Ritam Rajak

Email: ritamrjk@gmail.com

Citation: Ritam Rajak (2025). Self-Attesting Intelligence: A Framework for Inherently Verifiable AI Systems. Adv. Know. Base. Syst. Data Sci. Cyber., 2(2): 291-309



Abstract

The proliferation of complex, opaque Artificial Intelligence (AI) systems in high-stakes societal fields, including finance, healthcare, and law, has introduced a serious accountability gap. Their opaque decision-making logic frustrates the tracing of causation and liability under legal principles of due process, and it undermines trust when these so-called black-box models produce harmful outcomes. While the paradigm of post-hoc explainable AI (XAI) is a valuable means of producing human-readable explanations, it can at times yield unstable, partly misleading, and generally inadequate rationalizations that cannot support the rigorous evidentiary expectations of legal and regulatory review. This paper introduces a paradigm shift: subjective, post-hoc explanation is replaced by objective, a priori verifiability. We present Self-Attesting Intelligence, a new architectural framework that aims to guarantee the correctness of an AI system's operation with respect to a prescribed set of formal rules. It comprises three underlying components: a Declarative Knowledge Limiter (DKL), which translates legal and ethical rules into a machine-enforceable format; a Constrained Inference Engine (CIE), which enforces those rules in real time during the model's decision process; and an Attestation and Proof Generation layer, which uses cryptographic techniques, namely Zero-Knowledge Proofs (ZKPs), to generate an unforgeable Certificate of Compliance for each decision. This certificate mathematically demonstrates that the system operated within its legally mandated constraints without disclosing any sensitive input data or proprietary model information. By integrating compliance directly into the system's design, the framework inverts the locus of accountability: legal and regulatory scrutiny is directed at the rules established by people rather than at the poorly understood mechanism itself. We discuss the far-reaching implications of this technology for enabling automated auditing, redefining legal responsibility, and establishing a standard of care for AI development. Finally, we address the primary challenges to implementation, including the computational overhead of cryptographic proof generation and the normative difficulty of translating ambiguous ethical guidelines into formal logic, outlining key areas for future research.
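To make the three-layer architecture concrete, the sketch below shows one plausible shape the DKL, CIE, and attestation pipeline could take. All class and function names (Rule, constrained_inference, attest), the credit-decision rule, and the field names are illustrative assumptions, not the paper's API; the hash commitment stands in for a real Zero-Knowledge Proof, which would be produced by a dedicated ZKP toolchain rather than a plain digest.

```python
# Minimal sketch of a Self-Attesting Intelligence pipeline, under the
# assumptions stated above. The "proof" here is a hash-based placeholder,
# NOT an actual ZKP.
import hashlib
import json
from dataclasses import dataclass
from typing import Callable

# --- Declarative Knowledge Limiter (DKL): rules as machine-enforceable predicates.
@dataclass
class Rule:
    rule_id: str
    description: str
    predicate: Callable[[dict, dict], bool]  # (inputs, decision) -> compliant?

# Hypothetical example rule for a credit-decision setting.
NO_PROTECTED_ATTRIBUTES = Rule(
    rule_id="R-001",
    description="Decision must not consume protected attributes.",
    predicate=lambda inputs, decision: "ethnicity" not in inputs,
)

# --- Constrained Inference Engine (CIE): enforce rules at decision time.
def constrained_inference(model: Callable[[dict], dict],
                          inputs: dict, rules: list[Rule]) -> dict:
    decision = model(inputs)
    violations = [r.rule_id for r in rules if not r.predicate(inputs, decision)]
    if violations:
        raise RuntimeError(f"Decision blocked; violated rules: {violations}")
    return decision

# --- Attestation layer: emit a Certificate of Compliance.
# Placeholder: a salted hash commitment over the decision. A real system
# would replace this with a ZKP proving rule satisfaction without revealing
# inputs or model parameters.
def attest(decision: dict, rules: list[Rule], salt: bytes) -> dict:
    digest = hashlib.sha256(
        salt + json.dumps(decision, sort_keys=True).encode()
    ).hexdigest()
    return {
        "rules_checked": [r.rule_id for r in rules],
        "decision_commitment": digest,  # reveals nothing about raw inputs
    }

if __name__ == "__main__":
    toy_model = lambda x: {"approve": x.get("income", 0) > 50_000}
    inputs = {"income": 72_000}
    decision = constrained_inference(toy_model, inputs, [NO_PROTECTED_ATTRIBUTES])
    print(attest(decision, [NO_PROTECTED_ATTRIBUTES], salt=b"per-decision-nonce"))
```

Note the division of labor this sketch assumes: the DKL fixes the rules before any inference runs, the CIE refuses to emit a non-compliant decision rather than explaining one after the fact, and the certificate commits to the outcome without exposing the inputs, mirroring the privacy property the abstract attributes to ZKPs.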
