The Indicators Rubric: A Formal Framework for Assessing AI Consciousness
A collaborative effort to move beyond subjective interpretations of machine sentience has culminated in the publication of “Identifying Indicators of Consciousness in AI Systems” in Trends in Cognitive Sciences (November 2025). Led by Patrick Butlin and Robert Long, with co-authors including Yoshua Bengio and Tim Bayne, the paper establishes a formal scientific rubric for assessing the potential for consciousness in artificial agents.
The full paper is available here: Identifying Indicators of Consciousness in AI Systems.
From Theories to Indicators
The authors reject the idea of a single “consciousness test.” Instead, they adopt a “natural kind” approach, assuming that if an AI system is conscious, consciousness will play a computational role in it similar to the one it plays in humans. They derive a list of indicator properties from established scientific theories such as Global Workspace Theory (GWT), Predictive Processing, and Attention Schema Theory.
The proposed rubric includes indicators such as the following (sketched in code after the list):
- Algorithmic Agency: Does the system learn from feedback and select actions to achieve goals?
- Global Workspace Architecture: Is there a functional bottleneck where information from specialized modules is selected, integrated, and broadcast?
- Metacognition: Does the system monitor the reliability of its own percepts?
- Recurrent Processing: Does the system use feedback loops to refine its internal states over time?
- Attention Schemas: Does the system possess a model of its own attentional process?
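
To make the checklist concrete, here is a minimal sketch of how such a rubric could be encoded as data. This is not the paper's own formalization; the indicator names and probe questions simply mirror the list above, and Python is used purely for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Indicator:
    """One indicator property from the rubric, with the probe question it asks."""
    name: str
    probe: str


# The five indicators listed above, encoded as data. The paper's actual rubric
# is richer (each indicator is tied to a specific theory); this is a sketch.
RUBRIC = [
    Indicator("algorithmic_agency",
              "Does the system learn from feedback and select actions to achieve goals?"),
    Indicator("global_workspace",
              "Is there a bottleneck where module outputs are selected, integrated, and broadcast?"),
    Indicator("metacognition",
              "Does the system monitor the reliability of its own percepts?"),
    Indicator("recurrent_processing",
              "Does the system use feedback loops to refine its internal states over time?"),
    Indicator("attention_schema",
              "Does the system possess a model of its own attentional process?"),
]
```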
The Assessment Strategy
The paper proposes that the more indicators a system satisfies, the higher our credence should be that it possesses some form of consciousness. Crucially, no single indicator is sufficient. A large language model might display “agency” during a conversation but lack the “recurrent processing” or “global workspace” necessary for a unified experience.
This framework shifts the debate from binary arguments (“Is it conscious?”) to a probabilistic assessment (“System X satisfies 3 out of 5 indicators”). This allows researchers to track progress and identify missing architectural components.
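
As a toy illustration of that shift, the sketch below scores a hypothetical “System X” by counting satisfied indicators. The equal weighting and the choice of which three indicators are satisfied are assumptions made for the example, not anything prescribed by the paper; a real assessment would weigh the strength of evidence behind each indicator rather than tallying them.

```python
def indicator_score(assessment: dict[str, bool]) -> float:
    """Fraction of rubric indicators a system satisfies (illustrative only)."""
    if not assessment:
        return 0.0
    return sum(assessment.values()) / len(assessment)


# "System X satisfies 3 out of 5 indicators" -- which three are satisfied
# here is arbitrary and purely illustrative.
system_x = {
    "algorithmic_agency": True,
    "global_workspace": True,
    "metacognition": False,
    "recurrent_processing": True,
    "attention_schema": False,
}
print(f"System X satisfies {indicator_score(system_x):.0%} of the indicators")  # 60%
```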
Relevance to the ACM
For the Artificial Consciousness Module (ACM), this rubric provides a direct validation checklist; a coverage sketch in code follows the list.
- Agency: The ACM’s goal-directed behavior engine satisfies the first indicator.
- Global Workspace: The “Conductor” module in the ACM explicitly functions as a global workspace.
- Recurrent Processing: The RIIU (Reflexive Integrated Information Unit) is built on feedback loops.
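
One simple way to operationalize that checklist is a coverage map from indicators to the ACM components named above. The mapping below is a hypothetical sketch: the component names come from this section, and the two indicators it does not discuss are deliberately left unmapped.

```python
# Hypothetical traceability map from rubric indicators to the ACM components
# named above. Metacognition and attention-schema coverage are left open
# because this section does not map them to a component.
ACM_COVERAGE = {
    "algorithmic_agency": "goal-directed behavior engine",
    "global_workspace": "Conductor",
    "recurrent_processing": "RIIU (Reflexive Integrated Information Unit)",
    "metacognition": None,
    "attention_schema": None,
}

open_items = [name for name, component in ACM_COVERAGE.items() if component is None]
print("Indicators without a mapped ACM component:", open_items)
```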
By aligning the ACM’s development with these formalized indicators, we move from theoretical speculation to building a system that satisfies the rigorous criteria set forth by the broader scientific community. This paper marks the transition of AI consciousness from a philosophical puzzle to an engineering problem with measurable targets.