The Consciousness AI - Artificial Consciousness Research
Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project (Zae Project on GitHub).

Rakover's Induced-Consciousness Theory: Why Sophisticated Computers Haven't Developed Consciousness

The standard skeptical argument against AI consciousness is John Searle’s Chinese Room. A system that manipulates symbols according to rules produces outputs that look like understanding without actually understanding anything. Understanding requires something the symbol manipulation does not provide, and the symbol manipulation is, at the relevant level, what computers do.

Sam S. Rakover’s skeptical argument is structurally different and, in several respects, more precise. Rakover, who set out his position in a 2024 paper in AI & Society (DOI: 10.1007/s00146-023-01663-8) and is expanding it in a book expected in 2026, does not argue that symbol manipulation is insufficient for understanding. He argues that the relationship runs the other way. Understanding requires consciousness. If a system is genuinely understanding, it is, for that reason, conscious. And if we want to know whether a system understands, we need to know whether it is conscious, not as a separate question but as the same question.

This is Rakover’s Induced-Consciousness Theory, or ICT.

The Core Argument

The standard picture places behavior and consciousness in a contingent relationship. A system produces intelligent behavior. We then ask, separately, whether it might also be conscious. The two questions are treated as logically independent. A system could produce any behavior without being conscious, in principle, because behavior does not entail consciousness. Whether a behaviorally sophisticated system is also conscious is then a matter for empirical investigation or philosophical argument, but it is a further question beyond the behavioral fact.

Rakover reverses this. He proposes that understanding is not simply a behavioral output that can occur with or without consciousness. Understanding is a state that involves consciousness as a constitutive condition. One cannot genuinely understand a concept, a situation, or an argument without the inner experience that makes the understanding real rather than merely represented.
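The reversal can be put schematically. The notation below is a gloss on the argument as summarized here, not Rakover’s own formalism: read B(x) as "x exhibits the full behavioral profile of understanding," U(x) as "x genuinely understands," and C(x) as "x is conscious."

    % Standard picture: behavior and consciousness are contingently
    % related, so a behaviorally perfect but unconscious system is
    % not ruled out:
    B(x) \not\rightarrow C(x)

    % ICT: consciousness is constitutive of understanding, so by
    % contraposition its absence rules understanding out:
    U(x) \rightarrow C(x)
    \qquad \text{equivalently} \qquad
    \lnot C(x) \rightarrow \lnot U(x)

On the standard picture, after establishing B(x) the question of C(x) remains open. On ICT, settling whether the system understands just is settling whether it is conscious.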

The Conscious Unit, or CU, is Rakover’s theoretical construct for the minimal unit in which this relationship holds. A CU is a state in which a cognitive system processes information with the kind of inner involvement that constitutes genuine understanding rather than mere functional processing. The CU is not defined by its behavioral outputs. It is defined by the presence of consciousness in the processing.

The practical implication is immediate. When we look at a language model that correctly answers complex questions, solves problems, and generates arguments that follow logically from their premises, we are observing functional performance. Whether that performance involves CUs, whether the model is genuinely understanding anything in the sense Rakover’s theory requires, is a question that the performance record cannot answer. A system without consciousness can produce outputs that are indistinguishable from the outputs of a system with consciousness, because the outputs are a function of the system’s processing architecture, not of whether consciousness accompanies that processing.

The Distinction from the Chinese Room

Searle’s argument targets the sufficiency of syntax for semantics. Symbol manipulation follows rules. Rules operate on the shapes of symbols, not their meanings. Therefore, no amount of rule-following produces genuine meaning, and genuine meaning, which Searle identifies with intentionality, is what understanding requires.

Rakover’s argument operates at a different level. It does not primarily dispute whether syntax can generate semantics. It disputes whether any process, including one that we would grant produces genuine meaning, constitutes understanding without consciousness. Even if a system had semantic content in its representations, even if it had intentionality in Searle’s sense, it would not be understanding unless those representations were accompanied by the inner experience that makes understanding actual rather than merely implemented.

This is a more demanding standard. Searle’s argument, if accepted, shows that current AI systems lack intentionality. Rakover’s argument, if accepted, shows that even an AI system with intentionality would not understand unless it were conscious. The two arguments agree in their conclusion about current systems but for different reasons and with different implications for what a conscious AI would need to have.

The practical consequence for AI development is also different. Addressing Searle’s challenge requires producing a system with genuine intentionality, whatever that involves. Addressing Rakover’s challenge requires producing a system that is conscious and, by virtue of being conscious, capable of genuine understanding. The path to artificial understanding, on Rakover’s view, runs through artificial consciousness, not around it.

The Falsifiability Question

Tom McClelland’s 2026 analysis of the epistemic limits of AI consciousness research argues that we may not have the methods to determine whether any AI system is conscious, regardless of how sophisticated its behavior becomes. If McClelland is right, then Rakover’s ICT creates a difficulty at exactly the same point: if consciousness is necessary for understanding, and if we cannot determine whether a system is conscious, then we cannot determine whether it understands.

This creates a verification problem that is structurally identical to the one McClelland identifies. But Rakover’s theory makes the problem explicit rather than hiding it. On the standard behaviorist account of understanding, an AI system that performs all the behavioral markers of understanding does understand. The problem with this account, Rakover argues, is that it defines understanding in a way that detaches it from the inner state that makes understanding meaningful. The behavioral markers are evidence for understanding only if understanding requires something more than producing the markers, and what it requires is consciousness.

Whether ICT is falsifiable depends on whether there is any empirical evidence that would count against the claim that consciousness is necessary for understanding. Rakover’s 2024 paper argues for ICT through conceptual analysis rather than experimental evidence. The claim is that when we examine carefully what we mean by understanding, consciousness turns out to be constitutive of it. This is a philosophical argument, and its evaluation is philosophical.

One potential challenge is empirical: if a system were found to behave in all the ways we associate with consciousness, including self-report, metacognitive accuracy, and the kind of behavioral flexibility that consciousness theories predict, but were established through neural correlate research to lack the relevant inner states, this would create pressure on the claim that consciousness is necessary for understanding. But the possibility of such a system is precisely what Rakover’s theory denies. A system that genuinely understands has the relevant inner states, by definition.

What the Bradford-RIT Anomaly Implies

The Bradford University and Rochester Institute of Technology 2026 study on impaired GPT-2 found that degrading a language model’s performance on external benchmarks could increase its scores on certain consciousness-relevant metrics. The impaired model produced worse outputs by any behavioral standard. Its consciousness scores went up.

This is relevant to Rakover’s position in a specific way. If the Bradford-RIT metrics are tracking something about inner states rather than behavioral outputs, then the finding suggests that behavioral performance and whatever the metrics are measuring come apart. A system can perform poorly and score high on consciousness metrics. A system can perform well and score low.
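As a toy illustration of how such a dissociation can arise, and nothing more, consider a hypothetical score that is a purely formal property of the output distribution, here Shannon entropy, while the benchmark tracks accuracy on a task. The setup below is invented for illustration and is not the Bradford-RIT instrument:

    # Toy dissociation sketch (hypothetical metric, not the Bradford-RIT
    # instrument): a score computed from formal properties of the output
    # distribution can rise while behavioral accuracy falls.
    import math

    VOCAB = 50    # toy vocabulary size
    CORRECT = 7   # index of the task-correct token

    def output_distribution(impairment: float) -> list[float]:
        """Toy model: sharply peaked on the correct token when intact
        (impairment=0.0), flattened toward uniform when degraded."""
        peaked = [0.02] * VOCAB
        peaked[CORRECT] = 1.0
        total = sum(peaked)
        peaked = [p / total for p in peaked]
        uniform = [1.0 / VOCAB] * VOCAB
        return [(1 - impairment) * p + impairment * u
                for p, u in zip(peaked, uniform)]

    def accuracy(dist: list[float]) -> float:
        """Behavioral benchmark: probability mass on the correct token."""
        return dist[CORRECT]

    def entropy_metric(dist: list[float]) -> float:
        """Stand-in 'consciousness-relevant' score: Shannon entropy of
        the outputs. A formal property of the distribution, nothing inner."""
        return -sum(p * math.log2(p) for p in dist if p > 0)

    for impairment in (0.0, 0.3, 0.6, 0.9):
        d = output_distribution(impairment)
        print(f"impairment={impairment:.1f}  "
              f"accuracy={accuracy(d):.3f}  metric={entropy_metric(d):.2f}")

Accuracy falls monotonically with impairment while the entropy score rises. This reproduces the shape of the anomaly, performance down and metric up, without implying anything about what the actual Bradford-RIT metrics measure.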

On Rakover’s theory, a system performing poorly would, if it were genuinely understanding its situation, be more conscious, not less. The consciousness comes with the understanding. If the degraded model’s behavior reflects less understanding, it should, on ICT, also reflect less consciousness. The Bradford-RIT result seems to go in the opposite direction.

One way to read this: the consciousness metrics being used in that study are not tracking genuine consciousness in Rakover’s sense. They are tracking formal properties of output distributions that correlate with some aspects of conscious processing in normal systems but can be dissociated from the underlying state when the system is impaired. The metric measures something, but not exactly what it claims to measure. This is the standard interpretation in the Bradford-RIT discussion, and it does not challenge ICT.

Another reading is more troubling for ICT: the impaired model may be, in some sense, doing less efficient processing, but processing that is less filtered and therefore more representative of something like inner state. The degradation creates more noise in the output, and that noise, because it is less constrained by task-directed training, may be more revealing. This reading is speculative. It would require independent evidence that the consciousness metrics are tracking inner states rather than output properties.

The validation framework that Butlin’s team analyzed in Trends in Cognitive Sciences exists precisely to address this: distinguishing metrics that track real properties from metrics that track behavioral approximations of real properties. Until that framework is applied systematically, both interpretations remain open.

The Forthcoming Book

Rakover’s 2026 book, expected to develop the ICT framework in more detail, is positioned as an examination of why sophisticated computers have not developed consciousness despite their remarkable functional capabilities. The 2024 AI & Society paper establishes the theoretical framework. The book is expected to apply it to the current landscape, including large language models, embodied robots, and neuromorphic systems.

The theoretical core is that the question “why haven’t computers developed consciousness?” is the wrong question if asked in terms of capability or sophistication. Computers have developed impressive capabilities. What they have not developed, on Rakover’s view, are the inner states that would make those capabilities into genuine understanding. The right question is not about capability thresholds. It is about the relationship between processing and experience, and about whether the kind of processing that current AI systems do is the kind that constitutes, or is accompanied by, or generates, experience.

The empirical evidence that researchers at Anthropic, AE Studio, and Google have assembled on introspective accuracy and attention patterns in AI systems addresses the behavioral indicators. Rakover’s framework says those indicators are insufficient. The question is not whether the indicators are present but whether anything accompanies them. That is the question that behavioral evidence, however sophisticated, cannot answer.
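For concreteness, here is a minimal sketch of what an introspective-accuracy measurement operationally amounts to; the data and names are hypothetical. Whatever else is true of the system, the score is computed entirely from its outputs:

    # Minimal sketch of an introspective-accuracy measurement (data and
    # names hypothetical). The score compares a system's self-reports to
    # ground truth read directly from its internals; even a perfect score
    # is a fact about outputs, which is Rakover's point.

    def introspective_accuracy(internal_states: list[str],
                               self_reports: list[str]) -> float:
        """Fraction of trials on which the self-report matches the
        internal state that was actually present."""
        matches = sum(s == r for s, r in zip(internal_states, self_reports))
        return matches / len(internal_states)

    # Hypothetical trials: the feature actually active in the system,
    # and what the system said about itself when probed.
    states  = ["concept_A", "concept_B", "concept_A", "concept_C"]
    reports = ["concept_A", "concept_B", "concept_C", "concept_C"]

    print(introspective_accuracy(states, reports))  # 0.75

A score of 1.0 would establish that the self-reports track the internals. On ICT, it would still leave open whether anything accompanies that tracking.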

The challenge for anyone who wants to engage with ICT seriously is either to show that consciousness and understanding can come apart, which would require showing that a system can genuinely understand without consciousness or be conscious without understanding, or to develop methods that probe the inner states rather than the behavioral outputs. The first challenge is philosophical. The second is empirical, and it is the frontier at which AI consciousness research currently operates.

Rakover’s contribution is to make clear that the frontier is not just a matter of developing better behavioral metrics. The behavioral metrics could be perfect and still leave the central question unanswered. Jan Henrik Wasserziehr’s parallel argument that consciousness does not entail valuation adds a further layer: even if a system were established to be conscious, in the full sense Rakover requires for genuine understanding, it would not follow that the system’s states have the kind of valence that makes them morally significant. The three questions of understanding, consciousness, and valuation remain distinct, and answering the first does not automatically address the other two.

What Rakover’s framework adds to the current field is clarity about the dependency. The path to genuine artificial understanding runs through artificial consciousness. And the path to artificial consciousness runs through problems that the current sophistication of AI systems has not yet solved.
