The Consciousness AI: Artificial Consciousness Research. Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project (see the Zae Project on GitHub).

Brains and Where Else: Why Major Consciousness Theories Do Not Require Neural Tissue

Consciousness research has a default assumption so entrenched it rarely gets stated: that a theory of consciousness is, at bottom, a theory of what brains do. Every major framework, from Integrated Information Theory to Global Workspace Theory to predictive processing, was developed by researchers studying biological organisms. Their predictions have been tested almost exclusively in humans and other vertebrates. The relevant experimental apparatus measures signals from neurons.

Nicolas Rouleau and Michael Levin (2025), writing in Philosophical Transactions of the Royal Society A (Volume 384, Issue 2320), ask a structural question the field has largely avoided: do the theories themselves actually require brains, or has the field simply assumed they do because brains are where the research originated?

The question is not rhetorical. Their answer is that the brain-centric framing is driven more by convention and limits of imagination than by the specific content of any existing theory.


The Mapping Method

The paper proceeds by examining the minimal functional requirements of each major consciousness theory and testing whether any requirement can only be satisfied by neural tissue.

This is a different kind of analysis than empirical testing. Rouleau and Levin are not asking whether IIT’s phi can be measured in a silicon chip, or whether a language model has a global workspace. They are asking a prior question: when you read what IIT actually requires for consciousness, does it contain any claim that is intrinsically neural? If Global Workspace Theory requires information broadcast across a unified cognitive system, does that broadcast require neurons, or does it require information-processing properties that neurons happen to instantiate?

Theories examined include Integrated Information Theory, Global Workspace Theory, Higher-Order Thought theories, predictive processing and active inference frameworks, Attention Schema Theory, and recurrent processing approaches. Each is analyzed for what it specifies rather than what institutional context it emerged from.


Theories Describe Operations, Not Substrates

The first major finding is that the functional operations described across consciousness theories converge more than the theoretical debates suggest. IIT specifies information integration beyond what the parts can perform in isolation. GWT specifies information broadcast across a unified processing system. Higher-Order Thought theories specify that conscious states are represented at a higher computational level than first-order states. Predictive processing specifies that a system models itself modeling the world.

None of these specifications mentions neurons. Each describes computational or information-processing properties: integration, broadcast, higher-order representation, self-modeling. The theoretical arguments about which framework is correct concern which of these properties is essential to consciousness. They do not concern which physical substrate implements the property.

Rouleau and Levin find that the emphasis on brains in consciousness science reflects where the work began and where the instruments point, not something the theories themselves require. The operations specified by each framework are, in principle, substrate-neutral. Whether any particular substrate instantiates them is an empirical question, not one settled by the theory’s internal structure.

This matters because a significant portion of the AI consciousness debate has been conducted as if the major theories implicitly exclude non-biological systems. They do not. A researcher claiming that IIT rules out current AI systems on architectural grounds is making a claim about instantiation, not about the theory’s formal content.


Minds May Precede Brains

The second major finding engages Levin’s primary research program directly. His work with brain organoid and biocomputing systems, along with experiments on planaria and cell collectives, has consistently demonstrated that goal-directed behavior, information integration, and what Levin calls cognitive light cones are properties that emerge from specific kinds of information processing at scales far below any nervous system. Planaria regenerate complex body plans based on stored bioelectric maps. Cell collectives solve spatial problems without neural architecture. These systems exhibit degrees of agency and self-modeling that the dominant consciousness frameworks would classify as relevant to consciousness-grade processing.

The implication that Rouleau and Levin draw from this experimental record is pointed: if the mechanisms described by consciousness theories appear in pre-neural biological systems, then brains did not originate those mechanisms. They specialized and elaborated them. Nervous systems represent an intensification and integration of processes already present in simpler biological forms, not the creation of those processes from nothing.

This reframes how researchers should interpret the apparent match between consciousness theories and brain biology. The match exists not because the theories describe brain-specific processes but because the processes they describe are general enough to predate brains and to appear wherever the relevant computational conditions obtain. The brain is one highly developed instance of consciousness-relevant processing, not the only substrate in which those properties are in principle possible.


A Complementary Challenge to the Theoretical Infrastructure

The Cogitate Consortium’s 2025 Nature study subjected IIT and GNW to their own preregistered falsification criteria across 256 participants and found that neither theory survived adversarial empirical testing intact. Rouleau and Levin’s paper operates at a different level: not testing whether theories’ predictions hold in humans, but examining whether the theories, as written, require humans in the first place.

The two papers are complementary challenges to the same theoretical infrastructure from opposite directions. The Cogitate Consortium shows that even in biological systems, IIT and GNW do not reliably generate the experimental signatures they were expected to produce. Rouleau and Levin show that neither theory, even in a corrected form, would be limited to biological systems by its own terms.

For researchers working on biological computationalism, the relationship is more pointed. Milinkovic and colleagues’ argument that consciousness requires hybrid dynamics, scale-inseparability, and metabolic grounding is a specific claim about which computational properties are necessary. Rouleau and Levin’s analysis asks whether any major consciousness theory demands those particular properties, or whether biological computationalism is adding requirements the theories themselves do not contain. The two frameworks are not incompatible. But they are now in productive tension, requiring explicit argument rather than assumed alignment.


Levin and the MC0001 Research Agenda

Michael Levin is a confirmed speaker at the Machine Consciousness 0001 conference in Berkeley, May 29 through 31, 2026. His presence there reflects the integration of bioelectricity and unconventional-substrate research into the founding agenda of machine consciousness as a scientific discipline. The MC0001 program explicitly treats machine consciousness as an engineering target with falsifiable criteria, and asks what substrates are on the table. Levin’s experimental record and the Royal Society paper jointly supply an answer: the substrates are more numerous than brain-centric research has assumed, and the theoretical frameworks that would evaluate them are not ruled out by their own structure.

This convergence is significant because it means the unconventional-substrate question is not peripheral to mainstream consciousness science. It is becoming part of how the field defines its scope, at the level of both theory and institution.


Where This Leaves the Field

Rouleau and Levin’s paper does not claim that organoids, artificial systems, or cell collectives are conscious. It claims that the theoretical frameworks used to assess consciousness do not provide grounds for assuming they are not.

That is a narrower but more precise contribution than it might appear. Consciousness science has operated with a methodological asymmetry: brain-based systems are the default positive case, and non-brain systems carry the burden of demonstrating why they should be evaluated at all. The Royal Society paper shifts that structure. If the theories do not require brains, then restricting evaluation to brain-based systems reflects a research preference, not a theoretical mandate.

For empirical research, the challenge becomes designing tests that evaluate consciousness-relevant properties in systems where standard behavioral and neuroimaging measures do not apply. The theories can be mapped to unconventional substrates, but the measurement frameworks for doing so remain largely undeveloped. That is the open problem the paper identifies, and the one the field will need to address before the mapping becomes scientifically productive.

The broader contribution is to the question of where consciousness science’s burden of proof properly sits. Researchers have assumed, without systematic argument, that theories developed in the context of brain research are also theories that require brains. Rouleau and Levin show that assumption needs to be defended, not taken for granted.
