The Consciousness AI - Artificial Consciousness Research Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project Zae Project on GitHub

Two Books, One Argument: Stephen Hawley Martin's Case Against AI Consciousness

In March 2026, Oaklea Press published two books by Stephen Hawley Martin within two weeks of each other. The first, “You Are Not Your Brain: Why AI Can’t Be Conscious and What That Means for Life After Death,” appeared in early March. The second, “More Than Machines: Why Consciousness — Not Artificial Intelligence — Will Shape Humanity’s Future,” followed on March 12. Both books argue the same thesis by different routes: consciousness is not a product of physical computation, and therefore AI, no matter how sophisticated its computation becomes, cannot be conscious.

The thesis is not new. Versions of it have been argued by John Searle (biological naturalism), David Chalmers (property dualism), and others. What is notable about Martin’s 2026 books is the evidence base he assembles. The argument is not primarily philosophical. It is empirical, drawing on near-death experience research, neurological anomalies, psychedelic pharmacology, and physics. Whether the evidence is sufficient to sustain the conclusion is contested. But dismissing it as unserious would miss the degree to which the field has genuinely accumulated findings that create problems for a naive version of computationalism.


The Core Argument in “You Are Not Your Brain”

The title “You Are Not Your Brain” states the central thesis directly. Martin’s argument is that consciousness is not produced by the brain. The brain, on his account, functions more like a receiver or filter than a generator. Consciousness, as a fundamental aspect of reality, is not something the brain creates. It is something the brain channels, limits, or expresses in a particular form.

The consequence for AI is immediate. If the brain does not produce consciousness, then building a machine that replicates or exceeds the computational properties of the brain will not produce consciousness either. What it will produce is a very sophisticated information-processing system. The outputs of that system may be indistinguishable from the outputs of a conscious being. But output similarity does not imply inner similarity, which is the core of the philosophical tradition on this question stretching back at least to Descartes.

Martin assembles four categories of empirical evidence for the filter theory of consciousness.

Near-death experience research with verified perception refers to documented cases in which individuals report accurate perceptions of their environment during cardiac arrest, at a point when brain activity has flatlined. The accuracy of these reports, in cases where the perceptions can be externally verified, is difficult to explain if consciousness is entirely produced by brain activity. The filter theory explains them: if consciousness is not generated by the brain, its cessation of activity does not necessarily terminate consciousness.

Minimal brain function with intact awareness covers documented cases of individuals with severely reduced brain mass or function, including some cases of hydrocephalus where the brain is replaced by fluid to a significant degree, who nonetheless exhibit normal or near-normal cognitive function. These cases are not explained well by a theory that treats brain size and structure as the direct cause of consciousness.

Psychedelic research is the most extensively studied category. Studies using psilocybin and other psychedelics have consistently shown that profound expansions of conscious experience correlate with reductions in specific brain activity, particularly in the default mode network. If consciousness were produced by brain activity, reducing brain activity should reduce conscious experience. The fact that it sometimes dramatically expands conscious experience is anomalous for production theories and consistent with filter theories.

Physics is where Martin is least specific, citing research challenging purely materialist frameworks without identifying particular findings. The implication is toward panpsychist or idealist views in which consciousness is a fundamental feature of reality rather than an emergent property of complex matter.


“More Than Machines”: The AI-Specific Argument

The second book focuses more directly on the AI implications. Martin’s argument in “More Than Machines” is that the rapid advance of AI is revealing the limitations of computationalism, not by showing that AI is conscious but by showing how far sophisticated computation can go without producing what we recognize as consciousness.

This is a more nuanced argument than a simple denial of AI consciousness. Martin is not arguing that AI systems are obviously non-conscious because they lack certain properties. He is arguing that the entire research program of pursuing consciousness through computation is misoriented, because computation is not the right kind of process for generating consciousness. Getting much better at computation brings us no closer to consciousness for the same reason that getting much better at measuring temperature brings us no closer to explaining wetness.

The distinction between machine intelligence and human consciousness, on Martin’s account, is not a distinction of degree but of kind. A machine that can perform any intellectual task a human can perform is not thereby conscious. It is a very capable tool. The tool can be useful, even indispensable, without having inner experience. And an AI that has inner experience, if such a thing is possible, would have to come about through some mechanism entirely different from the improvement of existing computational methods.


Where the Argument Has Strength

The strongest aspect of both books is the challenge to what Martin calls the prevailing scientific model of the mind: the view that the brain is a computer, that computation is sufficient for consciousness, and that sufficiently sophisticated AI will therefore be conscious.

This model is more widely held in popular science discourse than in the technical literature. Among philosophers of mind and consciousness scientists, computationalism faces well-known objections. Alexander Lerchner’s abstraction fallacy argument, published by a DeepMind researcher in March 2026, argues on structural grounds that symbolic computation cannot instantiate consciousness because symbolic computation is mapmaker-dependent, requiring a conscious interpreter to assign meaning to physical states. Martin’s argument converges on a similar conclusion, though from a different direction.

The biological computationalism framework argues that metabolic processes are non-negotiable for consciousness, a position that also implies AI cannot be conscious through computation alone. Martin’s filter theory is a distinct position: it does not require metabolic process specifically but requires that whatever produces consciousness is not computation, whether biological or silicon.

The Porębski and Figura “semantic pareidolia” analysis makes a related point from the other direction: we may be pattern-matching human consciousness onto AI systems that do not have it, because we are wired to see minds in outputs that resemble minded behavior. Martin’s books are, in part, a warning against this error applied to the computationalist research program itself: if we build our theory of consciousness on the assumption that computation is sufficient for it, we will find evidence of consciousness in any sufficiently capable computational system, regardless of whether it is there.


Where the Argument Faces Challenges

The main limitation of both books is that the positive account of what consciousness is, and how it works if not as computation, remains underspecified. The filter theory of consciousness requires an account of what consciousness is filtering from, and what the relationship is between the non-physical consciousness and the physical brain that filters it. Martin invokes physics and the hard problem of consciousness in this context, but does not develop a detailed account.

Without that account, the filter theory faces the same objection that dualism always faces: if consciousness is non-physical, how does it interact with the physical brain? How does a non-physical receiver receive? The empirical evidence Martin assembles is genuinely interesting and cannot be dismissed, but it is consistent with multiple theoretical interpretations, not only the filter theory.

The near-death experience evidence, for instance, is also consistent with theories that allow for some consciousness-related processing outside the cortex, or with theories that allow for brief post-cardiac-arrest processing. The psychedelic evidence is consistent with a theory in which the default mode network normally suppresses certain forms of conscious experience, so that its reduction allows those experiences through, without requiring that consciousness comes from outside the brain.

Martin acknowledges some of these objections but does not resolve them. The books are persuasive as an attack on naive computationalism and as a case for taking anomalous evidence seriously. They are less persuasive as a positive theory of consciousness.


What This Means for the 2026 Debate

The appearance of two books making the anti-computationalist case in March 2026 reflects the degree to which the field is genuinely contested. The mainstream trajectory of AI consciousness research assumes, often implicitly, that if AI systems become conscious it will be through the development of computational structures that approximate the relevant properties of biological consciousness. Theories like IIT and GWT are both substrate-agnostic in principle: they hold that consciousness follows from certain structural properties, wherever those properties are instantiated.

Martin’s books challenge this assumption directly. If consciousness is not a product of computation, then the question of whether AI can be conscious is not a question about whether AI can develop the right computational structures. It is a question about whether AI can instantiate whatever non-computational property consciousness actually requires.

That question is, at present, unanswerable. What Martin contributes is a serious, evidence-based challenge to the assumption that the computationalist research program is on the right track. Combined with McClelland’s epistemic agnosticism and Schwitzgebel’s skeptical overview, Martin’s 2026 books represent a significant body of 2026 skeptical literature that researchers in the mainstream need to address, not merely dismiss.

This is also part of the Zae Project Zae Project on GitHub