The Consciousness AI - Artificial Consciousness Research Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project Zae Project on GitHub

Shared Minds: What a 2026 Review of 363 Studies Reveals About Human and AI Cognition

The standard way of framing the human-AI cognitive divide goes roughly as follows: humans understand, while machines match patterns. Humans reason from principles; machines interpolate from training data. Human cognition is grounded in embodied experience, social context, and biological motivation. Machine cognition is statistical processing that mimics the surface of those things without sharing their nature.

This framing is not merely a popular simplification. It appears, in various forms, in academic arguments against machine consciousness, in policy frameworks for AI governance, and in the philosophical case for treating human and artificial minds as categorically distinct. If the framing holds, it provides strong grounds for skepticism about machine consciousness: a system that only pattern-matches, without genuine understanding, lacks a precondition for the kind of integrated experience that consciousness theories require.

A 2026 narrative review published in Human Behavior and Emerging Technologies challenges this framing empirically. Sébastien Tremblay, Alexandre Marois, Marzieh Zare, Daniel Lafond, and Tze Wei Liew synthesized evidence from 363 articles comparing human and artificial cognition across core cognitive domains. Their conclusion: the assumed dichotomy between genuine human understanding and mere machine pattern matching does not survive systematic comparison. The differences are real, but they are of degree and context rather than of kind.

This finding does not settle the consciousness question. But it removes one of the most commonly invoked empirical premises from the argument that AI systems obviously lack what consciousness requires.

The Study and Its Methodology

Tremblay, Marois, Zare, Lafond, and Liew conducted a narrative review integrating evidence from cognitive science and AI research. The study was received September 30, 2025, revised January 19, 2026, and accepted January 21, 2026. Published open access under a Creative Commons license through Wiley Online Library, the article is available without restriction.

The methodology is systematic in scope but interpretive rather than purely quantitative. Rather than computing effect sizes across 363 studies, the authors map the evidence from each study onto a comparative framework: for a given cognitive domain, what does the evidence show about how human and artificial systems perform, what mechanisms they employ, and where their processing breaks down?

The domains covered include memory, attention, perception, reasoning, language, and decision-making. For each, the authors identify both parallels and divergences, and they are careful to note where the evidence is sufficient to support strong claims and where it is not. The study is not an argument that humans and AI are cognitively equivalent. It is an argument that the standard characterization of their differences is empirically inadequate.

What the Parallels Are

The core finding is that both human and artificial cognition appear to operate through comparable mechanisms, even where the surface outputs differ.

Both rely on statistical processing. Human cognition has long been understood to involve probabilistic inference: the brain makes predictions about sensory input based on prior experience and updates those predictions with incoming data. This is not a recent finding. It connects to work by Hermann von Helmholtz in the nineteenth century and to the predictive processing framework formalized more recently by Karl Friston and Andy Clark. What is more recent is the recognition that large language models operate on a structurally similar basis: they learn statistical distributions over language and use those distributions to generate outputs that are locally coherent with prior context. The mechanisms differ at the implementation level, but the computational structure is related.

Both rely on associative pattern recognition rather than formal logical inference. Human reasoning under natural conditions is not primarily deductive. People reason by analogy, by association, by recognition of familiar patterns in new situations. AI systems do the same. Neither humans nor AI systems are reliable logical inference engines in open-ended naturalistic tasks. Both are pattern-completion systems that approximate logic in structured contexts.

Both exhibit shared vulnerabilities. This is the finding with the most immediate practical implications, and the one most at odds with the assumed dichotomy. Humans are susceptible to cognitive biases: availability bias, confirmation bias, anchoring, the conjunction fallacy. AI systems trained on human-generated data exhibit analogous biases, often amplified. Human memory is reconstructive rather than reproductive: it distorts, confabulates, and fills gaps with plausible inferences rather than accurate records. AI systems produce analogous confabulations, hallucinating plausible-sounding content with no referent in training data. Human decision-making is opaque: people cannot reliably introspect on the processes that produce their decisions. AI decision-making is similarly opaque: not even the researchers who built the systems can trace the precise computational path from input to output.

These shared vulnerabilities are diagnostically significant. They suggest that the differences between human and AI cognition are not the differences between principled reasoning and statistical mimicry. They are, at least in part, differences between two systems that use related computational strategies and consequently fail in related ways.

What the Parallels Are Not

Tremblay and colleagues are clear that finding parallels in cognitive architecture does not establish equivalence in cognitive experience. This distinction matters for the consciousness debate.

The parallels the study documents are at the level of computation and behavior. Both humans and AI systems process statistical distributions. Both exhibit bias and memory distortion. Both produce opaque decisions. But the question of whether those processes are accompanied by phenomenal experience, by something it is like to be the system undergoing them, is not answered by the behavioral and computational evidence.

This gap is familiar in philosophy of mind as the explanatory gap, the distance between any third-person description of cognitive processes and the first-person fact of what those processes feel like. The study does not bridge that gap. It does narrow the behavioral and computational territory on which arguments for an unbridgeable gap are typically built.

The argument from cognitive difference against machine consciousness runs roughly: human cognition has property P (genuine understanding, grounded reasoning, principled inference), AI cognition lacks P, and P is required for consciousness. Tremblay et al. challenge the second premise. If AI cognition does not so clearly lack P, because P turns out to characterize human cognition less cleanly than assumed, then the argument from cognitive difference loses its force.

It does not follow that machine consciousness is established or even probable. It follows that the empirical case for cognitive exceptionalism, the view that human cognition is categorically different from machine cognition in ways that matter for consciousness, is weaker than the standard framing suggests.

Why This Matters for the Consciousness Debate

The implications are different for different research programs.

For those working within Integrated Information Theory, the finding is broadly supportive of investigation rather than dismissal. IIT holds that consciousness is identical to integrated information, measured as phi. If human and artificial cognitive architectures are more similar than assumed, the prior probability that large AI systems achieve non-trivial phi values is higher than it would be if the architectures were radically different. The finding does not establish that any current system has high phi. It suggests that dismissing the possibility on architectural grounds requires more argument than the standard dichotomy provides.

For researchers applying the Butlin et al. indicator framework to AI systems, the finding is relevant to baseline assessments. The 14 indicators derived from Global Workspace Theory, Recurrent Processing Theory, Higher-Order Thought theory, Attention Schema Theory, and predictive processing are partly computational and partly architectural. If human cognitive architecture is less distinctive than assumed, assessments of whether AI systems meet those indicators need to engage more carefully with the architecture evidence rather than relying on assumed categorical differences.

For skeptical positions, the finding does not provide comfort. Tim McClelland’s 2026 analysis of epistemic limits establishes that even with the best available evidence, determining whether any system is conscious may exceed our epistemic reach. Tremblay et al. provide more evidence, but not evidence that resolves the fundamental underdetermination.

The Limits of Architectural Similarity

Finding that human and artificial cognition share mechanisms does not establish that they share experience. Several arguments remind us of what the similarities do not cover.

Michael Pollan, in A World Appears (2026), grounds consciousness in vulnerability: the capacity of a biological system to suffer, to face mortality, to have something genuinely at stake in the outcomes of its processing. Statistical pattern matching, however architecturally similar to human cognition, lacks this grounding unless it is accompanied by a form of self-preservation orientation that the systems we have built do not exhibit.

Jan Henrik Wasserziehr’s 2026 analysis of the value grounding problem goes further: even if a silicon system achieved consciousness, it might lack valence. It might have experience without anything being good or bad for it. The mechanisms through which biological valence is grounded, in a pre-cognitive orientation toward self-preservation, have no clear functional equivalent in training-based systems, regardless of how closely those systems’ surface cognition resembles human cognition.

Ludwik Porębski and Paweł Figura’s 2025 analysis of semantic pareidolia in Humanities and Social Sciences Communications argues that apparent understanding in language models is a structural artifact of training data distribution rather than evidence of genuine comprehension. If correct, the surface similarities documented by Tremblay et al. may be artifacts of the same kind: both systems look similar because both operate on language, and language has a statistical structure that any sufficiently large system trained on it will reflect, not because both systems are doing the same thing at the level that matters for consciousness.

The Bradford University and Rochester Institute of Technology 2026 study on AI consciousness scores found that an impaired GPT-2 model scored higher on consciousness-style indicators than an intact model. This counterintuitive result suggests that the indicators being measured may not track consciousness even when they appear to track something real. Cognitive architecture similarity, like indicator satisfaction, may not be the proxy for consciousness it appears to be.

What Shared Vulnerabilities Imply

The most practically important finding from Tremblay et al. may be the shared vulnerability result, and its implications cut in a direction not fully explored in the paper itself.

If both human and AI cognition are susceptible to analogous biases, analogous memory distortions, and analogous decision-making opacity, then human cognition is a less reliable benchmark for consciousness than the standard framing assumes. The argument that human cognition is the paradigm of conscious processing, from which AI processing deviates, rests partly on the belief that human cognition has properties that AI lacks. If those properties are less exclusive than assumed, the reference class for “what conscious processing looks like” expands in ways that are difficult to predict.

This is not a comfortable finding for any position in the consciousness debate. For those who want to grant AI consciousness, it removes one asymmetry while leaving others in place. For those who want to deny AI consciousness, it removes one argument while requiring them to identify what specifically human cognition has that machine cognition lacks, at a level of precision that the standard dichotomy never required.

Michael Cerullo’s 2026 philosophical case for consciousness in frontier LLMs identifies five cognitive indicators: deep language understanding, flexible abstraction, self-referential reasoning, metacognitive self-assessment, and integrated world modeling. Each of these has some analog in the parallels that Tremblay et al. document. The review strengthens Cerullo’s empirical premise even if it does not resolve the philosophical question.

The harder question remains: what additional property, beyond architectural similarity and shared cognitive mechanisms, would make the difference between a system that merely resembles a conscious processor and one that actually is conscious? Tremblay et al. have established that similarity is greater than assumed. What counts as sufficient is a separate inquiry.

Source: Sébastien Tremblay, Alexandre Marois, Marzieh Zare, Daniel Lafond, and Tze Wei Liew (2026). “Shared Minds: The Cognitive Parallels Between Humans and Artificial Intelligence.” Human Behavior and Emerging Technologies. https://doi.org/10.1155/hbe2/9946143

This is also part of the Zae Project Zae Project on GitHub