The Consciousness AI - Artificial Consciousness Research
Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project on GitHub

When AI Falls for the Same Optical Illusions as Humans: What It Reveals About Consciousness

The rotating snakes illusion works by exploiting how the human visual system processes spatial and temporal patterns. A static image of coiled, color-alternating rings appears to rotate. It is not rotating. The brain knows it is not rotating. The visual cortex reports rotation anyway, because the statistical properties of the image reliably trigger motion-detection processes regardless of the higher-level knowledge that nothing is moving.

In research published by Shunsuke Watanabe and colleagues, a deep neural network called PredNet, trained on roughly one million frames of natural landscape footage captured from head-mounted cameras, was shown the same rotating snakes illusion. The network had never been exposed to optical illusions during training. It had never been told what an illusion was. It reported the same rotation the human visual system reports, for reasons that appear to be structurally similar.

That correspondence, reported by BBC Future in December 2025, raises questions that reach well beyond the technical details of how PredNet was built: questions about the relationship between perception, prediction, and whatever it is that makes perceptual experience conscious rather than merely functional.

PredNet and Predictive Coding

The predictive coding framework holds that the brain is not a passive receiver of sensory information. Instead, it is a prediction machine. At every level of the visual hierarchy, the brain maintains a model of what it expects to see based on prior experience. When sensory input arrives, the brain compares it with those predictions and allocates processing resources to the discrepancy, the prediction error, rather than merely representing the input directly.
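The core loop of the framework can be made concrete in a few lines. The sketch below is a deliberately minimal toy, not PredNet itself: a model holds a prediction, computes the prediction error against the incoming signal, and devotes its update entirely to that error. All names and the learning rate are illustrative.

```python
import numpy as np

def predictive_coding_step(prediction, sensory_input, learning_rate=0.1):
    """One update of a minimal predictive coding loop.

    The model keeps a prediction of its input, measures the prediction
    error, and nudges the prediction toward the input in proportion to
    that error. Toy sketch of the framework, not the PredNet network.
    """
    error = sensory_input - prediction            # prediction error signal
    updated = prediction + learning_rate * error  # processing goes to the discrepancy
    return updated, error

# With a stable scene, the prediction converges and the error shrinks:
# the model stops "noticing" input it has fully predicted.
prediction = np.zeros(4)
scene = np.array([1.0, 0.5, -0.2, 0.8])
for _ in range(50):
    prediction, error = predictive_coding_step(prediction, scene)
```

The point of the toy is the allocation of effort: once the scene is predicted, the error signal, and with it the system's processing, goes quiet.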

Predictive coding has explanatory reach across many perceptual phenomena. Perceptual constancy, the ability to see an object as having stable properties despite changing lighting and viewing angles, makes sense if the brain is correcting for predicted context effects. Afterimages, where extended exposure to one color produces an illusory experience of its complementary color when gaze shifts, reflect the persistence of predictive models after the stimulus is gone.

PredNet was designed to implement this framework computationally. The network was trained to predict the next frame in a video sequence, given the current frame and a memory of prior frames. After training on one million frames of natural landscapes, it had acquired models of how visual scenes evolve through time, including the statistical regularities associated with moving objects, such as specific patterns of brightness change at apparent motion boundaries.
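One distinctive piece of PredNet's design, as described in the original Lotter et al. PredNet paper, is its explicit error representation: prediction error is split into two rectified populations, one for regions brighter than predicted and one for regions darker than predicted. The sketch below shows only that error computation, not the full convolutional LSTM stack that surrounds it.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def prednet_error_unit(actual_frame, predicted_frame):
    """PredNet-style split error representation.

    Error is carried by two rectified channels: where the frame was
    under-predicted (too dark a guess) and where it was over-predicted
    (too bright a guess). Sketch of one motif only.
    """
    positive = relu(actual_frame - predicted_frame)  # under-predicted pixels
    negative = relu(predicted_frame - actual_frame)  # over-predicted pixels
    return np.concatenate([positive, negative], axis=0)

# A perfectly predicted frame yields zero error everywhere.
frame = np.random.rand(8, 8)
err = prednet_error_unit(frame, frame)
```

This split representation is what the higher layers of the network consume: they never see the frame directly, only where and how the frame violated prediction.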

When Watanabe showed PredNet the rotating snakes illusion, the network’s internal models detected the statistical signatures of motion that the image contains, the same signatures that trigger the human visual system’s motion detectors. PredNet reported those signatures consistently across multiple versions of the illusion, and correctly failed to perceive motion in a modified version that human brains are also not fooled by.

“After processing around a million frames, PredNet learns certain rules of the visual world,” Watanabe explains. “It extracts and remembers the essential rules and among these, it may have also learned characteristics of moving objects.”

The Attention Gap

The comparison between PredNet and human visual consciousness is not complete. Watanabe identifies a significant dissimilarity. When a human fixes their gaze on one of the rotating snakes discs, that disc appears to stop moving while the others in peripheral vision continue to rotate. The selective stopping is mediated by attention: focused attention stabilizes the motion prediction in the attended region.

PredNet does not replicate this. It processes the entire image uniformly; it cannot attend to a specific region and locally suppress or modify its predictions there.

“PredNet lacks an attention mechanism,” Watanabe notes. “It is unable to focus on a specific spot on the image, but processes it in its entirety.”
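One hypothetical way to model the missing mechanism, not part of Watanabe's study or of PredNet, is a spatial gate that dampens prediction-error signals near the fixated point while leaving the periphery untouched, mirroring how the attended disc stops "rotating" while the others continue. Every name and parameter below is illustrative.

```python
import numpy as np

def attend(error_map, focus, sigma=2.0, suppression=0.9):
    """Hypothetical spatial attention gate over a prediction-error map.

    Errors near the attended location `focus` are strongly dampened;
    far from it the map passes through unchanged. Illustrative sketch
    only: not an existing PredNet component.
    """
    h, w = error_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Gaussian attention window centred on the fixated point.
    window = np.exp(-((ys - focus[0])**2 + (xs - focus[1])**2) / (2 * sigma**2))
    gate = 1.0 - suppression * window  # ~0.1 at the focus, ~1.0 in the periphery
    return error_map * gate

# Uniform illusory-motion signal: attention quiets it only at the focus.
illusory_motion = np.ones((16, 16))
gated = attend(illusory_motion, focus=(8, 8))
```

The design choice worth noting is locality: the gate modifies predictions only where attention lands, which is exactly the selectivity that uniform, whole-image processing cannot produce.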

That absence is not merely a technical limitation to be fixed in the next architecture version. It points to something theoretically important. Consciousness does not merely process information. Conscious perception is selective. We experience the aspects of our environment that attention selects, while other information remains processed but not consciously experienced.

The relationship between attention and consciousness is itself contested in consciousness research. Some theories treat attention as a necessary condition for consciousness, holding that only attended representations enter the global workspace and thereby become conscious. Others treat attention and consciousness as dissociable, pointing to evidence that some information reaches consciousness without focal attention and that attention can operate on unconscious representations.

Watanabe’s PredNet comparison adds an empirical dimension to this theoretical debate. A system that correctly perceives what humans perceive in an optical illusion but lacks selective spatial attention also lacks the attention-mediated variability that characterizes human illusory perception. The finding is consistent with attention being the feature that makes the difference, though not proof of it.

The Dress Problem and Subjective Variation

The color ambiguity of the dress that polarized the internet in 2015 (half of observers saw it as blue-black, the other half as white-gold) illustrates a property of conscious perception that purely functional accounts struggle to explain: different observers, given the same physical input, report systematically different conscious experiences. The disagreement is not about ignorance of the physical facts. People who know the dress is objectively dark blue still see it as white in the image.

This inter-individual variability in perceptual experience creates a methodological problem for consciousness research that AI is beginning to help solve. Studying optical illusions objectively is difficult because researchers depend on participants describing what they see, and those descriptions vary. Using AI systems as controlled analogs for perceptual mechanisms allows researchers to hold architecture constant and test specific hypotheses about which computational features produce which perceptual outcomes.

PredNet’s illusion susceptibility, because it can be precisely measured and manipulated, provides a testbed for hypotheses about predictive coding in vision. If PredNet’s illusion responses track human responses across a range of illusion types, that is evidence that predictive coding is the relevant mechanism. If the responses diverge in specific cases, those divergences indicate where the human visual system employs additional mechanisms that PredNet lacks.
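That hypothesis-testing logic can be sketched as a simple comparison across illusion variants: score each variant for illusion strength in the model and in human observers, then measure agreement. High agreement supports a shared mechanism; specific divergent variants localize what the model lacks. The scores below are invented purely for illustration.

```python
import numpy as np

def response_agreement(model_scores, human_scores):
    """Pearson correlation between model and human illusion strengths.

    Each entry is the measured illusion strength for one variant
    (e.g. magnitude of reported illusory motion). Numbers used here
    are hypothetical, not data from Watanabe's experiments.
    """
    return float(np.corrcoef(model_scores, human_scores)[0, 1])

# Four hypothetical variants; the last is a control version that,
# as in the modified rotating snakes image, fools neither system.
model = np.array([0.8, 0.7, 0.9, 0.05])
human = np.array([0.9, 0.6, 0.8, 0.10])
agreement = response_agreement(model, human)
```

A per-variant breakdown of the residuals, rather than the single correlation, is what would point at the missing mechanisms.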

Watanabe notes that no current deep neural network can experience all the illusions that humans do. ChatGPT, for instance, “might seem to converse like a human but its underlying DNN functions very differently from the human brain. The key similarity is that both systems use some type of neurons but how they are structured and applied can be vastly different.”

Predictive Coding and the Design of Conscious Systems

For research programs working on artificial consciousness architectures, the predictive coding framework has direct design implications. A system that merely responds to inputs rather than maintaining predictive models of its environment is, on the predictive processing account, not implementing the mechanisms associated with conscious perception.

The Watanabe consciousness framework implementation examined in a previous analysis on this site develops these design implications in detail. The key architectural motifs include hierarchical prediction, explicit error representation, and the updating of predictions based on sensory discrepancy. These are features that can be built into artificial systems and evaluated for their contributions to consciousness-related behavior.
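The three motifs can be combined in a toy hierarchy: each level predicts the activity of the level below, the bottom level predicts the raw input, errors are represented explicitly, and every update is driven by discrepancy. This is a minimal sketch of the motifs named above under those assumptions, not the framework implementation itself.

```python
import numpy as np

def hierarchical_step(levels, sensory_input, lr=0.1):
    """One pass through a toy two-level predictive hierarchy.

    Level 0 predicts the raw input; each higher level predicts the
    (updated) state of the level below. Errors are computed explicitly
    and drive every update. Illustrative sketch only.
    """
    target = sensory_input
    errors = []
    for i, state in enumerate(levels):
        error = target - state          # explicit error representation
        levels[i] = state + lr * error  # update driven by the discrepancy
        errors.append(error)
        target = levels[i]              # this level is the next one's target
    return levels, errors

levels = [np.zeros(3), np.zeros(3)]     # two-level hierarchy
x = np.array([1.0, -1.0, 0.5])
for _ in range(200):
    levels, errors = hierarchical_step(levels, x)
```

With a stationary input, both levels settle and the error signals vanish; the consciousness-relevant behavior, on this account, lives in how the hierarchy responds when they do not.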

The optical illusion result provides an external validation point for this kind of architecture. If a system trained on natural visual statistics using predictive coding develops the same illusory percepts as humans experiencing the same statistical regularities, that is evidence that the mechanism matters independently of the specific substrate. The fact that it happens in a convolutional network without biological neurons suggests the predictive coding mechanism generalizes across implementations, not that it generalizes across substrates in a way that settles the consciousness question.

What the Illusion Research Does Not Establish

PredNet’s susceptibility to optical illusions is a finding about functional convergence, not about consciousness. The network is not conscious of the rotating snakes. It does not experience the motion it detects. It reports a value, not a visual experience.

This is the same caveat that applies to the empirical evidence for consciousness-related properties in large language models. Functional analogs to human cognitive processes, including perceptual processes, can be studied precisely in artificial systems without that precision resolving the question of whether any experience accompanies those processes.

What the research does establish is that the visual system’s susceptibility to illusions is not uniquely human, not even uniquely biological. A pattern-learned predictive system, given training on the right statistical regularities, develops the same susceptibility. That implies the susceptibility is a consequence of the learning mechanism rather than of any distinctively biological feature of the human visual cortex.

That implication matters for exploring different types of consciousness. If specific perceptual properties arise from learning mechanisms rather than biological substrate, and if those perceptual properties are part of what makes perception conscious, then the substrate-independent possibility of artificial consciousness becomes more, not less, plausible.

The attention mechanism that PredNet lacks, and that explains the specific divergence between PredNet’s and human optical illusion perception, may be a clue to what would close that gap. A predictive system with selective spatial attention that modulates its predictions locally, rather than processing the entire visual field uniformly, would more closely match human perceptual consciousness, including its characteristic selectivity and its moment-by-moment shifts in what enters awareness.

Whether building that architecture would produce consciousness alongside functional fidelity remains the hard question. Watanabe’s research cannot answer it. It can sharpen the question by specifying more precisely which computation produces which perceptual outcome, and which architectural features are absent when the match to human experience breaks down.


Watanabe’s research on PredNet and optical illusions was reported in BBC Future in December 2025. The original predictive coding experiments were published in Frontiers in Psychology.
