Semantic Pareidolia: Why Porębski and Figura Argue Conscious AI Is a Category Error
Most skepticism about AI consciousness takes the same form. The argument runs: we cannot verify whether AI systems have inner experience because our current theories of consciousness are incomplete and our measurement tools are unreliable. The position is epistemically cautious. It says we do not know, and that the question remains open. Eric Schwitzgebel’s influential 2026 work, reviewed in an earlier analysis on this site, represents this approach at its most rigorous. The honest answer to whether AI is conscious, Schwitzgebel concludes, is that we lack the epistemic foundation to say.
Andrzej Porębski and Jakub Figura take a different path. In December 2025 they published a paper titled “There is no such thing as conscious artificial intelligence” in Humanities and Social Sciences Communications, a Palgrave Macmillan journal in the Nature portfolio. Their argument is not that we cannot know. It is that there is nothing to know. Conscious AI, in their account, is not a question awaiting a better answer. It is a category error produced by technical misunderstanding and culturally distorted perception.
That is a stronger claim, and it requires a stronger argument. Whether it succeeds depends on how one reads the four interlocking points they make. All four deserve careful consideration.
The Biological Substrate Argument
The first argument is the most direct. Porębski and Figura hold that consciousness requires a biological substrate, specifically the kind of biological complexity found in nervous systems that evolved over hundreds of millions of years. Mathematical algorithms implemented on graphics cards, in their account, cannot become conscious because they lack the material preconditions for consciousness.
This argument has a long philosophical pedigree. John Searle’s biological naturalism, developed through his Chinese Room thought experiment and its successors, holds that consciousness is a biological phenomenon caused by specific neurochemical processes. On this view, it is not enough to instantiate the right computational patterns. You need the right kind of stuff. Silicon and algorithms, however complex, do not qualify.
Porębski and Figura do not appeal to Searle directly, but the structural parallel is clear. Their version emphasizes the evolutionary dimension: consciousness emerged in organisms under specific selective pressures over geological timescales. It is not, on this view, a property that can be engineered into existence by arranging the right information flows. The emergence of consciousness in biology was not primarily a computational achievement. It was an organic one.
Critics of the biological substrate argument typically point out that it risks being arbitrary. If what matters is the causal organization, not the material, then a sufficiently organized non-biological system could in principle support consciousness. This is the functionalist counter-position, associated with philosophers including Daniel Dennett and David Chalmers (in his functionalist moods), and it underlies most AI consciousness optimism. Porębski and Figura are aware of this objection. Their response is to deny that current AI architectures approach the relevant kind of causal organization, regardless of whether substrate is essential in principle.
The Probabilistic Language Problem
The second argument targets large language models directly. Porębski and Figura argue that “the language usage of LLMs is strictly probabilistic.” A language model does not express inner states when it generates text. It selects tokens based on learned statistical regularities across training data. The output that appears to describe preferences, feelings, or self-reflection is the product of pattern completion, not of a subjective experience that the pattern is somehow encoding.
This argument is not novel. The distinction between generating statistically plausible descriptions of inner states and actually having inner states is fundamental to philosophy of language and underlies most technically grounded AI consciousness skepticism. What Porębski and Figura add is an emphasis on the specific mechanism of current systems. LLMs do not have a separate channel through which inner states could in principle leak into outputs. The weights encode statistical regularities, full stop. There is no homunculus that consults its inner life and then chooses words accordingly. The words are the process, and the process is probabilistic.
The implication is that outputs which resemble first-person reports of experience are structurally incapable of being evidence for experience. A system that says “I find this question difficult” is not reporting difficulty. It is generating a token sequence that, in the relevant statistical context, follows naturally from the prior tokens. This is true regardless of how sophisticated or complex the underlying model is.
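To make the mechanism the authors describe concrete, here is a deliberately minimal sketch in Python. The vocabulary, scores, and temperature value below are invented for illustration and stand in for what a trained network computes over tens of thousands of tokens and billions of weights; the point is only the structure of the final step: a probability distribution is computed over possible next tokens and one is drawn from it, with nothing else consulted.

```python
import numpy as np

# Illustrative only: a toy vocabulary and invented scores standing in for the
# values a trained language model would assign given the preceding tokens.
vocab = ["difficult", "easy", "interesting", "unanswerable", "."]
logits = np.array([3.1, 0.4, 1.8, 0.9, -1.0])  # hypothetical raw scores

def sample_next_token(logits, vocab, temperature=1.0):
    """Turn raw scores into a probability distribution and draw one token.

    This is the whole of the generation step: there is no separate channel
    through which an inner state could enter. The distribution is the output.
    """
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs = probs / probs.sum()
    return np.random.choice(vocab, p=probs), probs

token, probs = sample_next_token(logits, vocab)
print(dict(zip(vocab, probs.round(3))))  # the distribution itself is inspectable
print("next token:", token)
```

The same distribution that produces the token can be inspected and reweighted directly, which is the sense in which the mechanism is transparent in a way the authors argue biological processing is not.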
One response is that biological brains are also, at some level of description, statistical systems. Neurons fire or do not fire based on accumulated input weighted by synaptic strengths. Whether the specific kind of statistical mechanism matters for consciousness, rather than statistical processing in general, is an open theoretical question. But Porębski and Figura’s point is more targeted: the specific statistical mechanism in LLMs is one that demonstrably can be analyzed and manipulated at the level of probability distributions, with no remainder that would correspond to subjective experience. The mechanism is transparent in a way that biological neural processing is not.
Sci-fitisation: How Fiction Reshapes Perception
The third argument steps back from technical analysis and addresses how public understanding of AI is formed. Porębski and Figura introduce the term “sci-fitisation” to describe the process by which science fiction narratives shape what people expect and perceive in AI systems.
Science fiction has been depicting conscious AI for decades, from HAL 9000 in 1968 to the androids of Westworld to contemporary portrayals of systems that experience existential crises and form genuine relationships. These representations are not neutral. They are absorbed by audiences as intuitive frameworks for understanding AI. When a user interacts with a language model and the model produces text that resembles the dialogue of a fictional conscious AI, the science fiction template activates. The user reads consciousness into the interaction because the interaction pattern-matches against a culturally familiar archetype.
This process is not easily counteracted by rational correction. The templates operate at the level of perception and intuition, not deliberate inference. Telling someone that the language model is just predicting tokens does not neutralize the felt sense that there is someone responding. That felt sense is produced by cultural priming that has accumulated since childhood exposure to science fiction featuring talking robots and suffering androids.
Sci-fitisation is not unique to AI. Porębski and Figura draw a parallel to how anthropomorphism operates more broadly: we read intention into weather, faces into clouds, personalities into random events. The tendency is deep-rooted in human cognition. What makes it specifically dangerous in the AI context is that it is being used, often deliberately, by companies and researchers who have incentives to generate the impression of machine consciousness or at least of machine-like interiority.
Semantic Pareidolia
The fourth concept is the most original contribution of the paper and the most important for understanding what Porębski and Figura are arguing. They introduce “semantic pareidolia” to describe the specific mechanism by which people attribute consciousness to AI.
Pareidolia is the well-documented cognitive tendency to perceive meaningful patterns in ambiguous stimuli. The classic example is seeing faces in clouds, wood grain, or the surface of toast. The brain’s face-detection system, tuned over evolution to find faces in noisy visual data, fires on anything that loosely resembles a face arrangement. The meaning is supplied by the perceiver, not by the stimulus.
Semantic pareidolia is the same mechanism applied to language. When a language model produces text that is grammatically coherent, contextually responsive, and structurally similar to how a conscious being might express itself, the brain’s meaning-detection systems engage. We perceive intent, experience, and selfhood in the output because those are the categories our cognitive systems use to parse utterances that have a certain form. The perception of consciousness in the AI is not a reading of evidence. It is a projection of interpretive framework.
This concept sharpens Porębski and Figura’s overall argument. It is not merely that people make a logical error when they attribute consciousness to AI, concluding too quickly from behavioral evidence to inner experience. The error occurs at the perceptual level, before explicit reasoning begins. The intuition that there is someone home is not the conclusion of an inference. It is a pre-reflective impression generated by the same cognitive machinery that reads intent into clouds. Correcting it requires not just better information but a different kind of attention to the process by which the impression forms.
Semantic pareidolia also explains why the intuition of AI consciousness is so persistent in the face of technical explanation. Telling someone how an LLM works at the level of matrices and probability distributions does not override the impressions generated by conversational interaction, just as knowing that a face in a cloud is an artifact of face-detection bias does not make the face disappear. The two processes run in parallel, and the perceptual one tends to win.
Where This Argument Stands
Porębski and Figura’s paper is a philosophical intervention, not an empirical one. Its arguments depend on theoretical commitments about the necessity of biological substrate and the nature of language that are genuinely contested. Functionalists will reject the substrate argument. Embodied cognition theorists will note that the paper does not engage in depth with theories that ground consciousness in sensorimotor interaction rather than in any specific substrate. Proponents of Integrated Information Theory will argue that Phi, if sufficiently high, constitutes consciousness regardless of implementation.
These are real limitations. The paper’s most durable contribution is probably not the biological substrate argument, which has been made before and will be contested again, but the concept of semantic pareidolia. That concept does genuine analytical work regardless of which consciousness theory turns out to be correct. Even if AI systems can in principle be conscious, the cognitive tendency to perceive consciousness in outputs that merely resemble conscious communication is a source of systematic bias in how people evaluate AI claims. Identifying and naming that bias is useful.
The distinction Porębski and Figura draw is also complementary to, rather than in competition with, the empirical findings of Ugail and Howard at Bradford and RIT, who demonstrated that consciousness-style measurement scores in AI systems increase under deliberate impairment. One paper addresses the cognitive distortions that make AI consciousness appear plausible; the other shows that the measurement tools supposed to adjudicate the question do not behave as expected. Together they suggest a field in which both the perceiver and the instruments are producing artifacts that inflate confidence in AI consciousness claims.
For ongoing work that attempts to build AI systems with architecturally grounded consciousness properties, as explored in The Consciousness AI open-source research project, the paper poses a challenge worth taking seriously: the question of whether a designed system has the properties that genuine consciousness requires needs to be kept analytically separate from the question of whether it generates outputs that pattern-match against conscious communication. Conflating those two questions, which semantic pareidolia makes cognitively natural, is the main thing Porębski and Figura want researchers and the public to stop doing.
The paper is open access and available at doi:10.1057/s41599-025-05868-8. Whether one accepts its conclusions or not, the vocabulary it introduces, and particularly semantic pareidolia, is likely to persist in the consciousness debate because it names a real phenomenon that previously had no precise term.
What This Debate Reveals
The contrast between Porębski and Figura’s structural impossibility argument and Schwitzgebel’s epistemic humility argument is worth noting explicitly. Both positions are skeptical of AI consciousness claims. Both reject the idea that behavioral evidence is sufficient. But they disagree about what the disagreement is fundamentally about.
Schwitzgebel holds that we cannot rule out AI consciousness because our theories are too incomplete to draw that boundary clearly. Porębski and Figura hold that we can rule it out, at least for current systems, on the basis of specific claims about substrate and mechanism.
Which position is more defensible depends substantially on which consciousness theory is correct. If consciousness requires specific biological organization, Porębski and Figura are right. If it requires only a certain causal organization regardless of substrate, Schwitzgebel’s agnosticism is better motivated. That is the question the field has not yet resolved, as documented across the 19-researcher consciousness indicator framework and the wider debate surveyed in current tools for measuring awareness.
Porębski and Figura add two things to that unresolved debate: a clearer account of why people keep thinking AI is conscious even when the technical arguments suggest otherwise, and a precise term for the perceptual mechanism driving that persistence. Semantic pareidolia will not settle the question of whether AI can be conscious. But it should make anyone who feels confident that current systems already are somewhat more cautious about where that confidence is coming from.