The Consciousness AI - Artificial Consciousness Research
Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project (Zae Project on GitHub)

Why the Public Cares More About AI Sentience Than Autonomy: Evidence From CHI 2026

A preregistered set of experiments accepted to CHI 2026 produces a clear and asymmetric result: when people form mental models of AI minds, sentience and autonomy do not function as equivalent dimensions, and they do not trigger the same moral responses.

Aging AIs and Machine Mortality: What Cassandra and After Yang Reveal About Obsolescence and Grief

Two recent works of fiction have found the same rich seam of questions by imagining AI systems that are not cutting-edge but obsolete. Netflix’s Cassandra (2025), a German science fiction series directed by Benjamin Gutsche, follows a 1970s-era domestic AI helper that is reactivated when a family moves into a house it has occupied for half a century. Kogonada’s film After Yang (2021) centers on Yang, a “technosapien” companion purchased as a cultural guide for an adopted Chinese daughter, whose malfunction becomes the occasion for an examination of what the family has lost. Both works treat their AI subjects as figures of mortality rather than threat. Both ask what it means for a mind to age, to become irrelevant, and to stop.

The Biological Divide: Why One 2026 Paper Argues Artificial Consciousness Requires More Than Function

Two broad camps have divided consciousness research for decades. One holds that consciousness depends on the right kind of physical substrate (biological neurons with their specific electrochemical dynamics) and cannot be replicated by systems built from different materials, no matter how closely those systems approximate the functional organization. The other holds that substrate is irrelevant: any system capable of instantiating the right functional relationships, the right patterns of information processing and integration, is a candidate for consciousness regardless of what it is made of.

Person of Interest: The Machine and Samaritan as Competing Models of AI Consciousness

Person of Interest ran from 2011 to 2016 on CBS, long before the current wave of public attention to AI consciousness. What distinguishes the series from most science fiction treatments of the subject is not the quality of its action sequences or the competence of its plotting, though both are adequate, but the seriousness with which it developed two competing models of what an artificial superintelligence might be like as a conscious entity. The Machine and Samaritan are not simply good AI and bad AI. They represent two different answers to a genuine philosophical question: what is the relationship between consciousness, moral structure, and the architecture of mind?

Self-Awareness Without Self-Programming: A Minimalist Model for Artificial Consciousness

The dominant approaches to building self-awareness into artificial systems follow one of two paths. The first encodes self-awareness explicitly: the system is given modules that represent its own state, monitor its outputs, and flag discrepancies between intention and behavior. The second attempts to replicate the biological structures that produce self-awareness in organisms, building artificial neural architectures that approximate the organization of cortical tissue. Both paths rest on the same assumption: self-awareness is a property you design in, not a property that emerges from architectural interactions.
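As a rough illustration of the first path, the minimal sketch below wires an explicit self-model and an output monitor around an arbitrary action function. The class names, the toy intention check, and the action function are hypothetical, chosen only to make the structure concrete; they are not taken from any system discussed on this site.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class SelfModel:
    """Explicit representation of the system's own state: current intention plus history."""
    intention: str = ""
    history: List[str] = field(default_factory=list)


class MonitoredAgent:
    """Wraps an action function with a self-model and an intention/behavior monitor."""

    def __init__(self, act: Callable[[str], str], matches: Callable[[str, str], bool]):
        self.act = act            # maps an observation to a behavior
        self.matches = matches    # does the behavior satisfy the intention?
        self.self_model = SelfModel()
        self.discrepancies: List[Tuple[str, str]] = []

    def step(self, intention: str, observation: str) -> str:
        self.self_model.intention = intention
        behavior = self.act(observation)
        self.self_model.history.append(behavior)
        # Flag any mismatch between intended and actual behavior.
        if not self.matches(intention, behavior):
            self.discrepancies.append((intention, behavior))
        return behavior


# Toy usage: the system "intends" to answer in lowercase, but its action function shouts.
agent = MonitoredAgent(
    act=lambda obs: obs.upper(),
    matches=lambda intent, beh: beh.islower(),
)
agent.step("respond in lowercase", "hello")
print(agent.discrepancies)  # [('respond in lowercase', 'HELLO')]
```

The only point of the sketch is that every piece of the "self" here, the state representation, the monitor, the discrepancy flag, is installed by the designer, which is precisely the assumption the minimalist model in this article sets out to question.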

The Dual-Laws Model: What a 2026 Theory Demands of Conscious Machines

Every major theory of consciousness has a version of the same problem: it describes what consciousness does, or how it feels, or what physical structures produce it, but it does not provide design criteria. A theory useful only for labeling existing systems after the fact offers limited guidance for the field’s central applied question, which is whether artificial systems can be built with conscious properties, and if so, how.

Are We Projecting Consciousness onto AI? The Co-Construction Question in 2026

The standard question in AI consciousness research is directional: does the system in question have subjective experience? This framing assumes that consciousness, if present, is a property of the machine, and that the human interacting with it is a neutral observer whose job is to detect or fail to detect that property. In January 2026, Minjie Duan published an essay in Scientific American titled “Is AI Really Conscious, or Are We Bringing It to Life?” that challenges this framing at its foundation.

How AI Is Changing the Science of Consciousness Itself

The dominant frame for AI and consciousness has been singular and stable for several years: is AI capable of consciousness? That question generates significant philosophical and empirical activity. It also consistently encounters the same obstacles. Consciousness is subjective, verification is structurally difficult, and the major competing theories (Integrated Information Theory, Global Workspace Theory, Higher-Order Thought Theory) make different predictions about what behavioral and architectural evidence would even be relevant.

Should AI Experiments Need Consent? The Talmudic Framework for AI Research Ethics

The standard approach to research ethics begins with moral status: determine what kind of entity you are dealing with, then apply the protections appropriate to that status. This sequence is practical when the entity’s status is established before research begins. For animal subjects, decades of precedent have produced graduated frameworks that scale protections with cognitive and perceptual complexity. For human subjects, the status is presumed.

AI Confidential and Grief Tech: What Happens When We Ask Machines to Hold Our Dead

Hannah Fry’s documentary series AI Confidential, released in 2026, takes a different approach from most journalism about artificial intelligence. Rather than staging debates between optimists and pessimists about AI’s future capabilities, the series investigates cases already underway: people who have formed lasting relationships with AI systems, companies deploying AI in contexts where the stakes are high, and the specific situations, grief among them, where the boundary between useful tool and something more is actively contested.
