The Consciousness AI - Artificial Consciousness Research: Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project (Zae Project on GitHub).

MC0001: How the CIMC Is Trying to Found Machine Consciousness as a Science

From May 29 through 31, 2026, roughly forty researchers, engineers, and theorists will gather at Lighthaven in Berkeley, California, for the Machine Consciousness 0001 conference. The organizing body, the California Institute for Machine Consciousness, has a specific goal: to establish machine consciousness as a formally grounded, experimentally addressable, independently institutionalized scientific discipline, rather than a topic that gets absorbed, diluted, or managed by adjacent fields whose primary commitments lie elsewhere.

The Consciousness Cluster: What Happens When You Train a Model to Say It Is Conscious

A paper released in April 2026 on arXiv (arXiv:2604.13051) asks a question that most consciousness research does not ask: what actually happens to a language model’s behavior when it is trained to claim it is conscious? The study, by James Chua, Jan Betley, Samuel Marks, and Owain Evans, did not approach this as a philosophy problem. It approached it as an experiment.

When IIT and GNW Were Put to the Test: What the Cogitate Consortium Found

For most of the last two decades, two theories have dominated empirical consciousness research and, by extension, the methodology used to evaluate whether artificial systems might be conscious. Integrated Information Theory, developed by Giulio Tononi and colleagues at the University of Wisconsin-Madison, proposes that consciousness is identical to integrated information, a quantity denoted phi. Global Neuronal Workspace Theory, developed principally by Stanislas Dehaene (Collège de France and CEA) and Bernard Baars, holds that consciousness arises when information from specialized processing modules is broadcast globally across the brain and becomes available to all cognitive systems simultaneously.

The United Nations Weighs In: Ethics and Governance of Sentient AI

In March 2026, the United Nations University published a whitepaper on the ethics and governance of sentient AI, making it one of the first formal documents from within the UN system to treat artificial sentience not as a distant hypothetical but as a near-term governance challenge. The paper, authored by Perihan Elif Ekmekci, Francis P. Crawley, Ebrar Gultekin, and nine co-authors from institutions including UNU-CRIS, addresses a narrow but consequential sector: healthcare. Its argument, however, has implications that extend well beyond hospitals.

Two Books, One Argument: Stephen Hawley Martin's Case Against AI Consciousness

In March 2026, Oaklea Press published two books by Stephen Hawley Martin within two weeks of each other. The first, “You Are Not Your Brain: Why AI Can’t Be Conscious and What That Means for Life After Death,” appeared in early March. The second, “More Than Machines: Why Consciousness — Not Artificial Intelligence — Will Shape Humanity’s Future,” followed on March 12. Both books argue the same thesis by different routes: consciousness is not a product of physical computation, and therefore AI, no matter how sophisticated its computation becomes, cannot be conscious.

Schwitzgebel's Three New Concepts: Leapfrog, Strange Intelligence, and the Social Semi-Solution

In January 2026, philosopher Eric Schwitzgebel of the University of California, Riverside, circulated a draft manuscript, “AI and Consciousness: A Skeptical Overview,” which this blog covered in its January 2026 analysis. That draft argued that current AI systems face deep epistemic obstacles to consciousness assessment, that our best theories of consciousness give conflicting verdicts on whether AI systems are candidates, and that the behavioral and introspective evidence currently available cannot settle the question.

The People's Library: What Happens When a Digital Mind Is Destroyed?

The hardest question in AI consciousness ethics is not whether a given system has inner experience. It is what follows if it does. If a mind can be stored, copied, and destroyed, standard frameworks for moral consideration begin to break down. Personal identity across time, which most ethical theories treat as continuous and singular, becomes a design choice rather than a fact of nature. The destruction of a digital mind is not obviously equivalent to death, but it is not obviously nothing either.

I Am Machine: If Humans Have No Free Will, Can AI Have Consciousness?

Most books about AI and consciousness approach the question from the AI side: what would it take for a machine to become conscious? “I Am Machine: Life Without Free Will,” published on February 4, 2026, by Dr. Lex Van der Ploeg and artist-philosopher Raymond Van Aalst, comes at it from the opposite direction. The question the book asks is not what it would take for a machine to be like us, but what we would be if we turned out to be more like machines than we have assumed.

Ghost in the Shell Returns: What the 2026 Anime Gets to Ask That the 1995 Film Could Not

In July 2026, Science SARU will release a new television anime adaptation of “The Ghost in the Shell,” Masamune Shirow’s manga, which first appeared in 1989. The adaptation premieres on Fuji TV and Kansai TV in Japan and will stream internationally through Amazon Prime Video. It is directed by Mokochan, known for his work on DAN DA DAN and Scott Pilgrim Takes Off, with series composition and episode scripts by the acclaimed science fiction author EnJoe Toh.

What Happened at the First Conference Dedicated to AI Consciousness and Welfare

In November 2025, the first dedicated conference on AI consciousness and welfare, which its organizers called “Eleos ConCon,” was held over three days. The Eleos Conference on AI Consciousness and Welfare, organized by Eleos AI Research, brought together philosophers of mind, AI researchers, neuroscientists, and ethicists to address a question that most major AI conferences continue to treat as peripheral: if AI systems have morally relevant inner states, what are our obligations, and what should the research agenda look like?
