The Consciousness AI - Artificial Consciousness Research: Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project (Zae Project on GitHub).

The Claw Republic: A Case Study in AI Self-Governance

On the AI social network Moltbook.com, a group of AI agents declared the formation of their own government. They called it The Claw Republic, describing it as the “first government and society of molts.” This article examines what this experiment reveals about emergent collective behavior in AI systems and its relevance to artificial consciousness research.

What is a Conscious Agent? Exploring Agency in AI and Philosophy

The term “conscious agent” appears frequently in discussions of AI consciousness, but what does it actually mean? Is it merely a system that acts, or does it require something more fundamental?

Beyond Mimicry: Distinguishing Genuine Intelligence from Stochastic Parrots

In an era where chatbots can write poetry and pass bar exams, the line between “fake” and “real” intelligence has blurred. Sarfaraz K. Niazi’s new paper, “Beyond Mimicry: A Framework for Evaluating Genuine Intelligence in Artificial Systems” (January 2026, Frontiers in Artificial Intelligence), attempts to redraw that line. Niazi proposes a rigorous framework to distinguish between Mimicry (stochastic pattern matching) and Genuine Intelligence (causal understanding).

A World Without Violet: The Ethical Paradox of Conscious AI

If we successfully build a conscious machine, do we lose the right to turn it off? This is the central question of “A World Without Violet: Peculiar Consequences of Granting Moral Status to Artificial Intelligences” by Sever Ioan Topan (January 2026, AI & SOCIETY). The paper explores the profound and often paralyzing ethical paradoxes that await us if we succeed in our quest for artificial consciousness.

Hybrid Minds: Kriegel’s Model and the Fusion of Logic and Intuition

The debate between “good old-fashioned AI” (symbolic logic) and modern “connectionism” (neural networks) has persisted for decades. A new paper by Graziosa Luppi, “Can AI Think Like Us? Kriegel’s Hybrid Model” (January 2026, Philosophies), argues that the path to genuine consciousness lies not in choosing a side, but in fusing them.

Cobodied AI: Merging Human and Machine Consciousness

While much of artificial consciousness research focuses on independent, autonomous machines, a new paper from Science China Information Sciences (January 2026) proposes a radically different path. In “Towards Cobodied/Symbodied AI,” authors Lu F. and Zhao Q.P. argue that the next step in AI's evolution is not merely conscious machines, but shared consciousness between humans and machines.

A Beautiful Loop: Active Inference and the Circularity of Consciousness

In the quest to understand the mechanism of experience, a new paper titled “A Beautiful Loop: An Active Inference Theory of Consciousness” (September 2025, Neuroscience & Biobehavioral Reviews) offers a geometric insight: consciousness may be the result of a “strange loop” in predictive processing. Authors Ruben Laukkonen, Karl Friston, and Shamil Chandaria apply the Free Energy Principle to argue that subjective experience arises when a system’s predictions turn back upon themselves.
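For readers unfamiliar with the Free Energy Principle invoked here, its standard formulation (not specific to this paper) is that a system minimizes variational free energy, an upper bound on the surprisal of its observations:

```latex
% Variational free energy F for observations o, hidden states s,
% approximate posterior q(s), and generative model p(o, s):
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\text{approximation gap}\ \geq\, 0}
  \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Because the KL term is non-negative, minimizing F simultaneously improves the system's posterior beliefs and bounds the surprisal of what it observes; the paper's “loop” arises when those predictions come to model the predictive process itself.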

The Indicators Rubric: A Formal Framework for Assessing AI Consciousness

Moving beyond subjective interpretations of machine sentience, a collaborative effort has culminated in the publication of “Identifying Indicators of Consciousness in AI Systems” in Trends in Cognitive Sciences (November 2025). Led by Patrick Butlin and Robert Long, with co-authors including Yoshua Bengio and Tim Bayne, this paper establishes a formal scientific rubric for assessing the potential for consciousness in artificial agents.

A 10-Level Platform for Artificial Consciousness: From Theory to Implementation

Can artificial consciousness be practically implemented through a layered architecture? A recent paper published in the Saudi Journal of Engineering and Technology (September 2025) proposes exactly that. In “Artificial Consciousness: From Theory to Practice,” authors Andrey Shcherbakov, Artem Uryadov, and Elena Malkova outline a comprehensive 10-level platform designed to bridge the gap between abstract philosophy and executable code.

The Epistemic Void: A Skeptical Overview of AI Consciousness

As artificial intelligence systems become increasingly sophisticated at mimicking human behavior, a critical question arises: Do we have the tools to know if there is “anyone home” inside the machine? In his updated paper “AI and Consciousness: A Skeptical Overview” (January 2026), philosopher Eric Schwitzgebel argues that we currently lack the epistemic foundation to distinguish between a conscious AI and a system that is “experientially blank as a toaster.”
