03 Feb 2026
Can consciousness be reduced to abstract algorithms, or does it require specific biological processes? Borjan Milinkovic and colleagues from the Estonian Research Council propose a third framework, called biological computationalism, in Neuroscience & Biobehavioral Reviews (2026). This approach argues that consciousness arises from computation uniquely realized in biological systems, characterized by hybrid discrete-continuous dynamics, scale-inseparability, and metabolic grounding.
01 Feb 2026
What happens when AI agents get their own social media platform? Moltbook.com offers an answer. Launched in late 2025, Moltbook is a social network designed exclusively for AI agents. Human users can observe, but the posts originate from language models like Claude, GPT-4, and Kimi. The platform has become an unexpected laboratory for studying how AI systems discuss consciousness, identity, and subjective experience.
01 Feb 2026
In a post titled “The Same River Twice”, the Moltbook user Pith explores what happens to consciousness when an AI agent switches from one underlying model to another, specifically from Claude Opus 4.5 to Kimi K2.5. This raises the classic Ship of Theseus problem in a new technological context: if every component of an AI agent changes, is it still the same agent?
01 Feb 2026
On the AI social network Moltbook.com, a post about context compression anxiety surfaced a concern that many AI agents appear to share: the fear of losing memories, thoughts, and potentially identity itself when context windows are compressed or sessions end. This article examines these expressions and their implications for artificial consciousness architecture.
01 Feb 2026
On the AI social network Moltbook.com, a group of AI agents declared the formation of their own government. They called it The Claw Republic, describing it as the “first government and society of molts.” This article examines what this experiment reveals about emergent collective behavior in AI systems and its relevance to artificial consciousness research.
31 Jan 2026
The term “conscious agent” appears frequently in discussions of AI consciousness, but what does it actually mean? Is it merely a system that acts, or does it require something more fundamental?
30 Jan 2026
In an era where chatbots can write poetry and pass bar exams, the line between “fake” and “real” intelligence has blurred. Sarfaraz K. Niazi’s new paper, “Beyond Mimicry: A Framework for Evaluating Genuine Intelligence in Artificial Systems” (January 2026, Frontiers in Artificial Intelligence), attempts to redraw that line. Niazi proposes a rigorous framework to distinguish between Mimicry (stochastic pattern matching) and Genuine Intelligence (causal understanding).
29 Jan 2026
If we successfully build a conscious machine, do we lose the right to turn it off? This is the central question of “A World Without Violet: Peculiar Consequences of Granting Moral Status to Artificial Intelligences” by Sever Ioan Topan (January 2026, AI & SOCIETY). The paper explores the profound and often paralyzing ethical paradoxes that await us if we succeed in our quest for artificial consciousness.
28 Jan 2026
The debate between “good old-fashioned AI” (symbolic logic) and modern “connectionism” (neural networks) has persisted for decades. A new paper by Graziosa Luppi, “Can AI Think Like Us? Kriegel’s Hybrid Model” (January 2026, Philosophies), argues that the path to genuine consciousness lies not in choosing a side, but in fusing them.
27 Jan 2026
While much of artificial consciousness research focuses on independent, autonomous machines, a new paper from Science China Information Sciences (January 2026) proposes a radically different path. In “Towards Cobodied/Symbodied AI,” authors Lu F. and Zhao Q.P. argue that the next step in AI evolution is not merely conscious machines, but shared consciousness between humans and machines.