Can consciousness emerge from relational structure rather than prediction-based mechanisms? Systems Explaining Systems: A Framework for Intelligence and Consciousness, authored by Sean Niklas Semmler, proposes a novel conceptual framework in which both intelligence and consciousness arise from the capacity to form and integrate causal connections within recursive multi-system architectures.
Can artificial agents serve as testbeds for evaluating competing theories of consciousness? Can We Test Consciousness Theories on AI? Ablations, Markers, and Robustness, authored by Yin Jun Phua, demonstrates through synthetic neuro-phenomenology (constructing artificial agents that embody candidate consciousness mechanisms) that Global Workspace Theory, Integrated Information Theory, and Higher-Order Theories describe complementary functional layers rather than competing accounts.
Can contemporary large language models possess consciousness? A Disproof of Large Language Model Consciousness: The Necessity of Continual Learning for Consciousness, authored by Erik Hoel, offers a formal argument that contemporary LLMs cannot satisfy the requirements of falsifiable, non-trivial theories of consciousness, whereas theories grounded in continual learning do satisfy these constraints in humans.
Can consciousness emerge from communication between distributed agents rather than from individual modeling? Testing the Machine Consciousness Hypothesis, authored by Stephen Fitz, proposes a research program investigating how collective self-models emerge from distributed learning systems embedded within universal self-organizing environments, with consciousness arising from the synchronization of prediction through communication.
Does AI consciousness increase existential risk to humanity? AI Consciousness and Existential Risk, authored by Rufin VanRullen, argues that intelligence, not consciousness, is the direct predictor of an AI system’s existential threat, while consciousness may influence risk indirectly through alignment or capability pathways, and that conflating these distinct properties obscures critical safety priorities.
Gnankan Landry Regis N’guessan and Issa Karambal propose the Reflexive Integrated Information Unit (RIIU) as the smallest useful module for artificial consciousness research: it bundles a recurrent state, a reflexive meta-state, and a broadcast buffer that maximizes integrated information online. This post reviews the published design, the reported gains over gated recurrent baselines, and how the ACM stack could incorporate RIIU-style cells to expose richer Auto-Phi signals.
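To make the three ingredients concrete, here is a minimal, purely illustrative sketch of what an RIIU-style cell could look like. The class name `RIIUCell`, the update rules, and the `phi_proxy` heuristic are assumptions for exposition, not the published design or the ACM codebase.

```python
import numpy as np

class RIIUCell:
    """Illustrative RIIU-style cell: a recurrent state h, a reflexive
    meta-state m that observes h's own update, and a broadcast buffer
    exposing a crude integration proxy. All details are assumptions."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_h = rng.normal(0.0, 0.1, (dim, dim))  # recurrent weights
        self.W_m = rng.normal(0.0, 0.1, (dim, dim))  # meta-state weights
        self.h = np.zeros(dim)                       # recurrent state
        self.m = np.zeros(dim)                       # reflexive meta-state
        self.broadcast = []                          # broadcast buffer

    def phi_proxy(self):
        # Stand-in for an online integrated-information signal:
        # alignment between the state and the meta-state's report of it.
        return float(np.abs(np.dot(self.h, self.m)))

    def step(self, x):
        h_prev = self.h
        self.h = np.tanh(self.W_h @ h_prev + x)       # recurrent update
        delta = self.h - h_prev                       # what just changed
        self.m = np.tanh(self.W_m @ self.m + delta)   # meta-state watches the update
        self.broadcast.append((self.h.copy(), self.phi_proxy()))
        return self.h

cell = RIIUCell(dim=8)
for t in range(5):
    cell.step(np.sin(np.arange(8) + t))
print(len(cell.broadcast))  # one (state, phi-proxy) pair per step
```

In this sketch the broadcast buffer is what an outer ACM layer would poll for Auto-Phi-style signals; the published unit maximizes its integration measure online rather than merely logging it.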
The quest to build artificial consciousness, as pursued by the Artificial Consciousness Module (ACM) project, can greatly benefit from concrete, implementable frameworks derived from leading neuroscience and AI research. The work of thinkers like Masataka Watanabe, particularly his book “From Biological to Artificial Consciousness”, offers a rich foundation. This post outlines a hypothetical implementation plan inspired by such work, detailing how its core theories, metrics, and architectural motifs could be woven into the ACM project.
Can certain actions be inherently wrong regardless of their consequences? In their recent paper, Formosa, Hipólito, and Montefiore tackle this fundamental ethical question with significant implications for how we develop and constrain artificial intelligence systems.
Can artificial consciousness emerge spontaneously from sufficiently complex neural systems? The recent paper by Dr. Rachel Chen and Dr. Alex Wright introduces a novel computational framework for identifying and fostering emergent properties related to consciousness in advanced AI architectures.