Modeling Layered Consciousness: A Multi-Agent Approach
Consciousness is often conceptualized as a unified phenomenon. However, a recent paper presented at an EMNLP 2025 workshop, Modeling Layered Consciousness with Multi-Agent Large Language Models (arXiv:2510.17844), argues for a layered approach. The authors propose that consciousness emerges from the interaction of multiple specialized agents, effectively creating a “society of mind” within a single system.
The Layered Architecture
The authors posit that a single LLM instance cannot capture the depth of conscious processing. Instead, they implement a hierarchical multi-agent system:
- Subconscious Agents: Specialized LLMs that handle specific tasks (perception, memory retrieval, language generation), with no direct communication channel to the top layer.
- Conscious Observer Agent: A central LLM that receives summarized outputs from the subconscious agents. Its role is not to perform tasks but to monitor, narrate, and direct the lower-level agents.
This structure mimics the biological reality where conscious awareness has limited access to the massive parallel processing of the brain. The “Conscious Observer” only sees the results, not the processes.
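To make the layering concrete, here is a minimal Python sketch of the two-layer flow. The class names, the one-sentence summarization step, and the `LLM` callable are illustrative assumptions for clarity, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Any text-completion callable (prompt in, text out) can serve as the LLM.
LLM = Callable[[str], str]

@dataclass
class SubconsciousAgent:
    """Specialized worker (perception, memory retrieval, generation, ...)."""
    name: str
    llm: LLM

    def process(self, stimulus: str) -> str:
        # Full reasoning stays inside this layer.
        full_trace = self.llm(f"[{self.name}] Process: {stimulus}")
        # Only a compressed summary crosses the layer boundary.
        return self.llm(f"Summarize in one sentence: {full_trace}")

@dataclass
class ConsciousObserver:
    """Top layer: monitors and narrates, but performs no task itself."""
    llm: LLM

    def narrate(self, summaries: dict[str, str]) -> str:
        report = "\n".join(f"{name}: {text}" for name, text in summaries.items())
        return self.llm(
            "You receive only summaries of lower-level work, not the raw "
            f"processes.\n{report}\nNarrate the system's current state."
        )

def step(observer: ConsciousObserver,
         workers: list[SubconsciousAgent], stimulus: str) -> str:
    summaries = {w.name: w.process(stimulus) for w in workers}
    return observer.narrate(summaries)
```

With a trivial stub such as `llm = lambda p: p[:60]`, `step(ConsciousObserver(llm), [SubconsciousAgent("perception", llm)], "a red ball")` runs end to end. The structural point is that `narrate` never sees `full_trace`, only the summaries dictionary.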
Emergent Self-Regulation
The study demonstrates that this layered separation allows for more stable self-regulation. When lower-level agents generate hallucinations or errors, the Observer agent, detached from the generation process, can identify and correct them. This error-correction loop functions as a rudimentary form of metacognition.
The system “becomes aware” of its errors not by checking a ground-truth database, but by observing its own outputs as if they were external stimuli. This aligns with Higher-Order Theories (HOT) of consciousness, which hold that a mental state becomes conscious only when there is a higher-order thought about that state.
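A rough sketch of how such a HOT-style check might look, reusing the `LLM` alias from the sketch above. The prompts and the way corrections are routed back down are assumptions; the paper's exact mechanism is not reproduced here.

```python
from typing import Callable

LLM = Callable[[str], str]  # same text-completion alias as above

def metacognitive_check(observer_llm: LLM, worker_output: str) -> str:
    """The Observer treats a worker's output as external text: it critiques
    the product without any access to the process that generated it."""
    verdict = observer_llm(
        "The following text arrived as an external observation. List any "
        f"unsupported claims or contradictions, or reply OK:\n{worker_output}"
    )
    if verdict.strip() == "OK":
        return worker_output
    # Route the critique back down as a correction directive, closing the loop.
    return observer_llm(
        f"Rewrite the text below to address these issues:\n{verdict}\n"
        f"---\n{worker_output}"
    )
```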
Perspective from the ACM Project
This layered approach directly supports the Global Workspace Network (GWN) architecture used in the ACM. The “Conscious Observer” acts as the Global Workspace, receiving broadcasts from specialized modules (Vision, Audio, Emotion).
The ACM can adopt one specific distinction from this work: the workspace should not be a passive blackboard but an active agent with its own distinct objective function. While sensory modules optimize for prediction accuracy, the Workspace agent should optimize for coherence and homeostasis.
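One way to make that concrete is a workspace that scores incoming broadcasts against its own objective before integrating them. The coherence function, the per-module arousal signal, and the homeostatic setpoint below are illustrative assumptions, not part of the ACM codebase or the paper.

```python
from typing import Callable

# (running narrative, candidate broadcast) -> coherence score, higher is better.
CoherenceFn = Callable[[str, str], float]

class ActiveWorkspace:
    """Workspace as an agent with its own objective: it selects which module
    broadcast to integrate, rather than passively accepting everything."""

    def __init__(self, coherence: CoherenceFn, arousal_setpoint: float = 0.5):
        self.narrative = ""
        self.coherence = coherence
        self.setpoint = arousal_setpoint

    def objective(self, candidate: str, arousal: float) -> float:
        # Reward coherence with the running narrative; penalize drift from
        # the homeostatic setpoint (an assumed internal arousal estimate).
        return self.coherence(self.narrative, candidate) - abs(arousal - self.setpoint)

    def integrate(self, broadcasts: dict[str, tuple[str, float]]) -> str:
        """broadcasts maps a module name (Vision, Audio, Emotion) to a
        (message, arousal estimate) pair; returns the winning module."""
        winner, (message, _) = max(
            broadcasts.items(),
            key=lambda item: self.objective(item[1][0], item[1][1]),
        )
        self.narrative += f"\n[{winner}] {message}"
        return winner
```

The design point is only that selection happens inside the workspace according to its own criterion; the sensory modules keep optimizing their local prediction objectives unchanged.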
This paper supports the view that “scaling up” a single model is less effective for consciousness than “scaling out” into specialized, interacting agents: emergence requires interacting parts, and a monolithic model exposes none. The ACM’s move toward a modular, multi-agent architecture in Unity is therefore theoretically sound.