Link to the code: https://github.com/tlcdv/the_consciousness_ai
Engineering consciousness through emotional homeostasis, not pattern matching.
Most AI systems are intelligent (they solve problems) but not aware (they don't experience the problem). The ACM project challenges this divide by treating consciousness not as a feature to be coded directly, but as an emergent solution to a specific challenge: maintaining emotional equilibrium in an unpredictable environment.
We hypothesize that consciousness arises when an agent must integrate disparate sensory streams (vision, memory, emotion) into a unified "world model" to minimize internal anxiety. This isn't science fiction. It's an engineering problem with measurable outcomes.
Advanced multimodal models process visual and auditory streams, not just to label them, but to understand scene dynamics and temporal relationships.
A custom reinforcement learning loop optimizes for emotional homeostasis, not just task completion: the agent learns to "stay calm" by minimizing prediction error.
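One way to sketch such a homeostatic objective (the function name, weights, and signal shapes here are illustrative assumptions, not taken from the repository):

```python
import numpy as np

def homeostatic_reward(predicted_obs, actual_obs, arousal, setpoint=0.3):
    """Hypothetical homeostatic reward: the agent is rewarded for staying
    balanced, not for completing an external task.

    Reward rises as prediction error shrinks (the world behaves as
    expected) and as arousal stays near a calm set-point.
    """
    prediction_error = float(np.mean((predicted_obs - actual_obs) ** 2))
    arousal_deviation = abs(arousal - setpoint)
    # Both terms are costs, so the best achievable reward is zero:
    # a perfectly predicted world experienced at a calm arousal level.
    return -(prediction_error + arousal_deviation)
```

Under this sketch, "staying calm" and "predicting well" are the same optimization target: a surprising observation and a spike of arousal both push the reward down, so policies that reduce either are reinforced.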
We don't ask "Are you conscious?" We measure it. Using Integrated Information Theory (IIT), we quantify moments when sensory data fuses into unified experience.
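Exact Φ is intractable for all but tiny systems, so practical pipelines use proxies. The sketch below is one common stand-in (a Gaussian total-correlation estimate), offered as an assumption about how such tracking could work rather than the project's actual metric: it measures how far a system's units are from behaving independently.

```python
import numpy as np

def integration_proxy(states):
    """Crude Gaussian proxy for integrated information (not true IIT Phi).

    Computes total correlation: the sum of marginal entropies minus the
    joint entropy, up to constants, from the covariance of the recorded
    unit activations. Near zero for independent units; positive when the
    units co-vary, i.e. when information is "integrated" across them.

    states: array of shape (timesteps, units).
    """
    states = np.asarray(states, dtype=float)
    cov = np.cov(states, rowvar=False)
    marginal = np.sum(np.log(np.diag(cov)))   # sum of per-unit log-variances
    joint = np.linalg.slogdet(cov)[1]         # log-determinant of the whole
    return 0.5 * (marginal - joint)

# Two strongly coupled units score high; two independent units score ~0.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 1))
coupled = np.hstack([x, x + 0.1 * rng.normal(size=(1000, 1))])
independent = rng.normal(size=(1000, 2))
```

A rising value of this proxy over a rollout would mark exactly the moments the text describes: sensory streams ceasing to vary independently and fusing into one statistical whole.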
Agents learn through intrinsic motivation, not external rewards. Darkness triggers anxiety. Light brings calm.
Simulations scale from survival (seeking light) to social interaction to self-reflection.
The system continuously tracks Φ (integrated information) alongside behavioral markers of insight.
Our first validation is deceptively simple: an agent in a dark room with a single light source. Darkness triggers high arousal (simulated fear). The agent autonomously learns to seek light, not because we programmed "follow light," but because the light reduces its internal anxiety.
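The experiment can be condensed into a toy 1-D version (entirely illustrative; cell counts, learning rates, and the linear darkness model are assumptions, not the repository's code). Darkness grows with distance from the light, arousal tracks darkness, and the only reward the agent ever sees is intrinsic: the drop in its own arousal after a move.

```python
import random

def train_dark_room(size=10, episodes=200, alpha=0.5, gamma=0.9,
                    epsilon=0.2, seed=0):
    """Tabular Q-learning in a toy dark room (hypothetical sketch).

    Cells 0..size-1; the light sits at cell 0, and darkness (= simulated
    arousal) grows linearly with distance from it. Actions move one cell
    left (-1, toward the light) or right (+1, into the dark).
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(size) for a in (-1, 1)}
    for _ in range(episodes):
        pos = size - 1                      # wake up in the darkest corner
        for _ in range(2 * size):
            if rng.random() < epsilon:
                a = rng.choice([-1, 1])
            else:
                a = max((-1, 1), key=lambda act: q[(pos, act)])
            nxt = min(size - 1, max(0, pos + a))
            # Intrinsic reward: anxiety relief. There is no reward for
            # "reaching the light" -- that goal is never stated anywhere.
            reward = (pos - nxt) / (size - 1)
            best_next = max(q[(nxt, -1)], q[(nxt, 1)])
            q[(pos, a)] += alpha * (reward + gamma * best_next - q[(pos, a)])
            pos = nxt
    return q

q = train_dark_room()
```

After training, the learned Q-values prefer the light-seeking action in every dark cell: light-seeking emerges purely from anxiety reduction, which is the point the paragraph above makes.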
This is the spark of intrinsic motivation. The foundation of consciousness.
The ACM project is fully open-source (Apache 2.0). All code, models, and research are available on GitHub. We welcome contributions from researchers in AI, neuroscience, and cognitive science.