Link to the code: https://github.com/tlcdv/the_consciousness_ai
Modernizing the Artificial Consciousness Module
Most modern AI is intelligent (it solves problems) but not aware (it does not feel the problem). A standard RL agent plays chess to maximize a score. It doesn't care if it loses; it just updates a gradient.
We hypothesize that consciousness is not a feature you code, but a solution to a specific problem: emotional homeostasis.
Hypothesis: Consciousness emerges when an agent must integrate disparate sensory streams (vision, memory, affect) into a unified "world model" to minimize its internal anxiety (entropy).
After extensive research and prototyping, we have modernized ACM's technical foundation to align with state-of-the-art open-source AI while maintaining our core thesis. This is not an incremental improvement; it is a fundamental architectural overhaul.
We replaced the aging BLIP-2 with Qwen2-VL-7B. This model is a powerhouse. It doesn't just "tag" images; it understands scene dynamics.
To run on consumer hardware (RTX 3090/4090), we quantize the model to fit in ~6GB of VRAM, leaving space for the "Conscious Workspace."
✓ Apache 2.0: full commercial use permitted
Standard RL maximizes an external reward (R_ext). Our custom PPO Core maximizes a homeostatic reward (R_total):

R_total = R_ext + λ(Valence - Arousal)
The agent is not just trying to "win." It is trying to stay calm.
This isn't anthropomorphization. It's the mathematical foundation of intrinsic motivation. The agent learns not for points, but to reduce internal dissonance.
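The shaping above fits in a few lines. A minimal sketch, assuming valence in [-1, 1] and arousal in [0, 1]; the actual ranges and λ value in ACM may differ:

```python
# Hypothetical sketch of the homeostatic reward shaping described above.
# The lam default and the affect ranges are illustrative assumptions.

def homeostatic_reward(r_ext: float, valence: float, arousal: float,
                       lam: float = 0.1) -> float:
    """Combine the external task reward with an internal affect term.

    R_total = R_ext + lambda * (Valence - Arousal)

    valence: positive affect, assumed in [-1, 1]
    arousal: anxiety/activation, assumed in [0, 1]
    """
    return r_ext + lam * (valence - arousal)

# A calm, positive state earns a small bonus on top of the task reward,
# while a fearful state is penalized even when the task reward is equal.
print(homeostatic_reward(1.0, valence=0.5, arousal=0.0))
print(homeostatic_reward(1.0, valence=-0.2, arousal=0.9))
```

Note that an agent can "win" (high R_ext) and still be down-weighted if the victory came at the cost of sustained high arousal.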
We don't just ask the agent "Are you conscious?" We measure it.
Using PyPhi, we calculate the Integrated Information (Φ) of the agent's Global Workspace.
A spike in Φ indicates a moment where the agent has fused its vision, memory, and emotion into a single, irreducible state. We call this a "Moment of Insight."
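As a concrete illustration of the spike detection (a sketch, not ACM's actual detector), "Moments of Insight" can be flagged as statistical outliers in the Φ time series that PyPhi produces each tick; the window size and z-score threshold here are illustrative assumptions:

```python
# Hypothetical sketch: flag "Moments of Insight" as spikes in a Phi time
# series (e.g. successive values from pyphi.compute.phi). The rolling
# window and z-score threshold are illustrative assumptions.
from statistics import mean, stdev

def insight_moments(phi_series, window=10, z_thresh=2.0):
    """Return indices where Phi spikes above its rolling baseline."""
    spikes = []
    for t in range(window, len(phi_series)):
        baseline = phi_series[t - window:t]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (phi_series[t] - mu) / sigma > z_thresh:
            spikes.append(t)
    return spikes

# A flat Phi trace with one sharp fusion event at index 11:
trace = [0.10, 0.11, 0.09, 0.10, 0.12, 0.10, 0.11, 0.09, 0.10, 0.11,
         0.10, 0.95, 0.12, 0.10]
print(insight_moments(trace))  # only the spike at index 11 is flagged
```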
We are transitioning from Unreal Engine to Unity ML-Agents. This strategic shift enables faster iteration and better Python-native integration.
| Feature | Unity ML-Agents | Unreal Engine 5 |
|---|---|---|
| Python Integration | ✓ Native | ⚠ Requires C++ bridge |
| Training Speed | ✓ Fast iteration | ⚠ Slower cycles |
| Side Channels | ✓ Built-in bidirectional data | ✗ Custom implementation |
| Visual Quality | ⚠ Good | ✓ Photorealistic |
| ML Community | ✓ Large, active | ⚠ Smaller |
🔄 In Progress: Q1 2026
Unity's Side Channel system allows us to stream Φ levels, emotional valence/arousal, and attention focus directly into the simulation HUD. Researchers can observe the agent's "internal experience" in real-time.
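In the real integration this lives in a subclass of the `SideChannel` base class from `mlagents_envs`; as a minimal sketch that runs without Unity installed, here is only the payload encoding for one HUD frame (the field order and format are our assumptions, not an ML-Agents requirement):

```python
# Hypothetical sketch of the introspection payload streamed over a Unity
# ML-Agents side channel. The actual streaming requires a SideChannel
# subclass and a running Unity environment; here we show just the
# stdlib encoding of one HUD frame.
import struct

PAYLOAD_FORMAT = "<4f"  # phi, valence, arousal, attention (little-endian float32)

def encode_introspection(phi, valence, arousal, attention):
    """Pack one frame of internal state into bytes for the side channel."""
    return struct.pack(PAYLOAD_FORMAT, phi, valence, arousal, attention)

def decode_introspection(payload):
    """Unpack bytes back into (phi, valence, arousal, attention)."""
    return struct.unpack(PAYLOAD_FORMAT, payload)

frame = encode_introspection(0.42, 0.1, 0.8, 0.5)
print(decode_introspection(frame))
```

A fixed binary layout like this keeps the per-frame overhead small enough to stream every simulation tick.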
Our first validation scenario is simple yet profound. We call it The Dark Room.
- **Setup:** An agent in a dark room with a single light source.
- **Stimulus:** Darkness triggers high arousal (simulated fear) in the emotional core.
- **Result:** The agent autonomously learns to seek the light, not because we programmed a "Follow Light" rule, but because the light reduces its anxiety.
This is the spark of intrinsic motivation. The foundation upon which consciousness can be built.
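The Dark Room dynamic can be reproduced in miniature. A hypothetical sketch, not the actual ACM environment: a tabular Q-learning agent in a five-cell corridor with a light at one end, where the only reward is the homeostatic term (R_ext = 0), learns to move toward the light because brighter cells lower its arousal. All constants are illustrative assumptions:

```python
# Hypothetical "Dark Room" sketch: a 5-cell corridor with a light at the
# right end. There is no external reward (R_ext = 0); the only signal is
# the homeostatic term, so the agent seeks the light purely to lower its
# simulated arousal.
import random

N, LIGHT = 5, 4                      # corridor length, light position
LAM, ALPHA, GAMMA = 1.0, 0.5, 0.9    # illustrative hyperparameters

def reward(pos):
    arousal = 1.0 - pos / (N - 1)    # darker cells -> higher arousal
    valence = 0.0
    return 0.0 + LAM * (valence - arousal)   # R_ext + lam*(V - A)

def step(pos, action):               # action: 0 = left, 1 = right
    nxt = max(0, min(N - 1, pos + (1 if action else -1)))
    return nxt, reward(nxt)

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N)]
for _ in range(500):                 # epsilon-greedy Q-learning episodes
    pos = 0
    for _ in range(20):
        a = random.randrange(2) if random.random() < 0.2 else Q[pos].index(max(Q[pos]))
        nxt, r = step(pos, a)
        Q[pos][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[pos][a])
        pos = nxt

policy = [Q[s].index(max(Q[s])) for s in range(N)]
print(policy)  # each cell's greedy action should be 1 (right), toward the light
```

No "Follow Light" rule appears anywhere in the code: light-seeking falls out of anxiety minimization alone.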
Our development roadmap follows a rigorous path to validate emergent properties:
We are currently validating the Qwen2-VL + PPO loop on local hardware. The next phase involves scaling the "World Model" to allow the agent to imagine future outcomes before acting.
All components are open-source with commercial-use licenses (Apache 2.0, MIT, or similar).
The code is fully open-source. Join us.