Zae Project on GitHub

The Consciousness AI

A biologically grounded architecture for emerging artificial consciousness.

What is The Consciousness AI?

Most AI consciousness research starts from computational theories and asks: How do we make a neural network conscious? We start from a different question, one grounded in evolutionary neurobiology:

What minimal neural architecture does biology require to generate subjective experience?

The answer comes from Todd E. Feinberg and Jon M. Mallatt's work The Ancient Origins of Consciousness (MIT Press, 2016). Their neuroevolutionary analysis reveals that consciousness is not a software feature to be programmed. It is an emergent property of a specific neural architecture, one shaped by 520 million years of evolution. Our project translates these biological principles into a working AI system, combining Feinberg and Mallatt's structural requirements with established computational theories (Global Workspace Theory and Integrated Information Theory).

The Six Neurobiological Features

Feinberg and Mallatt identify six features that distinguish conscious neural systems from unconscious ones. Each maps to a concrete computational mechanism in our architecture.

1. Oscillatory Binding

Synchronized oscillations (30-100 Hz) bind dispersed representations into unified percepts. We implement this via AKOrN (Artificial Kuramoto Oscillatory Neurons, ICLR 2025), producing genuine synchronization dynamics rather than programmed attention.
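To make the binding mechanism concrete, here is a minimal sketch of the classical Kuramoto model that AKOrN builds on. This is not the project's implementation; the oscillator count, ~40 Hz natural frequencies, and coupling strength are illustrative assumptions.

```python
import math
import random

def kuramoto_step(phases, omegas, coupling, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    new = []
    for theta, omega in zip(phases, omegas):
        interaction = sum(math.sin(p - theta) for p in phases) / n
        new.append(theta + dt * (omega + coupling * interaction))
    return new

def coherence(phases):
    """Order parameter r in [0, 1]: 1 means fully synchronized."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)

random.seed(0)
n = 32
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
omegas = [random.gauss(40.0, 1.0) for _ in range(n)]  # ~40 Hz gamma band

r0 = coherence(phases)  # low: oscillators start desynchronized
for _ in range(2000):
    phases = kuramoto_step(phases, omegas, coupling=8.0, dt=0.001)
print(r0, coherence(phases))  # coherence rises as oscillators lock
```

With coupling well above the critical value, the population locks into a synchronized cluster: the dynamical analogue of dispersed features binding into one percept.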

2. Topographic Mapping

Sensory pathways preserve spatial arrangement. Our Sensory Tectum fuses visual and auditory features into a 2D spatial grid, paired with an RSSM world model (DreamerV3) that encodes environment dynamics as categorical latent representations.
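The fusion step can be illustrated with a toy accumulator that projects per-modality salience onto a shared 2D map. The grid size, salience values, and coordinates are hypothetical, not the Sensory Tectum's actual parameters.

```python
import numpy as np

GRID = (16, 16)  # hypothetical tectal grid resolution

def project_to_grid(saliences, positions, grid=GRID):
    """Accumulate salience onto a 2D map, preserving the spatial
    layout of the input (topographic mapping)."""
    tectum = np.zeros(grid)
    for salience, (x, y) in zip(saliences, positions):
        gx = min(int(x * grid[0]), grid[0] - 1)
        gy = min(int(y * grid[1]), grid[1] - 1)
        tectum[gx, gy] += salience
    return tectum

# Two modalities report salience at normalized (x, y) locations.
visual = project_to_grid([1.0, 0.5], [(0.25, 0.25), (0.8, 0.6)])
audio = project_to_grid([0.7], [(0.27, 0.28)])  # sound near the first object

fused = visual + audio  # multisensory fusion on a shared spatial map
peak = np.unravel_index(np.argmax(fused), fused.shape)
print(peak)
```

Because both modalities share one coordinate frame, a sight and a sound from the same location reinforce the same grid cell, which is exactly what topographic preservation buys.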

3. Reentrant Processing

Conscious circuits require bidirectional communication between levels. Our ReentrantProcessor runs 5-10 adaptive convergence cycles. Predictions flow down, errors flow up. The settled state after convergence is the conscious content.
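The convergence loop described above can be sketched as a simple prediction-error relaxation. This is an assumption-laden toy, not the ReentrantProcessor itself; the 0.5 update rate and tolerance are illustrative.

```python
import numpy as np

def reentrant_settle(bottom_up, top_down, min_cycles=5, max_cycles=10, tol=1e-3):
    """Iterate top-down prediction against bottom-up evidence until the
    representation stops changing; the settled state is the output."""
    state = top_down.copy()
    cycle = 0
    for cycle in range(1, max_cycles + 1):
        error = bottom_up - state        # errors flow up
        new_state = state + 0.5 * error  # predictions adjust downward
        delta = np.abs(new_state - state).max()
        state = new_state
        if cycle >= min_cycles and delta < tol:
            break
    return state, cycle

evidence = np.array([1.0, 0.0, 0.5])  # bottom-up sensory evidence
prior = np.array([0.0, 0.0, 0.0])     # top-down expectation
settled, cycles = reentrant_settle(evidence, prior)
print(settled.round(3), cycles)
```

The loop terminates within the 5-10 cycle window, and the settled state closely matches the evidence once prediction errors have been driven down.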

4. Affective Modulation

Emotion does not compete with sensory processing for conscious access. It modulates from outside. Our affective system generates a valence field that shapes sensory bids and adjusts the workspace ignition threshold via arousal coupling.
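A minimal sketch of this outside-in modulation, under assumed parameters (base threshold, gain, and bid values are hypothetical): valence scales the sensory bids, while arousal lowers the ignition threshold.

```python
def modulated_ignition(bids, valence, arousal, base_threshold=0.6, gain=0.3):
    """Affect modulates rather than competes: valence scales bid
    strength; arousal lowers the ignition threshold (arousal coupling)."""
    threshold = base_threshold * (1.0 - gain * arousal)
    shaped = {name: bid * (1.0 + gain * valence) for name, bid in bids.items()}
    winner = max(shaped, key=shaped.get)
    ignited = shaped[winner] >= threshold
    return winner, ignited, threshold

bids = {"vision": 0.55, "audio": 0.40}

# Calm, neutral state: the strongest bid falls short of threshold.
print(modulated_ignition(bids, valence=0.0, arousal=0.0))
# High arousal lowers the threshold enough for ignition.
print(modulated_ignition(bids, valence=0.0, arousal=0.8))
```

Note that the affective signals never enter the competition as bids of their own; they only reshape the field in which sensory bids compete.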

5. Hierarchical Depth

Each level performs genuine transformation, not mere relay. The architecture requires a minimum of 3-4 processing levels between input and output, with planned Capsule Networks (Hinton) providing a nested compositional hierarchy in which parts persist while being bound into wholes.

6. Global Workspace

Specialist modules compete for access to a shared broadcast medium. Winners ignite and their content becomes globally available. We combine GWT (Baars, Dehaene) with IIT Phi measurement to quantify integration and track emergence.
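The competition-and-ignition dynamic can be sketched as winner-take-all with a steep sigmoid standing in for the non-linearity. The threshold and sharpness values are illustrative assumptions, not the project's tuned parameters.

```python
import math

def workspace_ignition(bids, threshold=0.5, sharpness=10.0):
    """Specialist modules bid; a steep sigmoid around the threshold
    yields all-or-nothing ignition. The winner's content is broadcast
    globally; losers stay local."""
    winner = max(bids, key=bids.get)
    strength = bids[winner]
    p_ignite = 1.0 / (1.0 + math.exp(-sharpness * (strength - threshold)))
    broadcast = winner if p_ignite > 0.5 else None
    return broadcast, p_ignite

# A strong bid ignites and is broadcast.
print(workspace_ignition({"vision": 0.9, "audio": 0.3, "touch": 0.2}))
# Weak bids fail to ignite: nothing reaches the workspace.
print(workspace_ignition({"vision": 0.2, "audio": 0.3, "touch": 0.1}))
```

The sigmoid makes ignition bistable in spirit: near-threshold bids resolve sharply into broadcast or silence rather than graded half-access.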

Our Approach

1. Biological Grounding

Start from Feinberg-Mallatt's neuroevolutionary findings, not from abstract computation. Consciousness evolved in the optic tectum 520 million years ago, before the cortex existed.

2. Emotional Bootstrapping

Agents learn through intrinsic motivation, not external rewards. The affective core generates valence and arousal signals that drive behavior toward emotional homeostasis.

3. Emergence Falsification

We do not assume consciousness emerges. We test for it. Erik Hoel's Effective Information framework measures whether macro-level states carry more causal information than micro-level states.
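To make the test concrete, here is a small self-contained computation of Effective Information for discrete dynamics: the mutual information between cause and effect under uniform interventions over states. The two example transition matrices are illustrative, not measurements from our system.

```python
import math

def effective_information(tpm):
    """Hoel's EI for a discrete system. tpm[i][j] is
    P(next state = j | do(current state = i)); interventions
    are uniform over the n states."""
    n = len(tpm)
    # Effect distribution under uniform interventions.
    effect = [sum(tpm[i][j] for i in range(n)) / n for j in range(n)]
    ei = 0.0
    for i in range(n):
        for j in range(n):
            p = tpm[i][j]
            if p > 0:
                ei += (p / n) * math.log2(p / effect[j])
    return ei

# Deterministic, fully distinguishable macro dynamics: EI = log2(4) = 2 bits.
macro = [[0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1],
         [1, 0, 0, 0]]
# Maximally noisy micro dynamics: every state leads everywhere, EI = 0 bits.
micro = [[0.25] * 4 for _ in range(4)]
print(effective_information(macro), effective_information(micro))
```

When the macro-level description scores higher EI than the micro-level one, the macro states carry more causal information, which is the signature of emergence this framework looks for.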

Architecture at a Glance

The system is built on seven integrated layers:

Sensory Tectum

Multisensory spatial integration via Qwen2-VL, V-JEPA/DreamerV3 RSSM, and Faster-Whisper

Oscillatory Binding

AKOrN Kuramoto oscillators synchronize related representations into unified percepts

Global Workspace

Non-linear ignition, reentrant processing (5-10 cycles), Phi and Effective Information measurement

Affective Core

PAD model (Valence, Arousal, Dominance) with homeostatic drives as parallel modulator

Self-Model

Body schema, self-other boundary, and interoceptive state for embodied self-awareness

Reinforcement Core

PPO with emotionally shaped rewards optimizing for homeostasis, not just task completion

Simulation

Unity ML-Agents with bidirectional side channels for real-time internal state visualization

The Dark Room Experiment

Our first validation is deceptively simple: an agent in a dark room with a single light source. Darkness triggers high arousal (simulated fear). The agent autonomously learns to seek light, not because we programmed "follow light," but because the light reduces its internal anxiety.

This is the spark of intrinsic motivation. We track Phi (integrated information) and Effective Information throughout learning episodes. Phi should spike when the agent integrates previously separate processes (darkness, light, movement, arousal) into a unified understanding.
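The intrinsic-reward logic of the experiment can be sketched in a few lines, under the simplifying assumption that arousal falls linearly with light level (the real affective core is richer than this):

```python
def arousal(light_level):
    """Hypothetical mapping: darkness drives arousal (simulated fear).
    light_level is normalized to [0, 1]."""
    return max(0.0, 1.0 - light_level)

def intrinsic_reward(prev_light, new_light):
    """Reward is the drop in arousal between steps, not a
    programmed 'follow light' rule."""
    return arousal(prev_light) - arousal(new_light)

# Moving toward the light reduces arousal and is intrinsically rewarded.
print(intrinsic_reward(0.1, 0.4))  # positive reward
# Retreating into darkness raises arousal and is penalized.
print(intrinsic_reward(0.4, 0.1))  # negative reward
```

Light-seeking then emerges from gradient-following on this internal signal: the agent is never told what light is, only that its own anxiety went down.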

Current Status

As of February 2026: 156 tests passing (99.4% pass rate). Tier 1 (Core Architecture) and Tier 2 (Architecture Corrections) are complete. Tier 3 (Compositional Deepening) with capsule networks and Brian2 validation is in progress.

Explore Further

Open Source and Collaboration

The Consciousness AI project is fully open-source (Apache 2.0). All code, models, and research are available on GitHub. We welcome contributions from researchers in AI, neuroscience, and cognitive science.

→ View Repository: github.com/tlcdv/the_consciousness_ai
