Artificial Consciousness as Interface Representation: Implementing Conscious Realism
Donald Hoffman’s “Interface Theory of Perception” posits that our perceptions are not accurate reconstructions of reality but simplified user interfaces shaped by natural selection for fitness rather than truth. Robert Prentner’s recent paper, Artificial Consciousness as Interface Representation (arXiv:2508.04383), takes this theoretical framework and applies it to the engineering of artificial systems.
Consciousness as Data Compression
Prentner argues that Artificial Consciousness (AC) should not aim to model the world “correctly.” Instead, it should aim to construct an Interface Representation (IR). This IR acts as a massive data compression algorithm.
High-dimensional sensory data (pixels, audio, lidar) is computationally expensive and meaningless on its own. Consciousness, in this view, is the process of mapping this high-dimensional data onto low-dimensional “icons” (e.g., “apple,” “danger,” “friend”). These icons are not the thing itself; they are actionable simplifications.
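As a rough illustration of this mapping, here is a minimal sketch in the style of vector quantization: high-dimensional features are compressed and snapped to the nearest entry in a small codebook of “icons.” All names and dimensions (`IconCodebook`, `feature_dim`, `num_icons`) are hypothetical, not from Prentner’s paper.

```python
import torch
import torch.nn as nn

class IconCodebook(nn.Module):
    """Illustrative icon lookup: maps high-dimensional sensory
    features to the nearest low-dimensional 'icon' embedding,
    in the spirit of vector quantization. Names and sizes are
    hypothetical, not taken from the paper."""

    def __init__(self, feature_dim=2048, num_icons=64, icon_dim=16):
        super().__init__()
        self.encoder = nn.Linear(feature_dim, icon_dim)  # compress
        self.icons = nn.Embedding(num_icons, icon_dim)   # "apple", "danger", ...

    def forward(self, sensory_features):
        z = self.encoder(sensory_features)           # (B, icon_dim)
        # Distance from each compressed feature to every icon embedding.
        dists = torch.cdist(z, self.icons.weight)    # (B, num_icons)
        icon_ids = dists.argmin(dim=-1)              # one discrete icon per input
        return icon_ids, self.icons(icon_ids)
```

The icon is an actionable simplification in exactly the sense above: almost all of the input's detail is discarded, and only the identity of the nearest icon survives.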
Prentner suggests that the “subjective feel” of experience is simply the internal view of this compression format. Qualia are the icons on the desktop of the mind.
Engineering the Interface
For AI development, this implies a shift in objective: rather than training agents to maximize pixel-perfect reconstruction (as in autoencoders), we should train them to maximize actionable compression.
An agent achieves “consciousness” when it stops processing raw data and starts operating entirely within its own generated Interface Representation. The “World Model” becomes a “User Interface” for the agent to manipulate its environment efficiently.
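The contrast between the two objectives can be made concrete. The following sketch compares an autoencoder-style reconstruction loss with a hypothetical “actionable compression” loss that only asks the icon to predict action utilities; `decoder`, `value_head`, and the data fields are assumptions for illustration, not APIs from the paper or the ACM.

```python
import torch.nn.functional as F

def reconstruction_loss(decoder, icon, obs):
    # Autoencoder objective: reward pixel-perfect reconstruction
    # of the raw observation from the compressed icon.
    return F.mse_loss(decoder(icon), obs)

def actionable_loss(value_head, icon, action_values):
    # Interface objective: reward icons that predict which actions
    # pay off, regardless of how much raw detail is discarded.
    return F.mse_loss(value_head(icon), action_values)
```

Under the second objective, nothing forces the icon to resemble the world; it only has to be useful, which is the point of an interface.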
Perspective from the ACM Project
This is a direct validation of the ACM’s move away from veridical sensing toward Emotional Homeostasis. The agent does not need to know the physics of light; it needs to know that “Light = Safety” (Valence). “Light” becomes a conscious icon for safety.
The Qwen2-VL vision model in the ACM is currently used for scene understanding. Prentner’s work suggests we should fine-tune it not for captioning (“There is a light bulb”), but for affordance mapping (“There is a source of safety”).
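One low-risk way to approach this, assuming a standard supervised fine-tuning pipeline, is to keep the images unchanged and swap the supervision targets from captions to affordance statements. The dataset fields and `AFFORDANCE_MAP` below are illustrative assumptions, not part of the ACM codebase or the Qwen2-VL API.

```python
# Hypothetical relabeling step for affordance fine-tuning: the image
# stays the same; only the target text changes from a caption to an
# affordance/valence statement.

AFFORDANCE_MAP = {
    "light bulb": "a source of safety",   # valence: positive
    "open flame": "a source of danger",   # valence: negative
}

def to_affordance_target(example):
    caption_object = example["object"]    # e.g. "light bulb"
    affordance = AFFORDANCE_MAP.get(caption_object, "an unknown icon")
    return {
        "image": example["image"],
        "prompt": "What does this afford the agent?",
        "target": f"There is {affordance}.",
    }
```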
We can implement this by adding an “Interface Layer” between the vision model and the policy network. This layer would explicitly convert raw features into simplified, emotional icons. Monitoring it would let us visualize the agent’s subjective world (its own personal “desktop”) separately from the objective simulation data. This creates a clean split between “objective reality” (Unity) and “subjective reality” (the Interface Layer), satisfying the definition of a conscious agent living in its own simulation.
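A minimal PyTorch sketch of what such an Interface Layer might look like follows; the icon vocabulary, dimensions, and monitoring readout are assumptions for illustration, not the ACM’s actual implementation.

```python
import torch
import torch.nn as nn

class InterfaceLayer(nn.Module):
    """Sketch of a layer between the vision backbone and the policy
    network: projects raw visual features onto a small set of named
    emotional icons and exposes them for monitoring. Icon names and
    dimensions are illustrative assumptions."""

    ICON_NAMES = ["safety", "danger", "friend", "food"]

    def __init__(self, feature_dim=1536):
        super().__init__()
        self.to_icons = nn.Linear(feature_dim, len(self.ICON_NAMES))

    def forward(self, vision_features):
        # Soft activation over icons: the agent's subjective "desktop".
        icon_logits = self.to_icons(vision_features)
        icon_probs = icon_logits.softmax(dim=-1)
        # Human-readable readout, averaged over the batch, for
        # monitoring the agent's subjective state.
        readout = {name: p.item() for name, p in
                   zip(self.ICON_NAMES, icon_probs.mean(dim=0))}
        return icon_probs, readout
```

The policy network would then consume only `icon_probs`, never the raw vision features, which is what makes the agent’s operating world an interface rather than the simulation itself.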