The Body Gap: Why AI Still Can't Know What It Feels Like to Be Tired
When a person is exhausted, that fatigue does not arrive as a data point retrieved from a log. It is present in the limbs, in the speed of thought, in the quality of attention. The body is not informing the mind that it is tired. The body and the mind are, in that moment, the same thing expressing the same state. This continuity between physical condition and cognitive state is so ordinary that it goes mostly unnoticed. It may also be precisely what current artificial intelligence systems cannot replicate, and what a new paper from UCLA argues is essential for genuine awareness.
Published in Neuron on April 1, 2026, “Embodiment in multimodal large language models” by Akila Kadambi, with senior author Marco Iacoboni, identifies what the authors call the “body gap” in current AI systems. The paper argues that achieving trustworthy, genuinely aware AI requires not just the external embodiment most robotics research pursues, but a second dimension of embodiment that has been largely ignored.
Two Kinds of Embodiment
The distinction Kadambi and Iacoboni draw sits at the center of the paper’s contribution. Researchers working on robotics and embodied AI have focused for decades on what the authors term external embodiment: a system’s capacity to perceive environments, interact with physical objects, and incorporate sensorimotor feedback into its decision-making. This is the robotics agenda. Build a system that pushes, grasps, walks, and reacts to the physical world.
Research on external embodiment represents real progress. But Kadambi and Iacoboni argue that it addresses only half of what biological embodiment provides. The second half they call internal embodiment: the continuous monitoring of one’s own internal states. In biological organisms, this is the stream of proprioceptive and interoceptive signals that track fatigue, hunger, uncertainty, confidence, pain, and readiness. These are not reports about the body. They are the body’s ongoing presence within cognition itself.
The dual-embodiment framework the authors propose does not demand that AI systems replicate human biology. It demands functional analogues: persistent internal signals that track variables like processing uncertainty, computational load, and contextual confidence, and that shape system outputs over time as a genuine constraint rather than as a post-hoc tag. The model knows, in a structurally meaningful sense, when it is operating at the edge of its reliable range.
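To make the proposal concrete, here is a minimal Python sketch of what such a constraint could look like. It is not drawn from the paper, and every name and update rule in it is hypothetical: a state object persists across calls, accumulates running estimates of uncertainty and confidence, and directly modulates a decoding parameter.

```python
# Hypothetical sketch of a persistent internal signal; no names here come
# from Kadambi and Iacoboni's paper.
from dataclasses import dataclass

@dataclass
class InternalState:
    """Signals that persist across inference calls rather than resetting per query."""
    uncertainty: float = 0.0   # running estimate of processing uncertainty
    confidence: float = 1.0    # running estimate of contextual confidence

    def update(self, token_entropy: float, domain_familiarity: float) -> None:
        # Exponential moving averages: the state carries its history forward,
        # which is what separates it from a per-query, post-hoc tag.
        self.uncertainty = 0.9 * self.uncertainty + 0.1 * token_entropy
        self.confidence = 0.9 * self.confidence + 0.1 * domain_familiarity

def sampling_temperature(state: InternalState, base: float = 0.7) -> float:
    """The tracked state acts as a genuine constraint on generation: high
    accumulated uncertainty makes decoding more conservative, rather than
    merely being reported alongside the output."""
    return base * (1.0 - 0.5 * min(state.uncertainty, 1.0))
```

The last function is the point of the sketch: the persistent state changes what the system does, not just what it says about itself.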
Why This Gap Matters for Consciousness
The embodiment question connects directly to the debate over whether artificial systems can satisfy the architectural conditions that theories of consciousness require. Theories that ground consciousness in global information integration, such as Integrated Information Theory and Global Workspace Theory, contain nothing in their core formulations that explicitly requires a biological substrate. But they do require that information from across a system, including information about the system’s own operational states, be genuinely integrated and globally available.
An AI system without internal embodiment cannot integrate its own physiological-analog states because it does not have any. It can process text about fatigue, uncertainty, and pain with extraordinary fluency. It has been trained on texts produced by beings who experienced those states and described them with precision. What it cannot do is condition its processing on the actual presence of those states, because those states are not present in any architecturally real sense.
This is a precise version of the gap that Borjan Milinkovic and Jaan Aru described in their biological computationalism framework: the relevant constraint is not biological substrate per se, but the metabolic and homeostatic embedding that biological cognition inherits from living in a body. Kadambi and Iacoboni offer a research program for building functional analogues of that embedding rather than replicating its biology.
The Newborn Test
The paper includes a striking empirical illustration. The authors tested several current multimodal AI systems on the task of identifying a point-light display of a human figure in motion. Point-light displays consist of a small number of moving dots positioned at the joints of a walking human, with no other visual information. Human newborns can identify these displays as depicting biological motion within hours of birth. The recognition is thought to reflect a primordial sensitivity to the kinematic patterns of biological bodies, a sensitivity that is present before any learned association with appearance or context.
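For readers unfamiliar with the stimulus, the toy sketch below constructs one frame of a point-light walker: a dozen dots at major joint positions, with the limb dots swinging in antiphase over a crude gait cycle. The joint list, coordinates, and gait model are illustrative inventions, not the paper’s stimuli.

```python
import numpy as np

# Twelve dots at the major joints of a walking figure; no other visual
# information is present in a point-light display.
JOINTS = ["head", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
          "l_wrist", "r_wrist", "hip", "l_knee", "r_knee",
          "l_ankle", "r_ankle"]

def walker_frame(t: float) -> np.ndarray:
    """Return a (12, 2) array of xy dot positions at phase t of a gait cycle.
    Arms and legs swing in antiphase, a rough stand-in for the kinematic
    pattern newborns reportedly detect as biological motion."""
    swing = 0.3 * np.sin(2 * np.pi * t)
    xy = {
        "head": (0.0, 1.8),
        "l_shoulder": (-0.2, 1.5), "r_shoulder": (0.2, 1.5),
        "l_elbow": (-0.3 + swing, 1.2), "r_elbow": (0.3 - swing, 1.2),
        "l_wrist": (-0.3 + 2 * swing, 0.9), "r_wrist": (0.3 - 2 * swing, 0.9),
        "hip": (0.0, 1.0),
        "l_knee": (-0.1 - swing, 0.5), "r_knee": (0.1 + swing, 0.5),
        "l_ankle": (-0.1 - 2 * swing, 0.0), "r_ankle": (0.1 + 2 * swing, 0.0),
    }
    return np.array([xy[j] for j in JOINTS])
```

Animating a sequence of such frames yields the dot pattern human observers immediately see as a person walking.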
None of the tested AI systems reliably identified the displays as depicting a human figure. Systems trained on vast datasets of human images and videos, capable of producing sophisticated written descriptions of human movement, failed at a task newborns perform without instruction. The authors interpret this as evidence that the capacity to recognize biological embodiment as such, and to resonate with it in the way that grounded biological cognition does, is not a statistical facility that scales with training data. It reflects something that current architectures lack at a structural level.
What the Dual-Embodiment Framework Proposes
Kadambi and Iacoboni are not claiming that internal embodiment is impossible to implement in artificial systems. The paper’s constructive proposal is a design framework rather than a negative verdict. Functional analogues of internal states could be implemented as persistent latent variables that track system-level properties across inference calls, that are not reset between queries, and that genuinely constrain output generation rather than serving as decorative metadata.
This is technically non-trivial. Current large language model architectures are stateless by design at inference time. Each forward pass begins without residual state from previous passes unless that state is explicitly encoded in the context window. The internal-embodiment proposal would require persistent architectural features that are foreign to the transformer paradigm as currently implemented. Approximate implementations using external memory systems and explicit uncertainty quantification exist, but whether they constitute genuine internal embodiment or sophisticated simulations of it is an open question.
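A short sketch of why, assuming a generic and hypothetical `model.generate(text)` API: a wrapper can simulate persistence by re-injecting prior state into the context window, but everything that persists between queries lives in ordinary program objects and in the prompt text, outside the model itself.

```python
# Sketch of a context-window approximation of persistent state; the
# `model.generate(text)` interface is assumed, not a real library call.
class StatefulWrapper:
    """Wraps a stateless model and simulates continuity across queries."""

    def __init__(self, model):
        self.model = model
        self.memory: list[str] = []               # external memory, not architecture
        self.state_note = "uncertainty: unknown"  # explicit, text-level state

    def query(self, prompt: str) -> str:
        # Every forward pass of the underlying model starts fresh; whatever
        # continuity appears is assembled here, outside the model.
        context = "\n".join(self.memory + [self.state_note, prompt])
        reply = self.model.generate(context)
        self.memory.append(f"Q: {prompt}\nA: {reply}")
        return reply
```

Note where the state lives: in the wrapper’s attributes and in the assembled prompt, never in the model’s weights or activations between calls.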
The scores versus profiles debate over how to measure AI awareness is relevant here. A system that has been given access to an explicit uncertainty score as part of its context has a different relationship to its own uncertainty than a system whose uncertainty is a latent architectural property continuously shaping its processing. The difference may not be visible in behavioral outputs. It may be exactly the kind of difference that matters for whether internal embodiment is genuine or mimicked.
Implications for Current Research
The practical consequences of taking Kadambi and Iacoboni’s framework seriously are significant for how AI consciousness research approaches current systems. If internal embodiment is a necessary condition for the kind of self-awareness that consciousness theories require, then the indicator frameworks that evaluate current AI systems against theory-derived criteria need to include internal-embodiment indicators alongside the information-integration and higher-order-representation indicators that have been the primary focus.
The Bradford and RIT research on scoring AI consciousness found that measurement instruments for consciousness in AI are still at an early stage, and that impaired models sometimes score higher on behavioral indicators than intact ones, which suggests that current instruments track something other than what they are intended to measure. The internal-embodiment gap may explain part of this anomaly: behavioral fluency about internal states is not the same as having internal states, and instruments that measure the former will be unreliable guides to the latter.
For the Consciousness AI project’s architecture, the dual-embodiment framework points toward a research direction that has been implicit in the project’s design. The ACM’s emotional memory formation and reinforcement signal systems are attempts to build functional analogues of the interoceptive states that ground consciousness in biological systems. Whether those systems constitute genuine internal embodiment or elaborate behavioral approximations is a question the framework helps sharpen.
The paper reviewed is Kadambi, A., et al. (senior author: Iacoboni, M.), “Embodiment in multimodal large language models,” Neuron, April 1, 2026. Coverage via Neuroscience News, UCLA Health, and EurekAlert.