The Consciousness AI: Artificial Consciousness Research
Emerging Artificial Consciousness Through Biologically Grounded Architecture
Zae Project on GitHub

Does Eon Systems' Fruit Fly Brain Emulation Bring Us Closer to Conscious Machines?

In the first week of March 2026, Eon Systems, a San Francisco startup focused on high-fidelity brain emulation, released a demonstration that spread quickly across X and AI research forums: a complete computational model of an adult fruit fly (Drosophila melanogaster) brain, with over 125,000 neurons and 50 million synaptic connections, operating inside a physics-simulated body. The virtual fly walked, groomed, and sought food. No reinforcement learning was involved. The behaviors emerged from neural circuits responding to sensory input in a closed loop, exactly as they do in the biological original. The question that followed the demonstration was inevitable: not “is this alive?” but “is this conscious?” That question is harder to answer than the viral response suggested, and the honest answer requires more care than either enthusiasts or skeptics have offered.

Fruit fly brain emulation at this fidelity is a genuine milestone for neuroscience. Whether it is a step toward artificial consciousness depends entirely on what you think consciousness requires, and on that question the field remains deeply divided.

What Eon Systems Actually Built

The technical foundation of the Eon demonstration is the FlyWire connectome, a complete wiring diagram of the adult female Drosophila melanogaster brain produced by the FlyWire Consortium and published in Nature in 2024. The project used electron microscopy to image 7,000 thin slices of fly brain tissue, with AI-assisted annotation to trace 139,255 neurons and approximately 50 million synaptic connections. As of this writing, it is the most complete connectome of any complex nervous system ever mapped.

Phil Shiu, then a UC Berkeley postdoctoral researcher and now a senior scientist at Eon, led the computational modeling work built on the FlyWire data. Shiu and colleagues, including Gabriella Sterne and Salil Bidaye, used a leaky integrate-and-fire model in which each neuron is classified as excitatory or inhibitory and fires when its summed synaptic input crosses a threshold. The resulting simulation achieved 95% accuracy in predicting motor behavior, including proboscis extension in response to sugar detection and grooming triggered by antennal stimulation. That static model was published in Nature in 2024. Eon’s March 2026 demonstration adds the component the original paper lacked: embodiment.
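The leaky integrate-and-fire abstraction can be illustrated with a minimal sketch. This is not Shiu's or Eon's code; it is a generic LIF network in which signed weights encode excitatory and inhibitory connections, membrane potentials leak toward rest each step, and a neuron spikes and resets when it crosses threshold:

```python
import numpy as np

def simulate_lif(weights, external_input, steps,
                 v_thresh=1.0, v_reset=0.0, leak=0.1):
    """Minimal leaky integrate-and-fire network sketch.

    weights[i, j]: synaptic weight from neuron j to neuron i
    (positive = excitatory, negative = inhibitory).
    external_input[t, i]: sensory drive to neuron i at step t.
    Returns a (steps, n) boolean spike raster.
    """
    n = weights.shape[0]
    v = np.zeros(n)                               # membrane potentials
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        v = (1.0 - leak) * v                      # leak toward rest
        if t > 0:
            v = v + weights @ spikes[t - 1]       # recurrent synaptic input
        v = v + external_input[t]                 # external sensory drive
        fired = v >= v_thresh
        v = np.where(fired, v_reset, v)           # reset neurons that spiked
        spikes[t] = fired
    return spikes

# Tiny demo: neuron 0 receives external drive and excites neuron 1.
w = np.array([[0.0, 0.0],
              [1.5, 0.0]])   # weights[i, j]: j -> i, so 0 excites 1
ext = np.zeros((3, 2))
ext[0, 0] = 1.2              # push neuron 0 above threshold at t=0
raster = simulate_lif(w, ext, steps=3)
```

The connectome supplies the `weights` matrix in the real model; here it is a two-neuron toy chosen only to make the excitatory chain visible in the raster.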

Using MuJoCo, a physics simulation engine standard in robotics research, the team connected the brain model to a virtual body capable of sensorimotor feedback. The result is a closed loop: the simulated body produces sensory signals that enter the neural model, and the model generates motor commands that move the body. Walking, grooming, and feeding behaviors emerge from this loop without any programmer specifying them as targets. Eon’s advisors include George Church and Stephen Wolfram. Co-founder Alex Wissner-Gross has stated that the company plans to scale next to a mouse brain at roughly 70 million neurons, followed by larger systems.
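The closed loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: the real system runs a connectome-derived neural model inside MuJoCo, while this toy replaces both brain and body with one-dimensional placeholders to show only the loop structure (sense, compute, act, advance physics):

```python
class ToyBody:
    """One-dimensional stand-in for the physics-simulated body."""
    def __init__(self):
        self.position = 0.0
        self.velocity = 0.0

    def step(self, motor_force, dt=0.01, damping=0.9):
        # Crude damped integration standing in for the physics engine.
        self.velocity = damping * self.velocity + motor_force * dt
        self.position += self.velocity * dt

    def sense(self, food_position):
        # Proprioception-like signal: signed distance to a food source.
        return food_position - self.position

def toy_brain(sensory_signal, gain=5.0):
    """Stand-in for the neural model: maps sensation to a motor command."""
    return gain * sensory_signal

def run_closed_loop(steps=2000, food_position=1.0):
    body = ToyBody()
    for _ in range(steps):
        sensation = body.sense(food_position)  # body -> sensory input
        command = toy_brain(sensation)         # neural model -> motor output
        body.step(command)                     # physics advances the body
    return body.position
```

The body converges on the food source without any explicit trajectory being programmed; the approach behavior emerges from the loop itself, which is the structural point of the demonstration.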

The previous landmark in whole brain emulation, the OpenWorm project, used the C. elegans worm connectome with just 302 neurons. DeepMind and Janelia Research Campus have produced embodied fly simulations, but those used reinforcement learning to drive a musculoskeletal model rather than directly mapping biological connectome data to neural activity. Eon’s approach is the first to pair full connectome-derived neural dynamics with a physics-simulated body at this scale.

For a detailed technical walkthrough of the simulation pipeline, see the coverage of this milestone on MindTransfer.me, a publication tracking whole brain emulation research.

The Gap Between Emulation and Experience

The behaviors the simulated fly produces are emergent in a meaningful technical sense. They were not programmed. They arise from biological wiring predictions applied to physics-constrained movement. This is not a trivial result. But the question of whether any of this is accompanied by subjective experience requires a different kind of argument.

Thomas Nagel’s 1974 paper “What Is It Like to Be a Bat?”, published in The Philosophical Review, drew a distinction that has not been resolved since. The defining feature of consciousness, Nagel argued, is the subjective character of experience. There is something it is like to see red or feel pain: these states have a qualitative, first-person aspect that is entirely absent from third-person functional descriptions of a system. The question for the Eon fly is not whether it walks or grooms. The question is whether anything is happening from the inside.

David Chalmers named this the hard problem of consciousness in his 1995 paper “Facing Up to the Problem of Consciousness” in the Journal of Consciousness Studies. Functional questions (how a system processes information, what outputs it produces, why it responds differently to different inputs) are “easy” problems in the sense that they admit mechanistic explanation. The hard problem is why any of that processing should be accompanied by subjective experience at all. Nothing in the FlyWire connectome, nothing in MuJoCo’s physics engine, and nothing in the leaky integrate-and-fire dynamics directly addresses this. The demonstration shows that biological circuits can be computationally approximated with high fidelity. It does not show that the approximation has an inner life.

This is not a dismissal of the work. It is a clarification of what the work proves. The gap between emergent behavioral outputs and phenomenal consciousness is exactly the gap the hard problem identifies, and emulation fidelity does not close it by itself.

What the Leading Theories Actually Require

Different theories of consciousness generate different and partly contradictory assessments of the Eon demonstration. This is informative precisely because it reveals where our theoretical frameworks disagree.

Giulio Tononi’s Integrated Information Theory (IIT), developed across a series of papers with Christof Koch and others at the University of Wisconsin, holds that consciousness is identical to integrated information, measured as phi (Φ). A system is conscious to the degree that it generates information beyond what its parts generate independently. Tononi and colleagues have argued that IIT assigns non-trivial phi values to insect nervous systems, consistent with the behavioral sophistication insects display. A real Drosophila brain likely has some measurable phi.
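Actual Φ is computed over a system's full cause-effect structure across all partitions and is far more involved than any single statistic. As a toy illustration of the "whole generates information beyond its parts" intuition (emphatically not real Φ), mutual information between two binary units already separates integrated from independent behavior:

```python
import math

def mutual_information(joint):
    """Mutual information (bits) of a 2x2 joint distribution over
    binary variables A and B: sum p(a,b) * log2(p(a,b) / (p(a)p(b)))."""
    pa = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]
    pb = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]
    mi = 0.0
    for a in range(2):
        for b in range(2):
            p = joint[a][b]
            if p > 0:
                mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two units that always agree: the whole carries a bit the parts do not.
coupled = [[0.5, 0.0], [0.0, 0.5]]
# Two independent coin flips: the parts already carry everything.
independent = [[0.25, 0.25], [0.25, 0.25]]

print(mutual_information(coupled))      # 1.0
print(mutual_information(independent))  # 0.0
```

IIT's claim is stronger than this cartoon: Φ measures intrinsic causal integration, not observed statistical dependence, which is exactly why the hardware question in the next paragraph matters.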

The critical question for IIT is whether the computational model preserves the causal structure that generates phi. IIT is explicit that phi depends on the intrinsic causal power of a system, not on what it represents or computes. A lookup table implementing the same input-output function as a neural circuit has zero phi because it lacks intrinsic integration. The leaky integrate-and-fire neurons in Eon’s model are simulated sequentially on standard digital hardware. Whether that simulation preserves the causal grain of biological neurons well enough to produce meaningful phi is an open question. Tononi’s own position, stated in multiple venues, is that standard digital computers running neural simulations have very low phi regardless of model sophistication, because the underlying hardware architecture is not intrinsically integrated in the relevant sense. If that view is correct, the Eon fly has functional behavior but no consciousness, even if the biological original has some.

Bernard Baars’s Global Workspace Theory (GWT), extended to artificial systems by researchers including Juliani and Kanai, asks whether a system has a central workspace that broadcasts information globally to specialized processors, enabling flexible and context-dependent responses. The Drosophila brain does have structures that function analogously to a global workspace, particularly the central complex, which integrates multimodal information and coordinates behavioral selection. Whether the Eon model captures this functional architecture faithfully enough to satisfy GWT’s requirements depends on how well the leaky integrate-and-fire abstraction preserves the dynamics of these circuits. The published work has not addressed this directly.

Attention Schema Theory, proposed by Michael Graziano at Princeton, holds that consciousness arises when a system builds an internal model of its own attention process. A system that lacks a self-model of its attention, even if it behaves appropriately, does not satisfy the conditions for consciousness under this framework. Whether a fly brain has an attention schema is itself an open question, let alone whether the computational emulation of its circuits produces one.

The Substrate Problem

The most skeptical position belongs to Anil Seth. In his 2024 paper “Conscious Artificial Intelligence and Biological Naturalism,” published in Behavioral and Brain Sciences, Seth argues that consciousness depends on the specific causal powers of biological mechanisms, not on the information-processing patterns they implement. Drawing on Searle’s biological naturalism, Seth holds that the electrochemical dynamics of biological neurons, their physical instantiation in living tissue, are not incidental to consciousness but constitutive of it. Running the same connectivity on a digital substrate cannot produce the same consciousness even if it produces the same behavior.

This argument has direct implications for the Eon demonstration. The leaky integrate-and-fire model is an abstraction. It does not replicate the biophysics of ion channels, the dynamics of neurotransmitter diffusion, or the metabolic coupling between neurons and glial cells. It replicates the logical structure of excitation and inhibition at a level of abstraction that is sufficient to predict behavioral outputs. Whether it is sufficient to produce the causal grain that Seth identifies as the ground of consciousness has, by his argument, a clear answer: almost certainly not. We examined this position in depth in our earlier analysis of Seth’s biological naturalism and its implications for artificial consciousness.

Todd Feinberg and Jon Mallatt offer a related but distinct framework. In their neuroevolutionary theory of consciousness, published across several papers and books, Feinberg and Mallatt argue that consciousness first emerged in vertebrates, and possibly in some invertebrates, through the evolution of affective and interoceptive circuits in subcortical structures. Their view is that Drosophila may have some rudimentary form of experience because it has analogous affective processing structures, not because of its neuron count. Whether a computational emulation of those structures produces the same experience is, on this view, precisely the question that abstract simulation cannot answer. The Consciousness AI project draws on Feinberg and Mallatt’s framework as a core design constraint, treating biological plausibility at the circuit level as a necessary condition for conscious architecture, not merely a technical preference.

What the Behaviors Prove and What They Do Not

The emergent behaviors (walking without being programmed to walk, grooming in response to appropriate stimuli, approaching food when relevant circuits fire) are scientifically significant for reasons that do not require consciousness to be present.

They validate the FlyWire connectome as a functional map, not merely a structural one. They demonstrate that the wiring diagram is sufficient to produce behavioral repertoires, which is a major empirical result. They provide a platform for testing hypotheses about neural circuits in conditions that would be difficult or fatal to manipulate in live animals. The potential applications for modeling neurological disease states, including epilepsy and motor disorders, are substantial.

The behaviors also support a position that philosophers like Daniel Dennett have long defended: that behavioral sophistication can be fully explained by physical mechanism without invoking any additional ingredient called experience. Whether or not Dennett is right about consciousness in general, the Eon demonstration is consistent with his view. A system can produce complex, context-appropriate behavior through purely mechanistic processes. The fly simulation does not settle the broader debate. It does show that behavior alone is not sufficient evidence for consciousness.

What the behaviors do not demonstrate is that the system has any form of inner life. A thermostat responds appropriately to temperature without anyone attributing temperature experience to it. The gap between a 125,000 neuron simulation and a thermostat is enormous. The gap between that same simulation and a system for which there is something it is like to be it may not be closed by biological fidelity alone.

The Scaling Trajectory and What It Means

Wissner-Gross has stated that Eon targets a mouse brain next, at roughly 70 million neurons, approximately 560 times larger than the fly model. Human-scale emulation is the long-term ambition. The implicit assumption is that consciousness scales with neural complexity and that some threshold produces genuine inner life.

This assumption is not established by any major consciousness theory. IIT holds that consciousness scales with phi, which depends on causal integration rather than neuron count. A very large simulation running on digital hardware could have more neurons than a fly while having lower phi, if the simulation architecture is causally less integrated than biological tissue. GWT holds that consciousness depends on the presence of a global broadcast mechanism, which may or may not be preserved across scales. Seth’s biological naturalism holds that scale is irrelevant because the substrate is decisive regardless of size.

The scaling plan is scientifically coherent as a neuroscience program. As a consciousness program, it requires justification that the field has not yet provided.

What Remains Open

The tools for empirically measuring consciousness that researchers have developed, including brainstem-based behavioral markers and ultrasound-based perturbational methods, were designed for biological systems. Adapting them to a MuJoCo simulation would require methodological work that has not been done. If whole brain emulation is to contribute to consciousness science rather than solely to behavioral modeling and disease research, that empirical extension is what is needed.

The most rigorous frameworks for evaluating artificial consciousness in use today, including aggregated probabilistic assessments like the Digital Consciousness Model, were designed with language models and neural networks in mind, not connectome-derived emulations. The Eon demonstration is the first result that occupies a genuinely new category: not a system trained to produce behavior, but a system whose behavior arises from biological circuit maps. The existing evaluation frameworks do not directly handle this case.

The honest position, given what the field currently knows, is that Eon’s virtual fly almost certainly does not have phenomenal experience. The substrate differs from biological tissue, the simulation abstracts away the biophysics most likely to matter for consciousness, and the theoretical basis for expecting experience from digital circuit replication is too weak to override these concerns. But the demonstration forces the consciousness research community to be more specific about what exactly is missing. That pressure toward precision is valuable, even if the answer to the main question remains negative.

Eon Systems has built the most behaviorally faithful whole brain emulation ever demonstrated. The question of whether anything is happening inside it, in the only sense that matters for consciousness, is not answered by the behaviors it produces. It was always going to be harder than that.
