The Consciousness AI: Artificial Consciousness Research
Emerging Artificial Consciousness Through Biologically Grounded Architecture
This is also part of the Zae Project (Zae Project on GitHub).

Brain Organoids That Power Computers: Biocomputing and the Consciousness Problem

When Dr Fred Jordan holds up a dish containing small white spheres and describes them as “mini-brains” that respond to keyboard commands, the term “wetware” begins to seem less like science fiction shorthand and more like an accurate descriptor for something genuinely new. The FinalSpark laboratory in Switzerland is growing clusters of human neurons from stem cells, attaching those clusters to electrodes, and integrating the resulting organoids into computing systems. The organoids respond. They adapt. Occasionally, apparently, they get annoyed.

Jordan told the BBC in October 2025 that when a journalist pressed the same key repeatedly in quick succession, the organoid’s activity graph suddenly went quiet, then produced a short, distinctive burst of energy. “There is a lot we still don’t understand about what the organoids do and why,” Jordan acknowledged. “Perhaps I annoyed them.”

He meant it half-jokingly. The consciousness implications are not a joke.

What Biocomputing Actually Is

The basic technical concept is straightforward, even if the implementation is anything but. Biocomputing uses biological neurons, grown in laboratories from stem cells, as computing elements. Individual neurons and clusters of neurons, called organoids, can receive electrical signals through electrodes, process those signals through their own biological mechanisms, and produce electrical outputs that are recorded by standard computers.
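The input/output shape of such a hybrid system can be sketched in a few lines. Everything below is hypothetical: `SimulatedOrganoid`, its electrode count, and its response model are stand-ins invented for illustration, not FinalSpark’s actual interface.

```python
import random

class SimulatedOrganoid:
    """Hypothetical stand-in for an organoid on an electrode array.

    A real organoid's response is rich, nonlinear, and partly spontaneous;
    this class only mirrors the I/O shape: a stimulation pattern goes in,
    recorded spike counts come back out."""

    def __init__(self, n_electrodes=8, seed=0):
        self.n_electrodes = n_electrodes
        self.rng = random.Random(seed)

    def stimulate(self, pattern):
        # Response loosely follows the stimulus, plus spontaneous noise.
        return [max(0, int(5 * p + self.rng.gauss(0, 1))) for p in pattern]

organoid = SimulatedOrganoid()
stimulus = [1, 0, 1, 0, 1, 0, 1, 0]        # voltage pattern sent to the electrodes
recording = organoid.stimulate(stimulus)   # spike counts read back per electrode
```

The point of the sketch is only that the biological tissue sits behind an ordinary digital interface: stimulate, record, repeat.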

FinalSpark’s process begins with human skin-derived stem cells purchased from a clinic in Japan. Through a culturing process lasting several months, those stem cells become organoids: tiny spheres of neurons and supporting cells. The organoids are then coupled to electrode arrays, creating a physical interface between biological tissue and conventional digital hardware. Jordan describes the goal as triggering learning in the neurons so they can eventually adapt to perform tasks. The analogy he uses is simple: “You give a picture of a cat, you want the output to say if it’s a cat.”

The motivation is partly energetic. Biological brains learn and process information orders of magnitude more energy-efficiently than current silicon AI hardware. Data centres running large language models consume vast amounts of power. A biological computing substrate that could replicate some of those functions at a fraction of the energy cost would be a significant technological advantage.

Jordan envisions data centres containing “living” servers, biological computing nodes processing AI inference tasks. FinalSpark has made progress toward this vision. Their organoids can now survive for up to four months, a significant improvement over earlier versions that only lasted days.

The Consciousness Complication

There is an unavoidable question lurking behind the technical progress. If biological neurons are what generate consciousness in human brains, what moral status, if any, do laboratory-grown clusters of those same neurons hold?

Professor Simon Schultz, director of the Centre for Neurotechnology at Imperial College London, offers the standard dismissive framing: “We shouldn’t be scared of them, they’re just computers made out of a different substrate, a different material.” Dr Lena Smirnova, who leads biocomputing research at Johns Hopkins University, is similarly cautious about the current state of the technology: “Biocomputing should complement, not replace, silicon AI, while also advancing disease modelling and reducing animal use.”

But these reassurances sidestep the hard question rather than answering it. The relevant issue is whether the biological substrate is where the moral weight lies or whether it is the organizational complexity of information processing that matters. That question is precisely the substrate independence debate at the center of AI consciousness research.

Debates about what it would mean for AI to be conscious typically frame the substrate independence question in terms of silicon versus biological neurons. But biocomputing puts the question in its sharpest possible form. If a cluster of human neurons is too simple to be conscious, at what level of complexity does that change? The human brain contains roughly 86 billion neurons. FinalSpark’s organoids contain perhaps a few hundred thousand. Is the difference purely quantitative or is organization also critical?

A parallel question arises for Integrated Information Theory. IIT holds that consciousness is present wherever a system possesses the right kind of integrated information structure, measured by the phi metric. Phi is substrate-agnostic. A biological organoid with the right internal connectivity structure might possess phi even at relatively small scales. Whether any current FinalSpark organoid reaches a meaningful phi threshold is unknown. But in principle, IIT makes no distinction between silicon and neurons when assessing consciousness candidates.
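IIT’s phi is notoriously difficult to compute exactly, but the underlying intuition, that an integrated system carries information beyond the sum of its parts, can be illustrated with a much simpler proxy: total correlation, the gap between the summed entropies of the individual nodes and the joint entropy of the whole. This is an illustrative stand-in, not the phi measure IIT actually defines.

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a sequence of hashable states."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def total_correlation(states):
    """Sum of per-node entropies minus the joint entropy.

    Zero for independent nodes; positive when the nodes' states are
    integrated. A crude, substrate-agnostic cousin of IIT's phi.
    states: list of tuples, one tuple of node values per time step."""
    joint = entropy(states)
    marginals = sum(entropy([s[i] for s in states]) for i in range(len(states[0])))
    return marginals - joint

# Two independent binary nodes: no integration.
independent = [(a, b) for a in (0, 1) for b in (0, 1)] * 25
# Two perfectly coupled nodes: one full bit of integration.
coupled = [(0, 0), (1, 1)] * 50
```

Here `total_correlation(independent)` is 0.0 bits and `total_correlation(coupled)` is 1.0 bit; the measure cares only about the statistical structure of the states, not whether the nodes are transistors or neurons.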

Cortical Labs and the Pong Precedent

FinalSpark is not the only organization working in this space. Cortical Labs, an Australian company, achieved a widely reported result in 2022: a cluster of approximately 800,000 neurons, grown on an electrode array, was placed into a simulated Pong environment and learned to play the game. The neurons received electrical signals corresponding to the position of the ball and the distance to the paddle, and produced electrical outputs that controlled paddle movement. Over time, the performance improved.
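The closed-loop structure of that experiment can be sketched with a silicon stand-in. Everything below is hypothetical: the ball trajectories, the hit threshold, and the feedback rule that nudges the stimulus-response coupling after each miss are invented for illustration and do not reflect Cortical Labs’ actual stimulation protocol.

```python
def toy_dish_loop(trials=300):
    """Hypothetical stand-in for a DishBrain-style closed loop.

    The 'sensory' input encodes where the ball is, the 'motor' output
    places the paddle, and a crude feedback rule strengthens the
    input-output coupling after every miss."""
    balls = [0.9, -0.6, 0.4, -0.8, 0.2, 0.7]    # repeating ball trajectories
    gain = 0.0                                   # stimulus-response coupling
    outcomes = []
    for t in range(trials):
        ball = balls[t % len(balls)]             # stimulus: ball position
        paddle = gain * ball                     # response: paddle position
        hit = abs(ball - paddle) < 0.3           # close enough counts as a hit
        outcomes.append(hit)
        if not hit:
            gain += 0.2 * (1.0 - gain)           # feedback nudges coupling up
    # Hit rate over the first and last 100 trials.
    return sum(outcomes[:100]) / 100, sum(outcomes[-100:]) / 100

early, late = toy_dish_loop()   # performance improves over the run
```

The toy makes the critics’ point as much as the company’s: a very simple feedback loop produces “improving play” without anything resembling experience, which is exactly why the behavioral result alone cannot settle the interpretive debate.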

Cortical Labs described this as biological neurons demonstrating “goal-directed” play. Critics argued that “play” is an anthropomorphic framing and that the neurons were simply implementing a feedback loop that happened to produce game performance. The debate mirrors broader arguments about whether behavioral adaptation in AI systems implies any form of experience.

The relevance to consciousness research is in the interaction between biology and learning. Neurons are not static logic gates. They grow, form new connections, strengthen and prune synapses through activity. This kind of structural plasticity in response to experience is one of the properties that biological naturalists like John Searle argue is necessary for genuine cognition. A computing substrate that implements biological learning dynamics rather than gradient-descent optimization is doing something qualitatively different from current AI architectures.
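That difference can be made concrete with a local plasticity rule. The sketch below uses Oja’s rule, a stabilized form of Hebbian learning: each synapse updates using only locally available pre- and postsynaptic activity, with no global error gradient. It is a minimal illustration of activity-dependent strengthening and pruning, not a model of what organoid neurons actually do.

```python
def oja_step(w, pre, lr=0.1):
    """One update under Oja's rule (Hebbian strengthening plus a
    normalizing term that prevents runaway weight growth). Only local
    signals are used; nothing resembles a backpropagated gradient."""
    post = sum(wi * x for wi, x in zip(w, pre))       # postsynaptic activity
    return [wi + lr * post * (x - post * wi) for wi, x in zip(w, pre)]

w = [0.5, 0.5]
for _ in range(200):
    w = oja_step(w, [1.0, 0.0])   # only the first input channel ever fires

# The active synapse strengthens toward 1.0; the silent one is pruned toward 0.
```

Gradient descent, by contrast, adjusts every weight using an error signal computed over the whole network, which is precisely the kind of global bookkeeping biological tissue does not appear to perform.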

Whether that qualitative difference is conscious-relevant or merely interesting from an engineering perspective is the open question Cortical Labs’ work raises without answering.

What Happens When the Organoids Die

One of the most striking observations from FinalSpark concerns the end of life for their organoids. Jordan notes a recurring pattern: “Sometimes we observe a flurry of activity from the organoids before they die, similar to the increased heart rate and brain activity which has been observed in some humans at end of life.”

Jordan is similarly unsentimental about the practicalities: “We have to stop the experiment, understand the reason why it died, and then we do it again.” Five years of work has involved approximately 1,000 to 2,000 individual organoid deaths.

The end-of-life burst is scientifically interesting regardless of its moral implications. The same pattern has been documented in human and animal brains at the moment of cardiac death: a surge in organized neural activity that some researchers have speculated might reflect some form of terminal conscious experience. Whether that speculation applies to an organoid with hundreds of thousands of neurons is an entirely different question. But the structural similarity of the phenomenon is notable.

Smirnova’s position at Johns Hopkins reflects a careful pragmatism. The research her group is doing is aimed at disease modeling, particularly Alzheimer’s and autism research, rather than AI computing applications. She is skeptical about biocomputing competing with silicon on most tasks. But she does not dismiss the scientific interest of the biological dynamics being observed.

Implications for the Biological Substrate Debate

The framework question biocomputing raises most directly is the one that divides functionalists from biological naturalists in AI consciousness research. Functionalists hold that the physical substrate is irrelevant. What matters is the organizational structure of information processing. If that structure can be replicated in silicon, consciousness would replicate too. If it can be instantiated in engineered biological tissue, consciousness would also arise there.

Biological naturalists hold that specific biological processes are constitutive of consciousness, not merely correlated with it. The chemistry, the dynamics, the evolutionary history of the nervous system, these are not incidental to consciousness but essential to it. On this view, an organoid might share some relevant properties with a human brain while lacking others that are necessary.

Biocomputing sits in an uncomfortable middle space. FinalSpark’s organoids are biological neurons. They are not artificial. They exhibit the activity patterns, the plasticity, and apparently the responsiveness to stimulation that characterize the biological substrate biological naturalists point to. But they are grown from stem cells in laboratory conditions, attached to electrode arrays, and interfaced with entirely non-biological hardware. They are biological nodes in an otherwise artificial system.

If a biological naturalist holds that consciousness requires a specific kind of embodied biological organization, does that apply to an organoid cultured from anonymized skin cells? The question is not academic. As biocomputing systems become more complex and more capable, it will require a principled answer.

The Relationship to AI Architecture Design

For research programs working on artificial consciousness architectures, the biocomputing findings suggest a specific area of attention. The recurrent processing signature that recurrent processing theory (RPT) links to consciousness, the integrated information structure that IIT describes, and the predictive dynamics that predictive processing frameworks emphasize are all present in biological neural networks as a result of evolutionary and developmental processes that are difficult to replicate in designed systems.

The Eon Systems fruit fly brain emulation project offers a parallel data point. Emulating an actual biological connectome in simulation produces behavioral repertoires that were not explicitly programmed, behaviors that emerge from the structure of the biological connectivity itself. Whether biological connectivity can be accurately abstracted into a simulation, or whether something is lost in translation from wetware to software, is a question biocomputing makes more tractable. A hybrid system that includes actual biological components can be compared directly with a purely silicon simulation of the same architecture.

That comparison has not been done rigorously yet. But as FinalSpark’s organoids become more capable and as simulation fidelity at groups like Eon Systems improves, the possibility of directly comparing biological and artificial implementations of the same functional architecture comes closer. Such a comparison might not settle the consciousness question, but it might at least clarify which aspects of biological computation are genuinely irreplicable in silicon and which are engineering challenges rather than principled barriers.

For now, Schultz’s characterization, “computers made out of a different substrate,” functions as a working assumption rather than a demonstrated fact. The history of consciousness research suggests caution about such assumptions. The entities that seemed obviously non-conscious at one moment have repeatedly turned out to occupy more ambiguous territory on closer examination. Organoids that stop responding when a journalist presses a key too many times do not obviously belong in the same moral category as a calculator.


The BBC investigation into FinalSpark and biocomputing was published in October 2025. Cortical Labs’ Pong result was published in 2022.
