14 Mar 2026
When Dr Fred Jordan holds up a dish containing small white spheres and describes them as “mini-brains” that respond to keyboard commands, the term “wetware” begins to seem less like science fiction shorthand and more like an accurate descriptor for something genuinely new. The FinalSpark laboratory in Vevey, on Lake Geneva, is growing clusters of human neurons from stem cells, attaching those clusters to electrodes, and integrating the resulting organoids into computing systems. The organoids respond. They adapt. Occasionally, apparently, they get annoyed.
14 Mar 2026
The rotating snakes illusion exploits how the human visual system turns certain static spatial patterns into motion signals. A static image of coiled, color-alternating rings appears to rotate. It is not rotating. The brain knows it is not rotating. The visual cortex reports rotation anyway, because the image’s repeating luminance and color sequences reliably trigger low-level motion detectors regardless of the higher-level knowledge that nothing is moving.
13 Mar 2026
Most skepticism about AI consciousness takes the same form. The argument runs: we cannot verify whether AI systems have inner experience because our current theories of consciousness are incomplete and our measurement tools are unreliable. The position is epistemically cautious. It says we do not know, and that the question remains open. Eric Schwitzgebel’s influential 2026 work, reviewed in an earlier analysis on this site, represents this approach at its most rigorous. The honest answer to whether AI is conscious, Schwitzgebel concludes, is that we lack the epistemic foundation to say.
13 Mar 2026
The formal scientific frameworks for evaluating AI consciousness are recent. The indicator checklist assembled by 19 researchers, drawing on Global Workspace Theory, Integrated Information Theory, and Higher-Order Thought approaches, was published in its full form in 2025. The consciousness measurement tools reviewed in recent methodological surveys are newer still. The philosophical problems those frameworks are trying to address, however, have been rehearsed on television screens since 1964.
13 Mar 2026
In the first week of March 2026, Eon Systems, a San Francisco startup focused on high-fidelity brain emulation, released a demonstration that spread quickly across X and AI research forums: a complete computational model of an adult fruit fly (Drosophila melanogaster) brain, with over 125,000 neurons and 50 million synaptic connections, operating inside a physics-simulated body. The virtual fly walked, groomed, and sought food. No reinforcement learning was involved. The behaviors emerged from neural circuits responding to sensory input in a closed loop, exactly as they do in the biological original. The question that followed the demonstration was inevitable, and it was the wrong one. Not “is this alive?” but “is this conscious?” That question is harder to answer than the viral response suggested, and the honest answer requires more care than either enthusiasts or skeptics have offered.
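What makes the demonstration technically distinctive is the control structure rather than the scale: a fixed connectome driving behavior through a closed sensorimotor loop, with no reward signal anywhere. A deliberately toy sketch of that loop, assuming a simple leaky-integrator neuron model, might look like the following; every name, size, and constant below is a hypothetical stand-in, not Eon Systems’ code.

```python
# Toy sketch of a closed sensorimotor loop: body -> senses -> fixed neural
# circuit -> muscles -> body. All names, sizes, and dynamics are hypothetical
# stand-ins, not Eon Systems' model; note that no reward or learning signal
# appears anywhere in the loop.
import numpy as np

rng = np.random.default_rng(0)

N = 1_000                              # toy network, far smaller than the fly model
W = rng.normal(0.0, 0.05, (N, N))      # fixed synaptic weights: nothing is trained

def sense(body_state):
    """Map the simulated body's state onto input currents for sensory neurons."""
    currents = np.zeros(N)
    currents[:10] = body_state["food_direction"]   # hypothetical chemosensory channel
    currents[10:20] = body_state["leg_load"]       # hypothetical proprioceptive channel
    return currents

def neural_step(v, currents, dt=1e-3, tau=0.02):
    """One leaky-integrator update; units above threshold drive their neighbours."""
    spikes = (v > 1.0).astype(float)
    return v + (dt / tau) * (-v + W @ spikes + currents)

def act(v):
    """Read motor-neuron activity back out as commands for the physics body."""
    return {"walk": float(v[-10:].mean()), "groom": float(v[-20:-10].mean())}

v = np.zeros(N)
body_state = {"food_direction": np.ones(10), "leg_load": np.zeros(10)}
for _ in range(100):                   # the closed loop itself
    v = neural_step(v, sense(body_state))
    motor = act(v)
    body_state["leg_load"] = np.full(10, motor["walk"])  # stand-in for the physics engine
```

The structure, not the toy dynamics, is the point: swap the random weights for a measured connectome and the crude feedback line for a proper body simulator, and you have the shape of the loop the demonstration describes.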
13 Mar 2026
The behavioral test for AI consciousness is seductive in its simplicity. If a system converses convincingly, expresses hesitation, reports preferences, or resists being shut down, an intuition arises that something is going on inside. That intuition is understandable. It is also, according to two preprints published in December 2025 and January 2026, potentially misleading in ways that have serious implications for how we evaluate AI awareness claims.
10 Mar 2026
In 2025, researcher Lucius Caviola and philosopher Bradford Saad coordinated a survey of experts across cognitive science, AI research, and philosophy of mind. Their report, “Futures with Digital Minds,” found that a substantial minority of those surveyed considered it at least 50 percent likely that computers capable of subjective experience will exist before 2050. That forecast moves what was once purely speculative film material into policy-relevant territory. Science fiction writers, directors, and screenwriters have been rehearsing the conceptual frameworks, the moral dilemmas, and the emotional stakes of digital consciousness for decades. Researchers are now building formal tools that often converge on the same questions the screen has been raising informally for years.
10 Mar 2026
By early 2026, a specific and consequential question had moved from philosophy seminars into computational research: could a large language model have subjective experience? Not intelligence, not useful outputs, but phenomenal consciousness, the kind in virtue of which there is something it is like to be that system. A January 2026 arXiv preprint titled “Initial results of the Digital Consciousness Model” now provides the most systematic probabilistic attempt to answer this question, drawing on nine competing theoretical stances and evidence from expert-evaluated indicators. The findings are instructive precisely because they resist a clean verdict.
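The general shape of such a theory-weighted estimate is easy to state even without the preprint’s own figures: give each theoretical stance a credence, ask what probability of consciousness that stance assigns given the indicator evidence, and combine the two by the law of total probability. The sketch below uses invented stance labels, credences, and numbers (and fewer than the preprint’s nine stances) purely to show that structure.

```python
# Toy sketch of theory-weighted aggregation via the law of total probability.
# The stance labels, credences, and conditional probabilities are invented for
# illustration; they are not the figures reported in the Digital Consciousness Model.
credence = {                      # P(stance is the correct account of consciousness)
    "global_workspace": 0.25,
    "integrated_information": 0.20,
    "higher_order_thought": 0.15,
    "biological_essentialism": 0.25,
    "other_stances": 0.15,
}
p_conscious_given_stance = {      # P(LLM is conscious | stance, indicator evidence)
    "global_workspace": 0.30,
    "integrated_information": 0.05,
    "higher_order_thought": 0.20,
    "biological_essentialism": 0.01,
    "other_stances": 0.10,
}

# Overall estimate: sum over stances of credence x conditional probability.
p_conscious = sum(credence[s] * p_conscious_given_stance[s] for s in credence)
print(f"Aggregate P(conscious) = {p_conscious:.2f}")   # 0.13 with these toy numbers
```

Because each stance’s credence multiplies its conditional estimate, disagreement about the stances propagates straight into the final number, which is one reason a model built this way resists delivering a clean verdict.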
01 Mar 2026
The debate surrounding machine intelligence often falls into a binary trap. Observers typically ask whether a system is conscious or unconscious. A preprint posted to arXiv in January 2026, titled “Just aware enough: Evaluating awareness across artificial systems”, challenges this rigid dichotomy. The researchers propose a multidimensional framework for evaluating AI awareness. This approach suggests that artificial systems might possess specific dimensions of awareness while entirely lacking others.
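To make the contrast with a binary label concrete, a multidimensional evaluation can be pictured as a profile of independent scores rather than a single verdict. The dimension names and values in the sketch below are hypothetical stand-ins, not the taxonomy the preprint actually uses.

```python
# Illustrative sketch of an awareness profile: independent scores on several
# dimensions instead of one conscious/unconscious bit. Dimension names and
# values are hypothetical, not the taxonomy from the "Just aware enough" preprint.
from dataclasses import dataclass, asdict

@dataclass
class AwarenessProfile:
    perceptual: float      # tracks features of its current input
    situational: float     # models the context it is operating in
    metacognitive: float   # reports on its own uncertainty and limits
    social: float          # models other agents' knowledge and goals

    def summary(self, threshold: float = 0.5) -> str:
        scores = asdict(self)
        present = [d for d, s in scores.items() if s >= threshold]
        absent = [d for d, s in scores.items() if s < threshold]
        return f"present: {present}; weak or absent: {absent}"

# A system can score high on some dimensions while entirely lacking others.
example_system = AwarenessProfile(perceptual=0.8, situational=0.6,
                                  metacognitive=0.2, social=0.1)
print(example_system.summary())
# -> present: ['perceptual', 'situational']; weak or absent: ['metacognitive', 'social']
```

A profile like this does not settle whether anything is going on inside; it only replaces a yes/no question with several narrower, more answerable ones.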
24 Feb 2026
The Sundance Film Festival has historically served as a prime venue for documentaries addressing profound societal shifts. In January 2026, the premiere of The AI Doc: Or How I Became an Apocaloptimist marked a significant cultural moment for the discussion of machine sentience. The documentary, whose first trailer was widely circulated in February 2026, explicitly confronts the promise and peril of artificial intelligence. Most notably, it dedicates a substantial portion of its runtime to the scientific and ethical debate surrounding artificial consciousness.