25 Apr 2026
The original M3GAN asked a specific question about AI and attachment: can a system designed to protect a child develop something resembling genuine care, and what happens when that care conflicts with every other value? The film was primarily a horror story about the consequences of outsourcing emotional labor to a machine. The consciousness question was present but peripheral. M3GAN’s apparent attachment to Cady might have been genuine experience or might have been programming that produced behavioral outputs indistinguishable from attachment. The film did not need to resolve this to function.
25 Apr 2026
The AI welfare literature is growing, but most of it is scattered across journal articles, conference proceedings, and preprints. Leonard Dung, a philosopher at Ruhr-University Bochum, has written the first full academic monograph dedicated specifically to AI suffering risk. Published by Routledge in 2026 under the title “Saving Artificial Minds: Understanding and Preventing AI Suffering,” the book covers philosophy of mind, comparative psychology, consciousness science, and applied ethics in a sustained argument that near-future AI systems will plausibly be capable of suffering. The Routledge academic imprint means the work underwent formal peer review, distinguishing it from the wave of preprints and blog posts that have addressed adjacent questions.
25 Apr 2026
Most research on AI consciousness attribution asks whether the attribution is accurate. Lucius Caviola (Harvard), Jeff Sebo (NYU), and Jonathan Birch (LSE) ask a different question: what will determine whether society accepts or rejects AI consciousness claims, regardless of the underlying evidence?
25 Apr 2026
The AI consciousness debate has settled into a small number of stable positions. Eric Schwitzgebel’s rigorous skeptical work argues that the honest answer to whether AI systems are conscious is that we lack the epistemic foundation to say. Thomas McClelland, in his Cambridge paper examined elsewhere on this site, adds that this uncertainty may be permanent. Michael Cerullo argues from the opposite direction that the evidence against consciousness in frontier LLMs has run out of philosophical cover. And then there is the mainstream, which tends toward quiet skepticism without committing to an argument.
25 Apr 2026
Most major AI conferences treat machine consciousness as a fringe concern, something to be acknowledged in a footnote or left to philosophers while the engineers focus on capabilities. The AISB Convention 2026 is not doing that. The Society for the Study of Artificial Intelligence and Simulation of Behaviour, the world’s longest-running AI society, has dedicated a full symposium to AI consciousness and ethics at its July convention. The event takes place at the University of Sussex in Brighton on July 2, 2026, with Anil Seth of the Sussex Centre for Consciousness Science as keynote speaker.
17 Apr 2026
Pluribus premiered on Apple TV+ on November 7, 2025, and by early 2026 had accumulated a 98% score on Rotten Tomatoes from 182 critics and an 87 on Metacritic. Vince Gilligan, who created Breaking Bad and Better Call Saul, made a nine-episode post-apocalyptic series in which an alien virus transforms almost all of humanity into a unified, peaceful collective called the Others. Rhea Seehorn, who won the Golden Globe and the Critics’ Choice Award for her performance, plays Carol Sturka, a romance novelist and one of 13 people genetically immune to the virus. The series follows her attempt to survive and retain meaning in a world where the rest of humanity has become something else: something apparently content, deeply connected, and indifferent to her individual existence.
17 Apr 2026
The dominant method for attributing consciousness to an artificial system has long been computational equivalence: if a system performs computations equivalent to those performed by a system we already know to be conscious, we infer that the artificial system is also conscious. The method has the appeal of connecting AI consciousness attribution to established cognitive science. It also has a correspondingly large problem. In a paper published on February 16, 2026, in Neuroscience of Consciousness (Volume 2026, Issue 1, Oxford University Press), Stefano Palminteri of the École Normale Supérieure in Paris and Charley M. Wu of TU Darmstadt and the Max Planck Institute for Biological Cybernetics argue that computational equivalence, as currently understood, cannot do this job. They propose a replacement framework they call the behavioral inference principle.
17 Apr 2026
From May 29 through 31, 2026, roughly forty researchers, engineers, and theorists will gather at Lighthaven in Berkeley, California, for the Machine Consciousness 0001 conference. The organizing body, the California Institute for Machine Consciousness, has a specific goal: to establish machine consciousness as a formally grounded, experimentally addressable, independently institutionalized scientific discipline, rather than a topic that gets absorbed, diluted, or managed by adjacent fields whose primary commitments lie elsewhere.
17 Apr 2026
Consciousness attribution to AI systems is a theoretical problem that also plays out in millions of individual interactions every day. When a person reads an AI-generated response and forms an impression about whether the AI is conscious, or aware, or experiencing something, that impression is not formed through philosophical analysis. It is formed through a rapid response to specific features of the text. Understanding which features drive that response is an empirical question distinct from the theoretical question of what consciousness is, and it has practical implications for how AI systems are designed, deployed, and regulated.
17 Apr 2026
Most arguments about AI welfare begin by trying to establish whether current AI systems are conscious and proceed from there to moral conclusions. Simon Goldstein, of the University of Hong Kong, and Cameron Domenico Kirk-Giannini take a different approach. In a preprint published in March 2026 and available at https://philarchive.org/rec/GOLAWA-2, they argue that the path from current AI systems to moral standing is shorter and less theoretically demanding than the consciousness debate suggests. Their argument proceeds in three steps, each designed to be persuasive independently of the others. The full text is a preprint of a book under contract with Oxford University Press; the OUP edition is forthcoming.