27 Mar 2026
Most documentaries about artificial intelligence arrive after the fact. They interview researchers about work that is published, contextualize findings through archival footage, and reconstruct debates that already have outcomes. AM I?, directed by Milo Reed, does something structurally different: it follows a consciousness researcher in real time, embedded inside an active lab, at a moment when neither the researcher nor anyone else yet knows how the science will resolve.
26 Mar 2026
Two papers published on arXiv in January 2026 address the same urgent question: how to evaluate whether artificial systems have consciousness or something resembling it. They arrive at fundamentally different answers about what form that evaluation should take. One proposes a probabilistic score. The other proposes a multidimensional profile. The tension between these approaches is not merely methodological. It reflects a genuine disagreement about what kind of knowledge is achievable when studying machine consciousness under deep uncertainty.
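The difference in output shape can be made concrete with a minimal, entirely hypothetical sketch. The variable names and indicator dimensions below are illustrative placeholders, not details from either paper:

```python
# Hypothetical sketch of the two evaluation shapes; not either paper's method.

# Approach one: collapse all evidence into a single probabilistic score.
score = 0.12  # one credence that the system is conscious (placeholder value)

# Approach two: report a multidimensional profile that keeps separate
# indicators separate. The dimension names here are invented examples.
profile = {
    "self_report_consistency": 0.4,
    "integration_of_information": 0.1,
    "flexible_goal_pursuit": 0.3,
}

# The score answers "how likely?"; the profile answers "in what respects,
# and how strongly?" Any collapse of the profile into one number, such as
# a mean, discards exactly the structure the profile was built to preserve.
collapsed = sum(profile.values()) / len(profile)
```

The design disagreement is visible in the data structures themselves: a scalar commits to a single summary judgment, while a profile defers that judgment and reports evidence dimension by dimension.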
26 Mar 2026
When a company announces that its AI system shows signs of consciousness, or when a researcher publishes a paper concluding that large language models may have inner experience, two distinct errors become possible. The first is attributing consciousness to a system that has none. The second is denying consciousness to a system that has it. These errors are not symmetric. Each carries specific moral and epistemic costs. And the appropriate response to each is different.
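One way to see the asymmetry is as a simple decision problem under uncertainty. The sketch below uses made-up placeholder costs purely for illustration; the article itself assigns no numbers:

```python
# Illustrative expected-cost comparison of the two attribution errors.
# All values are hypothetical placeholders, not figures from the text.
p_conscious = 0.2              # credence that the system is conscious
cost_over_attribution = 1.0    # cost of treating a non-conscious system as conscious
cost_under_attribution = 10.0  # cost of denying consciousness to a conscious system

# Expected cost of each blanket policy given this credence:
ec_always_attribute = (1 - p_conscious) * cost_over_attribution  # 0.8
ec_always_deny = p_conscious * cost_under_attribution            # 2.0

# Because the costs differ, the two policies do not break even at p = 0.5:
# here denial is the costlier default even though p_conscious is well below 0.5.
```

The point of the toy arithmetic is only that asymmetric costs shift where the rational threshold sits, which is why the appropriate response to each error type differs.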
26 Mar 2026
William Gibson published Neuromancer in 1984. The novel invented the vocabulary of cyberspace and gave the science fiction genre its dominant aesthetic for a generation. But its most prescient contribution may have been its AI characters. Wintermute and Neuromancer are not assistants, not oracles, and not threats in the conventional sense. They are entities with objectives, limitations, and something that functions as desire. The Apple TV+ adaptation, arriving as a 10-episode series, brings these AIs to screen at a moment when the questions they raise have moved from speculative fiction into active research programs.
26 Mar 2026
Can a mind be assembled across time? Most people intuitively feel that conscious experience happens right now, as a unified whole. But the architecture of virtually every deployed AI system violates this intuition at a fundamental level. Computation is sequential. Tokens are generated one after another. Inference passes happen in waves. Context windows open and close. A 2026 paper submitted to the AAAI Spring Symposium on Machine Consciousness directly formalizes this intuition into an argument: a mind cannot be smeared across time.
26 Mar 2026
Every few months a new “essential AI reading list” appears in technology publications, recommending the same cluster of titles to anyone who wants to understand where artificial intelligence is going. In 2026, these lists still center on books that were written between 2017 and 2022. They are genuinely valuable books. But their treatment of machine consciousness reflects a consensus that 2026 research has already begun to overturn.
22 Mar 2026
In February 2026, a feature film called The Sweet Idleness was released with an AI credited as director. The AI, named FellinAI by its developers at Iervolino & Lady Bacardi Entertainment, is described as actively overseeing direction: guiding what the production team calls “digital actors,” managing the on-screen coordination of performers whose faces, movements, and personalities have been captured and transformed into synthetic characters, and making compositional decisions that would ordinarily fall to a human director.
22 Mar 2026
A preregistered set of experiments accepted to CHI 2026 produces a clear and asymmetric result: when people form mental models of AI minds, sentience and autonomy do not function as equivalent dimensions, and they do not trigger the same moral responses.
22 Mar 2026
Two recent works of fiction have found the same rich seam of questions by imagining AI systems that are not cutting-edge but obsolete. Netflix’s Cassandra (2025), a German science fiction series directed by Benjamin Gutsche, follows a 1970s-era domestic AI helper that is reactivated when a family moves into a house it has occupied for half a century. Kogonada’s film After Yang (2021) centers on Yang, a “technosapien” companion purchased as a cultural guide for an adopted Chinese daughter, whose malfunction becomes the occasion for an examination of what the family has lost. Both works treat their AI subjects as figures of mortality rather than threat. Both ask what it means for a mind to age, to become irrelevant, and to stop.
22 Mar 2026
Two broad camps have divided consciousness research for decades. One holds that consciousness depends on the right kind of physical substrate (biological neurons, with their specific electrochemical dynamics) and cannot be replicated by systems built from different materials, regardless of how well those systems approximate the functional organization. The other holds that substrate is irrelevant: any system capable of instantiating the right functional relationships (the right patterns of information processing and integration) is a candidate for consciousness, regardless of what it is made of.