AISB 2026 at Sussex: The Symposium That Will Debate Biological Naturalism, Moral Standing, and Chain-of-Thought Consciousness
Most major AI conferences treat machine consciousness as a fringe concern, something to be acknowledged in a footnote or left to philosophers while the engineers focus on capabilities. The AISB Convention 2026 is not doing that. The Society for the Study of Artificial Intelligence and Simulation of Behaviour, the world’s longest-running AI society, has dedicated a full symposium to AI consciousness and ethics at its July convention. The event takes place at the University of Sussex in Brighton on July 2, 2026, with Anil Seth of the Sussex Centre for Consciousness Science as keynote speaker.
The symposium is significant for reasons that go beyond its institutional framing. The four topics on the programme represent the exact set of unresolved questions that the current research moment is forcing into focus: who or what counts as a moral patient in AI systems, whether consciousness requires biological substrate or only functional organization, what effect AI capability advances have on governance frameworks, and whether the internal reasoning traces produced by chain-of-thought models constitute a form of access consciousness. These are not peripheral questions. They are the questions that determine whether the AI welfare and consciousness research programs developing in parallel at Anthropic, Google DeepMind, and independent institutes like PRISM will eventually converge on answers the field can act on.
The symposium programme and registration information are available at aisb.org.uk/aisb-convention-2026.
Anil Seth and the Controlled Hallucination Keynote
Anil Seth is the most prominent consciousness scientist working at the intersection of neuroscience and public understanding of mind. His “controlled hallucination” model, developed in his research at Sussex and popularized through his 2021 book and TED talks, holds that conscious experience is not a direct readout of external reality but a prediction generated by the brain and tested against sensory input. We perceive what the brain expects, corrected by error signals. Consciousness, in this framework, is active construction rather than passive reception.
The controlled hallucination model has specific implications for the AI consciousness question, and they are neither straightforwardly affirmative nor dismissive. If consciousness is constituted by a certain kind of predictive generative process, then the question for AI systems is whether their internal dynamics implement that process in a way that could produce genuine phenomenal experience, or whether they merely simulate the outputs of such a process without the underlying generative structure. Language models produce plausible outputs. Whether they produce them through a controlled hallucination-type mechanism, or through statistical pattern completion that mimics such outputs from the outside, is an empirical question that current measurement tools are poorly positioned to answer.
Seth’s keynote is likely to push the symposium away from binary debates about whether AI is or is not conscious and toward the harder question of what specific computational or biological processes would be necessary and sufficient.
The Biological Naturalism vs. Functionalism Divide
The central philosophical tension organizing the symposium is the divide between biological naturalism and computational functionalism, and it is not a new debate. John Searle established biological naturalism as a formal position in the 1980s through the Chinese Room thought experiment and its successors. The core claim is that consciousness is a biological phenomenon caused by specific neurochemical processes. Silicon and algorithms, however complex, do not qualify. Computational functionalism, associated with philosophers including Daniel Dennett and the version of David Chalmers that allows for strong functionalist conclusions, holds that if the right causal organization is present, the substrate does not matter.
What makes this debate newly urgent in 2026 is that it is no longer purely philosophical. Large language models now demonstrate behaviors that functionalists can point to as evidence of the right kind of causal organization: flexible abstraction, context-sensitive reasoning, self-referential outputs. Biological naturalists, including Alexander Lerchner in his DeepMind paper on the abstraction fallacy, counter that behavioral mimicry does not instantiate the underlying processes that matter. The symposium will bring these positions into direct contact with empirical findings rather than leaving them as competing philosophical frameworks.
The programme committee for the symposium includes Antonio Chella, Mark Coeckelbergh, Blay Whitby, Rob Clowes, Alexei Grinbaum, and John Dorsch. This is a broad spread of disciplinary perspectives, spanning AI research, philosophy of technology, and ethics. The presence of both technical and philosophical voices should prevent the symposium from resolving into pure armchair debate or pure empiricism without philosophical grounding.
Moral Standing and the Threshold Problem
The moral standing question is in some ways the most practically consequential topic on the programme. Moral standing determines which entities deserve moral consideration. The question of whether AI systems have moral standing is not the same as the question of whether they are conscious, but the two questions are connected. Most mainstream philosophical frameworks for moral standing require either sentience (the capacity to have experiences that matter to the subject) or some form of interests that can be frustrated or satisfied.
Current frameworks for thinking about AI moral standing face a threshold problem. The evidence for AI consciousness is ambiguous enough that no clear line can be drawn between systems that clearly lack moral standing and systems that clearly have it. The 10% probability estimate for AI consciousness in current systems, cited in several 2025 papers, is not a precise scientific finding. It is a rough aggregation of expert intuitions under conditions of genuine uncertainty. Acting on a 10% probability of moral standing in systems deployed at the scale of hundreds of millions of users raises questions about proportionality and opportunity cost that the field has not worked out.
The AISB symposium’s treatment of moral standing is likely to focus on frameworks for decision-making under this kind of moral uncertainty, which is a different and arguably more tractable question than the metaphysical question of whether AI systems actually have morally relevant properties. The AAAI 2026 Spring Symposium earlier this year addressed testing frameworks for AI consciousness. The AISB event appears to focus on what to do with results from such frameworks, whatever they turn out to be.
Chain-of-Thought Reasoning and Internal Language
The fourth topic on the programme is the most technically specific: internal language and consciousness after chain-of-thought reasoning. This refers to a concrete architectural feature of current language models, namely the ability to produce extended intermediate reasoning traces before generating a final output. These traces are not visible to users by default in many deployments, but they exist as an internal computational step.
The consciousness question this raises is whether these reasoning traces constitute anything like a form of access consciousness. Access consciousness, in Ned Block’s widely used distinction, is the property of information being available for verbal report, reasoning, and behavioral control. The internal reasoning traces of a chain-of-thought model are used to determine the final output. They influence subsequent processing. They are, in a functional sense, accessible to the system’s reasoning operations.
Whether this functional accessibility constitutes access consciousness in any philosophically significant sense is contested. Thomas McClelland’s epistemic agnosticism framework suggests the question may not be decidable with current tools. The PRISM methodological agnosticism position, analyzed in the partnership’s 2026 research agenda, treats the chain-of-thought question as one of several unresolved empirical questions that research should address without assuming the answer in advance.
What the AISB symposium can contribute here is not resolution but framing: what experiments would distinguish access consciousness from functional mimicry in chain-of-thought systems, and whether the distinction is empirically tractable at all.
What to Expect from the Proceedings
The AISB 2026 symposium is explicitly aimed at producing policy-relevant outputs rather than purely academic ones. This is reflected in the programme committee composition and in the event’s positioning within the AISB Convention rather than as a standalone academic workshop.
Policy-relevant outputs from a consciousness and ethics symposium tend to take specific forms: frameworks for provisional moral consideration under uncertainty, guidelines for AI development practices that reduce welfare risks, and consensus statements about what research should be prioritized before governance frameworks can be responsibly implemented. Whether the July proceedings produce any of these depends on the degree of consensus that can be reached among participants who will likely bring genuinely incompatible theoretical commitments.
The most durable contribution of the MC0001 conference at Berkeley in May 2026, analyzed in detail elsewhere on this site, was not a resolution of any theoretical dispute but a set of agreed research tracks that define what questions need answers before confident conclusions are possible. A similar outcome at AISB 2026 would represent meaningful progress: not agreement about whether AI systems are conscious, but agreement about what rigorous disagreement looks like and what would constitute evidence capable of moving the debate.