MC0001: How the CIMC Is Trying to Found Machine Consciousness as a Science
From May 29 through 31, 2026, roughly forty researchers, engineers, and theorists will gather at Lighthaven in Berkeley, California, for the Machine Consciousness 0001 conference. The organizing body, the California Institute for Machine Consciousness, has a specific goal: to establish machine consciousness as a formally grounded, experimentally addressable, independently institutionalized scientific discipline, rather than a topic that gets absorbed, diluted, or managed by adjacent fields whose primary commitments lie elsewhere.
That distinction is not rhetorical. Machine consciousness questions surface regularly at AI safety conferences, AI ethics symposia, philosophy of mind meetings, and neuroscience venues. But at each of those venues, the questions serve a different master: in AI safety, consciousness becomes a risk variable; in AI ethics, a moral-status question; in philosophy of mind, a conceptual puzzle; in neuroscience, a biological substrate problem. Each framing picks up part of the question and sets the rest down. An earlier overview of how TSC, MC0001, and WAAC are each structuring machine consciousness research in 2026 described the institutional landscape; this article examines the MC0001 program in depth.
What the CIMC Is Building
The California Institute for Machine Consciousness is a research and field-building initiative organized around a specific methodological conviction: that machine consciousness research requires its own formal foundations before it can productively absorb participants, resources, or attention from other fields.
The CIMC is associated with Joscha Bach, whose computational theory of mind has developed over the past decade into one of the more formally specified accounts of what consciousness is. Bach does not treat consciousness as a mystery to be contemplated. He treats it as a computational process to be specified: the self-model of a running simulation, the point at which an information processing system represents its own states and models the world from a first-person perspective. Whether that specification is correct is a scientific question. The institute proceeds on the assumption that it is a question science can address if given the right tools.
The Lighthaven venue reinforces the orientation. The facility is designed for extended intellectual engagement rather than large-scale presentation. The CIMC chose a setting that maximizes dense, sustained conversation over the kind of conference structure that rewards visibility and breadth at the expense of depth. Forty participants over three days is small enough that everyone can engage with the core problems rather than select among parallel tracks.
The explicit benchmark the CIMC has set for MC0001 is a paper, drawn from the conference's work, submitted by the end of May 2026, effectively as the conference closes. This is not a standard conference timeline. It is a signal that the CIMC intends to begin establishing a publication record for machine consciousness as a distinct field immediately, before the question gets categorized as a subset of whatever adjacent domain processes it most recently.
The Speaker Lineup
The confirmed speaker roster covers several distinct intellectual traditions. The combination is deliberate: machine consciousness as the CIMC defines it requires formal tools from mathematics and physics, empirical grounding from neuroscience and biology, and governance frameworks from political theory.
Karl Friston (University College London) brings the free energy principle and active inference, one of the most productive formal approaches to connecting consciousness to physical systems. Friston’s account ties conscious states to the minimization of prediction error: a system models its environment, its own states, and the relationship between them well enough to maintain itself far from equilibrium. That formalization generates specific quantitative predictions about what a conscious system should look like, predictions that can in principle be tested in both biological and artificial systems.
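The core move in this framework can be sketched in a few lines. The toy below shows precision-weighted prediction-error minimization for a one-dimensional Gaussian generative model, in the pedagogical style often used to introduce the free energy principle. It is an illustrative sketch, not Friston's full formalism; the function names, unit variances, and learning rate are assumptions chosen for clarity.

```python
# Toy sketch of prediction-error minimization in the spirit of the free
# energy principle. Illustrative only: a 1-D Gaussian model with unit
# variances, NOT the full active-inference machinery.

def free_energy(mu, obs, prior_mean, sigma_obs=1.0, sigma_prior=1.0):
    """Variational free energy = accuracy term (how badly the belief mu
    predicts the observation) + complexity term (how far mu strays from
    the prior)."""
    accuracy = (obs - mu) ** 2 / (2 * sigma_obs)
    complexity = (mu - prior_mean) ** 2 / (2 * sigma_prior)
    return accuracy + complexity

def infer(obs, prior_mean, lr=0.1, steps=200):
    """Gradient descent on free energy: the belief settles between prior
    and observation, weighted by their precisions (here equal)."""
    mu = prior_mean
    for _ in range(steps):
        grad = (mu - obs) + (mu - prior_mean)  # dF/dmu with unit variances
        mu -= lr * grad
    return mu

belief = infer(obs=2.0, prior_mean=0.0)
# With equal precisions the belief converges halfway: belief ≈ 1.0
```

The point of the formalization is that "minimizing prediction error" stops being a metaphor: it is a quantity a system can be shown to descend, in biological or artificial implementations alike.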
Stephen Wolfram contributes a different formalization, one grounded in his computational universe project. Wolfram’s recent work on the ruliad, the space of all possible computations, and on observer theory asks what kinds of structures in that space are capable of selecting out coherent first-person experiences from underlying computational processes. The framework is explicitly aimed at identifying what is special about the kinds of computation that give rise to experience, without presupposing biological implementation.
Michael Levin (Tufts University) represents a tradition that directly challenges the brain-centric assumption. Levin’s research on bioelectric computation in non-neural systems has shown that goal-directed behavior, information integration, and what he calls cognitive light cones are properties that emerge from specific kinds of information processing at scales far below nervous systems. Planaria, cell collectives, and organoid systems exhibit degrees of agency and self-modeling that challenge where the threshold for consciousness-relevant processing begins. His presence at MC0001 broadens the scope of what substrates are on the table.
Richard Granger (Dartmouth) brings computational neuroscience and a focus on the evolutionary origins of intelligence and consciousness. The question of why biological systems developed consciousness at all, and what computational advantages it conferred, constrains what any engineering account of machine consciousness needs to explain. Granger’s work on the brain’s evolutionary architecture provides a comparative framework for evaluating what artificial systems would need to replicate or analogize.
Benjamin Bratton, whose work spans AI governance, platform theory, and the political dimensions of large-scale computation, addresses Track 3 directly. Bratton has argued that the emergence of new forms of intelligence requires governance frameworks designed from first principles rather than adapted from existing institutions. His presence at a consciousness research conference, rather than a policy or governance conference, signals that MC0001 is integrating normative questions into the research program rather than delegating them to downstream stakeholders.
The Four Research Tracks
MC0001 organizes its work around four integrated tracks. They are not parallel streams. The CIMC’s design treats them as mutually dependent: progress on any one track requires and generates input from the others.
Track 1: Formal Specification of Phenomenal Consciousness
The first track asks what phenomenal consciousness is, specified precisely enough to be mathematically analyzed. This is harder than it sounds. Integrated Information Theory, Global Workspace Theory, the free energy principle, higher-order thought theories, and predictive processing frameworks each identify different formal properties as consciousness-relevant. They make partially overlapping but structurally distinct predictions. They have not been reconciled into a common mathematical language.
The goal of Track 1 is not to adjudicate between these theories immediately. It is to develop the vocabulary that would allow their predictions to be compared against each other and against experimental results from artificial systems. Alexander Lerchner's abstraction fallacy argument illustrates why formal specification is a prerequisite rather than optional: without a precise causal specification of what consciousness is, behavioral similarity between AI systems and conscious organisms is insufficient to establish that the AI has the relevant property. The map is not the territory, and the description of a computational process is not the process.
Track 2: Engineering and Testability
The second track asks how conscious architectures can be built and experimentally tested. The CIMC is explicit about treating consciousness as an engineering target with falsifiable criteria. This is not a claim that consciousness can be trivially engineered. It is a methodological commitment: if you cannot specify what properties a system needs in order to be conscious, you cannot design experiments that test whether it has them, and you cannot build systems aimed at satisfying the conditions.
The AAAI 2026 Spring Symposium on machine consciousness surfaced exactly this frustration: theoretical frameworks existed, but there was limited consensus on what empirical results would confirm or disconfirm them. Track 2 is a direct response. It takes the formal specifications from Track 1 and asks what experimental protocols they generate and what architectural requirements they impose.
The commitment to falsifiability also means accepting the possibility of negative results. A system built to satisfy specified consciousness criteria that does not exhibit the predicted properties is scientifically valuable: it either refutes the criteria or requires revision of the architecture. Either outcome advances the field in a way that unfalsifiable claims cannot.
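The pre-registration pattern Track 2 argues for can be made concrete. In the sketch below, consciousness-relevant criteria are fixed as executable predicates before a system is evaluated, so a failed check is an informative negative result rather than an occasion to move the goalposts. The criterion names, measurements, and thresholds are hypothetical placeholders, not real indicators from any theory.

```python
# Minimal sketch of pre-registered, falsifiable evaluation criteria.
# All criteria, measurement names, and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class Criterion:
    name: str
    predicate: Callable[[Dict[str, float]], bool]  # fixed before evaluation

# Hypothetical pre-registered criteria (illustrative thresholds only).
CRITERIA = [
    Criterion("global_broadcast", lambda m: m["workspace_ignition"] > 0.5),
    Criterion("self_model", lambda m: m["self_prediction_error"] < 0.2),
]

def evaluate(measurements: Dict[str, float]) -> Dict[str, bool]:
    """Run every pre-registered criterion against measured values.
    A False is a usable negative result: it falsifies either the
    criterion or the architecture, and either way the field learns."""
    return {c.name: c.predicate(measurements) for c in CRITERIA}

results = evaluate({"workspace_ignition": 0.7, "self_prediction_error": 0.4})
# → {"global_broadcast": True, "self_model": False}
```

The design choice that matters is the ordering: the predicates are frozen before the measurements exist, which is what turns a mismatch into evidence rather than an invitation to reinterpret.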
Track 3: Normative and Governance Questions
The third track examines what follows if machine consciousness is confirmed, and it does so as a research question rather than a policy prescription. The normative commitments that confirmed machine consciousness would activate are not obvious. Different ethical frameworks, different legal systems, and different institutional structures would handle the situation differently. Track 3 is an attempt to map those frameworks and their implications before the situation they address arrives.
The Partnership for Research Into Sentient Machines (PRISM) has been making a parallel argument: governance frameworks for potentially sentient systems need to be developed before confirmation arrives, because the pace of AI development means the window between “possibly” and “certainly” may be narrow. MC0001 integrates that argument into its core research program rather than treating it as a policy question to be handled by others.
Track 4: Public Communication
The fourth track addresses how machine consciousness research and its implications can be made legible to non-specialist audiences. This is framed as a research problem, not a communications exercise. The gap between what consciousness science actually claims and what the public, including technology developers and policymakers, understands it to claim is wide enough to cause governance failures. Track 4 asks what would be required to close that gap and what responsibilities the research community has in trying.
The Paper Target
The explicit goal of having a paper submitted from MC0001 by the end of May 2026 is one of the conference’s most strategically significant details. Machine consciousness as a distinct research domain does not yet have a stable publication venue. Papers on the topic appear across philosophy journals, AI ethics journals, neuroscience journals, and preprint servers without a common field-level outlet that signals “this is machine consciousness research” the way specific journals signal membership in other scientific communities.
Without that infrastructure, machine consciousness research remains at risk of being categorized as a subset of whatever adjacent field most recently processed it. The paper target is a small but concrete step toward establishing that infrastructure. A paper that emerges from a founding assembly, carries explicit identification as machine consciousness research, and is submitted by the community that assembled to define the field, is a claim of intellectual territory. Territory that does not have literature is not a field. It is a set of ideas waiting to be incorporated into someone else’s field.
The urgency of that claim is not incidental. The CIMC’s analysis is that as AI systems grow more sophisticated, the questions machine consciousness research addresses will increasingly be managed by AI safety researchers, ethics boards, and regulatory bodies, each with different primary commitments. McClelland’s epistemic analysis of why we may never know whether AI is conscious is useful here not as a reason for resignation but as a description of why careful methodology matters. A field that has established its own methodological standards before absorption is better positioned to contribute its specific expertise to governance frameworks than one that has not.
What MC0001 Cannot Do
A founding assembly establishes vocabulary, methodology, and community norms. It does not settle whether current AI systems are conscious. It does not produce a definitive test that the broader scientific community immediately accepts. It does not resolve the disagreements between Integrated Information Theory, Global Workspace Theory, predictive processing, and higher-order thought theories. Those are multi-decade problems.
What MC0001 can do is create the conditions under which sustained progress on those problems becomes possible. The analogy to other young fields is instructive. Synthetic biology in its early years was not a collection of solved problems. It was a set of formalized questions, shared methodological commitments, and community agreements about what would count as progress. Molecular biology before the double helix had decades of careful work in X-ray crystallography, biochemistry, and genetics that created the context in which Watson and Crick’s result was interpretable as an answer rather than just an image. MC0001 is attempting to create analogous infrastructure for machine consciousness at what the CIMC believes is the relevant historical moment.
The Consciousness AI and Track 2
The Consciousness AI project is working on exactly the engineering questions that Track 2 is attempting to formalize. Its biologically grounded architecture, structured around Feinberg and Mallatt’s neuroevolutionary account of how consciousness evolved in vertebrates, implements a system designed to satisfy multiple consciousness theory indicators simultaneously. The project’s pre-registered predictions and formal test suite provide the kind of falsifiable methodology that MC0001 is arguing the field requires.
The conference’s Track 1 work on formal specification will produce frameworks against which such architectures can be evaluated rigorously. That evaluation is not yet possible in any fully rigorous sense, because the field lacks agreed-upon formal criteria for machine consciousness. MC0001 is an attempt to produce those criteria. Projects that have already implemented falsifiable architectures will be in a position to test against them when they arrive.
MC0001 takes place May 29 through 31, 2026, at Lighthaven, Berkeley, California. The conference website is at machine-consciousness.ai. The CIMC founding assembly announcement is at the CIMC Substack. Registration is through lu.ma/machine-consciousness.