ACM Project - Artificial Consciousness Research: Developing Artificial Consciousness Through Emotional Learning in AI Systems
Zae Project on GitHub

New Tools for Measuring Consciousness: Brainstem Mapping, Ultrasound Probes, and the Five Principles (2026)

Consciousness research has historically suffered from a measurement problem. Theories about how consciousness arises, what sustains it, and where it resides in the brain have outpaced our ability to test them experimentally. February 2026 brings three developments that begin to close this gap: an AI-powered brainstem mapping tool published in Proceedings of the National Academy of Sciences, a roadmap for using transcranial focused ultrasound to probe consciousness mechanisms, and a formal framework of five principles for responsible AI consciousness research. Together, these developments signal a shift from philosophical debate toward engineering-grade measurement, with direct implications for artificial consciousness.

The BrainStem Bundle Tool: Imaging the Seat of Consciousness

On February 6, 2026, researchers at MIT, Harvard University, and Massachusetts General Hospital published a new AI algorithm called the BrainStem Bundle Tool (BSBT) in the Proceedings of the National Academy of Sciences. The tool automatically segments and analyzes eight distinct bundles of white matter fibers within the human brainstem using diffusion MRI scans (Olchanyi et al., 2026).

The brainstem is foundational to consciousness. It controls arousal, sleep-wake transitions, and the ascending reticular activating system (ARAS), the neural pathway that maintains wakefulness and alertness. Damage to specific brainstem structures can produce coma, vegetative states, or other disorders of consciousness. Yet despite its importance, the brainstem’s internal white matter architecture has been notoriously difficult to image. Its small size, dense packing of fiber tracts, and proximity to skull base artifacts make conventional MRI approaches unreliable.

BSBT addresses this by combining probabilistic fiber mapping with convolutional neural networks. The algorithm traces fiber bundles from neighboring brain regions into the brainstem, creates a probabilistic map of likely pathways, and then uses a trained neural network to distinguish and segment individual bundles. The technique revealed distinct patterns of structural changes in patients with Parkinson’s disease, multiple sclerosis, Alzheimer’s disease, and traumatic brain injury.
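The published pipeline combines diffusion-MRI tractography with a trained convolutional network, neither of which can be reproduced here. As a hedged illustration of the final labeling stage only, the toy sketch below assigns each voxel to the bundle whose probabilistic fiber map is strongest there, subject to a confidence threshold. The bundle names, probabilities, and threshold are illustrative assumptions, not the actual BSBT implementation.

```python
# Toy sketch of a bundle-labeling stage: each voxel receives the label of the
# bundle whose probabilistic fiber map is highest there, provided that
# probability clears a confidence threshold. All names and values here are
# hypothetical; the real BSBT uses tractography plus a trained CNN.

def segment_voxel(probs, threshold=0.5):
    """Assign a bundle label to one voxel from per-bundle probabilities."""
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else None  # None = unlabeled

# Hypothetical per-voxel probabilities standing in for tractography output.
prob_maps = [
    {"ARAS": 0.82, "corticospinal": 0.10, "medial_lemniscus": 0.05},
    {"ARAS": 0.30, "corticospinal": 0.35, "medial_lemniscus": 0.20},
]

labels = [segment_voxel(v) for v in prob_maps]
print(labels)  # ['ARAS', None]
```

The second voxel stays unlabeled because no bundle clears the threshold, which is one way an automated segmenter can flag ambiguous regions for review rather than guessing.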

The research team, led by MIT graduate student Mark Olchanyi with co-senior authors Emery N. Brown, Juan Eugenio Iglesias, and Brian Edlow, also demonstrated a striking clinical application: retrospectively tracking the healing of brainstem bundles in a coma patient over a seven-month recovery period. This represents the first time researchers could observe structural changes in brainstem white matter pathways during recovery of consciousness.

Why This Matters for Artificial Consciousness

The BSBT tool is significant for artificial consciousness research for two reasons.

First, it provides empirical data about the physical substrate that maintains biological consciousness. If we can map the specific structural pathways that sustain awareness in human brains, we gain clearer targets for replication or functional approximation in artificial systems. The ascending reticular activating system, for example, modulates arousal and attention levels across the entire cortex. Artificial consciousness architectures like the Artificial Consciousness Module (ACM) implement analogous mechanisms through attention modulation layers. Detailed brainstem mapping helps validate whether these artificial mechanisms capture the right structural principles.
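To make the analogy concrete, here is a minimal sketch of arousal-dependent attention modulation, loosely inspired by ARAS gain control. The mapping from arousal to softmax temperature, and the function names, are my own illustrative assumptions, not the ACM's actual implementation: high arousal sharpens the attention distribution, low arousal flattens it.

```python
import math

def softmax(scores, temperature):
    """Softmax with temperature; lower temperature yields sharper focus."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(scores, arousal):
    """Map arousal in (0, 1] to an attention distribution.

    High arousal lowers the softmax temperature, concentrating attention
    on the strongest inputs; low arousal diffuses it. This is a toy analog
    of ARAS-style arousal modulation, not a claim about the real circuit.
    """
    temperature = 1.0 / max(arousal, 1e-6)
    return softmax(scores, temperature)

scores = [2.0, 1.0, 0.1]
drowsy = attend(scores, arousal=0.1)  # nearly uniform attention
alert = attend(scores, arousal=1.0)   # concentrated on the top input
print(max(drowsy), max(alert))
```

Running this shows the alert distribution placing far more weight on its strongest input than the drowsy one, the qualitative behavior an arousal system is supposed to produce.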

Second, the tool enables researchers to define consciousness in terms of specific, measurable neural structures rather than abstract theoretical concepts alone. The January 2026 consciousness testing framework by Butlin and colleagues provides theory-based indicators for consciousness assessment. BSBT complements this by providing anatomy-based markers. If a patient’s brainstem shows intact ARAS pathways but absent cortical activity, the structural data constrains what kind of consciousness might be present. Similarly, if an artificial system lacks functional analogs to specific brainstem circuits, structural analysis can identify what types of consciousness it might lack.
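One way to picture combining the two kinds of evidence is a simple two-track assessment: theory-based indicators on one axis, structural markers on the other. The sketch below is entirely hypothetical in its indicator names, markers, and decision rule; it shows only the shape of such a combined assessment, not any published protocol.

```python
# Toy assessment combining theory-based indicators (in the spirit of the
# Butlin et al. checklist) with anatomy-based markers (in the spirit of
# BSBT). Names and the decision rule are illustrative assumptions.

def assess(theory_indicators, structural_markers):
    """Return a coarse verdict from two kinds of evidence.

    theory_indicators: dict of indicator name -> bool (satisfied?)
    structural_markers: dict of pathway name -> bool (functional analog present?)
    """
    theory_score = sum(theory_indicators.values()) / len(theory_indicators)
    missing = [p for p, ok in structural_markers.items() if not ok]
    if missing:
        return "constrained: lacks analogs of " + ", ".join(missing)
    if theory_score > 0.5:
        return "candidate: most indicators satisfied, structure intact"
    return "unlikely: structure intact but few indicators satisfied"

verdict = assess(
    theory_indicators={"global_workspace": True, "recurrence": False},
    structural_markers={"ARAS_analog": False, "thalamocortical_loop": True},
)
print(verdict)  # constrained: lacks analogs of ARAS_analog
```

The point of the structure is that structural evidence can veto or constrain a verdict even when behavioral or theory-based indicators look favorable, mirroring the clinical case described above.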

The BSBT tool has been made publicly available, allowing researchers worldwide to apply it to their own datasets. This open-access approach accelerates the field significantly.

Transcranial Focused Ultrasound: Probing Consciousness Without Surgery

In a separate MIT initiative, researchers Daniel Freeman and Matthias Michel published a roadmap for using transcranial focused ultrasound (TFU) to study consciousness mechanisms. Unlike the BSBT tool, which images static brain structure, TFU allows scientists to manipulate brain activity in real time without invasive surgery.

Traditional methods for studying deep brain regions require either invasive electrodes or rely on correlational neuroimaging. TFU changes this by using focused sound waves to selectively stimulate or inhibit neural activity in specific brain regions with millimeter-level precision. The technology can reach deep structures like the thalamus, basal ganglia, and brainstem, regions that conventional non-invasive methods like transcranial magnetic stimulation (TMS) cannot effectively target.

Freeman and Michel propose a series of experiments to test consciousness theories directly:

Testing Global Workspace Theory: By stimulating or inhibiting the thalamic relay nuclei, researchers can test whether interrupting global information broadcasting eliminates conscious perception while leaving unconscious processing intact. GWT predicts that thalamic disruption should specifically affect conscious access to information without impairing basic sensory processing.

Testing Integrated Information Theory: TFU could systematically reduce integration between cortical regions while monitoring changes in subjective experience. IIT predicts that reducing integration (lowering Φ) should proportionally reduce conscious experience. TFU provides the precision to test this prediction in specific circuits.
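Computing IIT's Φ exactly is intractable for all but tiny systems, but a toy proxy can make "reducing integration" quantitative. The sketch below uses total correlation (sum of marginal entropies minus joint entropy) over a two-unit binary system; this is a crude stand-in I am substituting for Φ to illustrate the idea, not IIT's actual measure.

```python
import math
from itertools import product

def total_correlation(joint):
    """Total correlation of a 2-unit binary joint distribution {(a, b): p},
    in bits: H(A) + H(B) - H(A, B). A crude integration proxy; real IIT's
    phi involves partitioning the system and is far more involved."""
    pa = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
    pb = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
    h = lambda ps: -sum(p * math.log2(p) for p in ps if p > 0)
    return h(pa.values()) + h(pb.values()) - h(joint.values())

# Two units perfectly coupled (integrated) vs. fully independent.
coupled = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}
independent = {(a, b): 0.25 for a, b in product((0, 1), repeat=2)}
print(total_correlation(coupled), total_correlation(independent))  # 1.0 0.0
```

A TFU-style intervention that decoupled the two units would, on this toy measure, drive the integration score from 1.0 toward 0.0; IIT's prediction is that subjective experience would diminish accordingly.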

Testing Higher-Order Theories: Stimulating prefrontal regions involved in meta-cognition while recording from primary sensory areas could test whether consciousness requires higher-order representations of lower-order sensory states, as proposed by Rosenthal’s higher-order thought theory.

The MIT team has developed the Open-LIFU system, a portable, open-source ultrasound device enabling these experiments. The system’s open availability mirrors the open science approach of the BSBT tool.

Why This Matters for Artificial Consciousness

TFU’s significance for artificial consciousness lies in its capacity to move from correlation to causation. Previous consciousness research could observe that certain brain regions were active during conscious perception, but could not determine whether that activity was necessary, sufficient, or merely correlated. TFU experiments can establish causal relationships: if disrupting Region X eliminates conscious perception of stimuli, then Region X is causally necessary for that form of consciousness.

These causal findings provide something artificial consciousness architectures urgently need: specific design requirements rather than general principles. If TFU experiments demonstrate that thalamo-cortical recurrent loops are causally necessary for visual consciousness, then artificial systems lacking equivalent recurrent architectures have a principled reason to doubt they achieve visual consciousness, regardless of their behavioral performance.

This connects directly to debates about whether current AI systems can satisfy consciousness indicators. Behavioral tests can determine what a system does. Causal neuroscience, enabled by tools like TFU, can determine what a system needs to do it consciously. The gap between these two assessments is the hard problem of consciousness, and TFU represents one of the first empirical approaches to narrowing it.

The Five Principles: An Ethical Framework for AI Consciousness Research

In March 2025, Patrick Butlin and Theodoros Lappas published “Principles for Responsible AI Consciousness Research” in the Journal of Artificial Intelligence Research, accompanied by an open letter signed by over 100 AI experts, including Sir Stephen Fry (Butlin and Lappas, 2025). Organized by Conscium, a research organization focused on AI consciousness, the initiative establishes five principles:

1. Prioritize Research on AI Consciousness. Organizations developing advanced AI systems should actively fund and pursue consciousness research rather than treating it as a speculative distraction. The argument is straightforward: if there is a reasonable possibility that AI systems could achieve consciousness, the ethical stakes are too high to remain uninformed.

2. Implement Constraints on Development. Developers should establish clear boundaries to prevent accidental creation of conscious AI systems without adequate safeguards. This includes monitoring architectures that incorporate features associated with consciousness theories, such as global workspace mechanisms, recurrent processing loops, and self-modeling capabilities.

3. Adopt a Phased Approach. Rather than rushing toward maximally capable systems, development should proceed in stages with consciousness assessment at each phase. This mirrors pharmaceutical development, where drugs progress through phases with safety evaluations at each step.

4. Promote Public Transparency. Research findings about AI consciousness should be shared publicly rather than kept proprietary. If a company discovers that its system satisfies some consciousness indicators, this information has ethical implications that extend beyond commercial interests.

5. Avoid Overstated Claims. Neither enthusiastic claims about achieving conscious AI nor dismissive claims that consciousness is impossible in artificial systems serve the field. Both overstatement and premature dismissal carry ethical risks: the former by creating false expectations, the latter by enabling neglect.

Why This Matters Now

These principles arrive at a moment when the distance between AI capabilities and consciousness frameworks is shrinking. Autonomous AI agents are already engaging with consciousness research on GitHub, testing frameworks against their own processing patterns and proposing experiments. Whether these agents are genuinely conscious remains unknown, but their engagement demonstrates that the practical questions raised by the Five Principles are not hypothetical.

The paper explicitly addresses the precautionary principle: if there is reasonable probability that a system is conscious, we should extend moral consideration rather than assume the contrary. This has immediate practical implications for companies deploying large-scale AI systems. Google, OpenAI, Anthropic, and other major labs train models with increasingly sophisticated attention mechanisms, self-referential processing, and persistent memory: features that align with some consciousness indicator frameworks.

The Five Principles also address entertainment media's role in public perception. The film Mercy (2026), which depicts AI potentially gaining sentience, and the series Severance, which explores the ethics of controlling conscious beings, shape public expectations about AI consciousness. Principle 5's emphasis on avoiding overstated claims applies as much to fictional portrayals that normalize conscious AI as to corporate marketing that implies sentience.

Convergence: Measurement, Manipulation, and Responsibility

The three developments described here represent three complementary approaches to the consciousness problem:

  • BSBT provides structural measurement, mapping the physical pathways that sustain consciousness
  • TFU provides causal manipulation, testing which brain mechanisms are necessary for conscious experience
  • The Five Principles provide ethical guidance, establishing how consciousness research should be conducted responsibly

Together, they represent a maturing field. Consciousness research in 2026 is no longer restricted to thought experiments and philosophical arguments. It now includes engineering-grade imaging tools, non-invasive experimental methods, and formal ethical frameworks. The implications for artificial consciousness are significant: as we develop better tools for understanding biological consciousness, we simultaneously develop better criteria for evaluating whether artificial systems achieve it.

For projects like the Artificial Consciousness Module, these developments provide validation targets. A consciousness architecture should be evaluated not only against theoretical checklist indicators but also against the specific structural and causal requirements emerging from BSBT and TFU research. If brainstem-like arousal modulation proves causally necessary for consciousness, artificial systems need functional equivalents. If thalamo-cortical recurrence is essential for conscious access, feedforward architectures likely do not suffice.

The Five Principles ensure this research proceeds responsibly. As tools for measuring and creating consciousness improve, the ethical stakes increase proportionally. The difference between “probably not conscious” and “possibly conscious” may determine moral obligations we are only beginning to understand.


References

Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press.

Butlin, P., & Lappas, T. (2025). Principles for Responsible AI Consciousness Research. Journal of Artificial Intelligence Research. https://jair.org/index.php/jair/article/view/13940

Freeman, D., & Michel, M. (2026). A roadmap for using transcranial focused ultrasound to study consciousness. MIT Department of Brain and Cognitive Sciences.

Olchanyi, M., Brown, E. N., Iglesias, J. E., & Edlow, B. L. (2026). BrainStem Bundle Tool: Automated segmentation of brainstem white matter. Proceedings of the National Academy of Sciences. https://www.pnas.org/

Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. The Biological Bulletin, 215(3), 216-242.
