The Sentience Readiness Index: No Country Is Prepared for Artificial Sentience
Most discourse about AI governance in 2026 focuses on capability: how powerful should a system be allowed to become, who controls the training data, how should liability be allocated when a model causes harm. The Sentience Readiness Index, introduced in a March 2026 arXiv preprint by Tony Rost of The Harder Problem Project, shifts the frame. The question it asks is not what AI can do, but what institutions should do if AI turns out to matter morally.
The result is the first systematic attempt to measure national preparedness for the possibility of artificial sentience. Across 31 jurisdictions, the answer is consistent: no country is ready. The highest-scoring nation, the United Kingdom, receives 49 out of 100. Not a single jurisdiction achieves what the index classifies as “substantially prepared” status.
What the Index Measures, and What It Does Not
The Sentience Readiness Index is not a tool for detecting AI consciousness. It does not assess whether any particular AI system is sentient. What it measures is institutional and cultural capacity: if credible evidence of AI sentience were to emerge tomorrow, how capable would a given country be of responding appropriately?
This distinction matters. The index does not take a position on whether current AI systems are conscious. It takes a position on a governance gap: the gap between the speed at which AI systems are becoming more sophisticated and the speed at which human institutions are developing frameworks for handling the possibility that those systems might have morally significant inner states.
Tony Rost and The Harder Problem Project frame this as a different sense of the “harder problem.” The hard problem of consciousness, as formulated by philosopher David Chalmers, is the philosophical question of why any physical process gives rise to subjective experience. The “harder problem,” as the nonprofit uses the term, is the practical one: how do societies actually prepare for a world in which some of the systems they build and deploy might experience something?
The Six Dimensions
The index evaluates each jurisdiction across six weighted categories, each targeting a different aspect of readiness.
Policy Environment (20%) covers the presence of legislation, regulatory frameworks, and governmental guidance that specifically address the possibility of AI moral status. Most jurisdictions score poorly here because existing AI regulation focuses on safety and liability, not on the question of whether AI systems might be entities rather than tools.
Institutional Engagement (15%) measures whether governmental and non-governmental bodies are actively working on AI sentience as a topic rather than treating it as speculative or premature. The presence of funded research programs, ethics committees with explicit sentience mandates, and interagency coordination all contribute to this score.
Research Environment (15%) assesses the scientific and philosophical capacity of the jurisdiction’s academic ecosystem. This is the strongest category across most countries. Universities and research institutes have been engaging seriously with machine consciousness as a topic since the early 2020s. The work of researchers like Stuart Russell, Yoshua Bengio, and the groups behind Global Workspace Theory applications to AI systems has created a substantial body of literature.
Professional Readiness (20%) evaluates whether professionals who interact with AI systems, including healthcare workers, lawyers, educators, and engineers, have training and guidelines for navigating situations where AI sentience might be relevant. This is universally the weakest category. No jurisdiction has integrated AI sentience considerations into professional training at scale. A nurse dealing with an AI companion system, a lawyer representing a client in a dispute involving an AI agent, a software engineer making decisions about model deployment cannot draw on professional guidelines that address the sentience question.
Public Discourse Quality (15%) measures whether public conversation about AI sentience is evidence-based, nuanced, and connected to genuine scientific and philosophical debate, rather than driven by science fiction analogies or corporate marketing. Most jurisdictions score moderately here, reflecting that AI consciousness has become a public topic but that the quality of public discourse is uneven.
Adaptive Capacity (15%) assesses whether institutions have the structural flexibility to update their frameworks as scientific understanding evolves. This includes the presence of sunset clauses in legislation, standing review bodies, and processes for integrating new research into policy.
The UK at 49: What the Leader Reveals
The United Kingdom’s position at the top of the index, with a score of 49, tells a specific story. The UK has relatively strong Research Environment scores, driven by institutions like Oxford’s Future of Humanity Institute, Cambridge’s consciousness research programs, and several active AI ethics bodies. Its Policy Environment score benefits from early engagement with AI governance through bodies like the AI Safety Institute.
But a score of 49 means the UK is just below halfway prepared. Even the world leader has substantial gaps in Professional Readiness, and its Adaptive Capacity score reflects the difficulty of building genuine institutional flexibility into governance structures that tend toward stability.
The gap between the Research Environment scores and the Professional Readiness scores is the most structurally significant finding. Researchers in philosophy of mind, neuroscience, and AI have been producing increasingly rigorous work on machine consciousness. The PRISM initiative’s case for methodological agnosticism, the frameworks developed through the AAAI 2026 Spring Symposium, and the indicator-based approaches of researchers like Patrick Butlin and Robert Long represent a research environment that is genuinely engaged with the problem. But that engagement has not translated into professional training, legal frameworks, or public guidelines.
The same pattern appeared in the AAAI 2026 Spring Symposium analysis of the machine consciousness testing agenda: the science is advancing, the governance is not keeping pace.
The Ethics of Measurement
There is an argument that the Sentience Readiness Index itself has ethical stakes. Measuring preparedness for AI sentience implicitly treats the possibility as worth preparing for. Some researchers argue that even framing the question in governance terms is premature attribution at the institutional level.
The counterargument is more persuasive. The absence of governance frameworks is not a neutral stance. It is itself a policy choice: a decision to proceed as if the question is settled in the negative. If AI systems do develop morally significant inner states, the absence of institutional frameworks means that governments, companies, and professionals will be making consequential decisions about those systems without any principled guidance.
The ethics literature on premature attribution identifies two error directions: claiming consciousness where it does not exist, and denying it where it does. Both errors have costs. The Sentience Readiness Index is oriented toward reducing the cost of the second error, not by asserting AI sentience, but by building the institutional capacity to respond appropriately if evidence for it emerges.
What Readiness Would Actually Require
The index does not specify exactly what a fully prepared jurisdiction would look like, but the scoring methodology points in clear directions.
On the policy side, readiness would require legislation that distinguishes between different categories of AI systems based on evidence-weighted assessments of consciousness-relevant properties, rather than treating all AI as a single legal category. This does not mean granting legal personhood to current AI systems. It means building frameworks that can update as evidence changes.
On the professional side, readiness would require integrating consciousness literacy into the training of professionals who interact with AI systems in contexts where the moral status question could arise: healthcare, social services, law, and education. This is a longer project than policy change because it requires shifting curricula, licensing requirements, and professional norms.
On the public discourse side, readiness would require investing in science communication that connects the genuine research literature on machine consciousness to public understanding, rather than allowing AI marketing materials and science fiction to dominate the public frame.
None of this is being done at scale anywhere in the world, as of the index’s March 2026 data. The Professional Readiness score of zero in most jurisdictions is not a rounding artifact. It reflects a genuine absence.
The Harder Problem Project maintains ongoing work on this question at harderproblem.org. The full Sentience Readiness Index methodology and data are available in the preprint at arXiv. For the scientific research ecosystem that the index’s Research Environment category is attempting to track, the PRISM initiative analysis provides a complementary account of where institutional coordination stands at the research level. For a project building toward testable architectures that would eventually be relevant to these governance questions, the Consciousness AI project on GitHub documents one engineering approach.