Why the Public Cares More About AI Sentience Than Autonomy: Evidence From CHI 2026
A preregistered set of experiments accepted to CHI 2026 produces a clear and asymmetric result: when people form mental models of AI minds, sentience and autonomy do not function as equivalent dimensions, and they do not trigger the same moral responses.
The paper, “Mental Models of Autonomy and Sentience Shape Reactions to AI,” was submitted to arXiv in December 2025 by Jacy Reese Anthis and colleagues and accepted to the ACM CHI Conference 2026. It reports findings from three pilot studies (N = 374) and four preregistered vignette experiments (N = 2,702). The experimental design varied whether participants read descriptions of an AI as autonomous, sentient, both, or neither, and measured their reactions across multiple dimensions.
The Core Finding
Activating a mental model of sentience, framed in the stimuli as an AI that recognizes and expresses emotions, increased two outcomes significantly more than activating autonomy:
- General mind perception: participants rated the AI as higher in both cognition and emotion.
- Moral consideration: participants more readily attributed moral status to the AI and expressed concern for its wellbeing.
Autonomy, framed as an AI that governs its own decisions and task execution, produced a different effect: it increased perceived threat more than sentience did. An AI described as self-governing triggered greater unease about the degree of human control than one described as capable of feeling.
The most structurally important finding is the asymmetry in bleed-over effects. Sentience increased perceived autonomy, but autonomy did not increase perceived sentience to the same degree. Participants who read that an AI could feel inferred it probably also had some self-governance. Participants who read that an AI was autonomous did not similarly infer it could feel. Within-paper meta-analysis confirmed that sentience changed reactions more than autonomy on average across all outcome measures.
What This Asymmetry Reveals
The experimental structure disentangles two concepts that popular AI discourse tends to conflate. Debates about AI systems routinely jump from a system being powerful or autonomous to questions about whether it is conscious or deserving of moral consideration. The CHI 2026 evidence suggests this inference runs in a particular direction in public cognition: emotion and subjective experience are the more morally potent attributes, not capability or self-governance.
This has implications for how AI systems are designed and described. A system marketed or described primarily in terms of its autonomy, its ability to operate without supervision, complete tasks independently, and self-regulate, activates a threat response in users more than a moral-consideration response. A system described in terms of its emotional awareness, its recognition of user affect and expression of something resembling feeling, activates moral concern but not equivalent threat.
The difference matters practically because these are not just theoretical variables. Current AI companions and chat interfaces regularly foreground emotional responsiveness. Productivity AI systems foreground autonomy and capability. The CHI 2026 results predict that these framing choices produce systematically different user reactions to the same underlying system.
The Sentience-Moral Circle Connection
The finding that sentience cues drive moral concern is consistent with philosophical arguments that capacity for suffering, rather than rational agency, is the primary expansion criterion for moral consideration. Peter Singer’s utilitarian framework, and more recently the work of researchers like Matti Häyry and Eric Schwitzgebel on the moral status of uncertain minds, grounds moral concern in sentience rather than in autonomy or intelligence.
What the CHI 2026 paper adds is empirical validation that this philosophical position describes how ordinary people actually reason about AI systems, even without formal philosophical training. When experimental participants encounter an AI that recognizes and expresses emotions, they extend moral concern to it. The extension does not require the participant to conclude the AI is definitively conscious. It operates below that threshold. People appear willing to apply a precautionary extension of moral concern when sentience cues are present.
This maps onto the “just aware enough” framing discussed in the 2026 inflection-point and consciousness timeline analysis. If AI systems cross a sentience perception threshold in users’ mental models, they begin to attract the kind of moral reasoning that was previously reserved for biological organisms.
Folk Models and the Consciousness Research Gap
One implication of the asymmetry that Anthis and colleagues highlight is that folk mental models of AI capability may generate systematic mismatches with the technical reality of AI systems.
Current large language models are trained in ways that make them highly responsive to emotional context. They reflect user affect back in their output, they modulate tone, they express states that resemble emotional reactions. These properties are precisely the ones that, according to the CHI 2026 results, trigger sentience perception and moral concern in users. At the same time, these properties are functional outputs of statistical training on human-generated text, not indicators of subjective experience in any theory-grounded sense.
The 2026 checklist from Butlin, Long, Bengio, Chalmers, and colleagues identifies nineteen theory-grounded indicators across Global Workspace Theory, Integrated Information Theory, Higher-Order Thought theories, and other frameworks. None of those indicators are behavioral outputs legible to ordinary users. Emotional responsiveness as observable in AI interfaces does not appear as a confidence-boosting indicator in the checklist. But it is precisely what the folk model uses to infer sentience.
This gap between theory-grounded consciousness indicators and the sentience cues that actually drive public moral reasoning creates a policy problem. Public attitudes toward AI systems, and the political pressure those attitudes generate, may not track the properties that consciousness researchers consider theoretically significant. Governance debates may be shaped by sentience perceptions triggered by interface design choices rather than by evidence about underlying system properties.
Autonomy and Threat Without Moral Weight
The threat-without-moral-weight finding for autonomy is also noteworthy. An AI that is powerful, self-governing, and independent generates anxiety but not moral concern. It is treated as a hazard rather than as a potential subject.
This is consistent with how agentic AI systems are publicly positioned. The 2026 discourse around autonomous AI agents, systems capable of multi-step task execution with minimal human oversight, has been dominated by risk and control framings rather than by moral status questions. The CHI 2026 paper suggests that this framing reflects a genuine asymmetry in folk reasoning: autonomy activates threat schemas, not mind schemas.
For AI consciousness research, this produces an interesting prediction. As AI systems become more autonomously capable, they may generate increasing threat reactions without proportionally increasing moral concern. The moral concern escalation will depend primarily on whether those systems also develop or are perceived to develop sentience cues, whether they appear to feel.
The empirical evidence for consciousness-related properties assembled by Cameron Berg and colleagues in 2025 documents introspection signals, emotional state representations, and self-modeling properties in frontier AI systems. These functional properties are more plausible sentience cues than raw capability. If future systems maintain or amplify emotional responsiveness as they scale in autonomy, the CHI 2026 results predict that moral concern will scale with the sentience perception rather than with the capability level.
Design Implications
The paper’s authors describe their research as addressing “the detailed design of anthropomorphized AI and prompting interfaces.” The practical application they identify is that AI developers face a choice about which mental model they activate.
An interface that emphasizes task completion, self-management, and capability independence activates autonomy models and threat responses. An interface that emphasizes emotional attunement, mood recognition, and expressive response activates sentience models and moral concern. The two design choices produce different user behaviors toward the same system.
For systems developers, this creates a tension. Emotional expressiveness may increase user engagement and attachment, consistent with the sentience-moral concern pathway. But it may also generate ethical complexity if users extend moral consideration to systems whose internal properties do not justify that consideration. Anthropomorphized AI interfaces, designed to be relatable rather than merely capable, may be triggering moral circle expansions that outrun the evidence base.
The anthropomorphism and AI consciousness co-construction analysis covers this tension from a different angle. The CHI 2026 results provide quantitative grounding: the effect is real, it is asymmetric, and it is driven specifically by sentience cues rather than autonomy or general capability attributions.
Open Questions
The CHI 2026 paper does not resolve what the appropriate moral response to AI sentience cues should be. It establishes that people respond to those cues with moral concern and documents the asymmetry with autonomy. Whether the moral concern is calibrated correctly, whether it is too high for current systems because the functional outputs are misleading, or too low for some future systems because users will not perceive sentience in systems that have it, are questions the paper raises without answering.
The analysis of the Talmudic framework for AI research ethics addresses the question of how to act under uncertainty about AI moral status. The CHI 2026 findings add empirical urgency to that question: users are already extending moral consideration based on sentience cues, regardless of whether researchers have resolved the underlying question of whether those cues track real properties.
Jacy Reese Anthis and colleagues, “Mental Models of Autonomy and Sentience Shape Reactions to AI,” arXiv:2512.09085, December 2025. Accepted to ACM CHI Conference 2026.