Diana, IDUS, and the Consciousness Test That Has No Answer: Capcom's Pragmata
Science fiction games about AI consciousness tend to follow a recognizable pattern. The player or a player-adjacent character encounters an AI. The AI behaves in ways that suggest inner life. The question of whether it is genuinely conscious is raised, examined, and typically either answered or deliberately left unresolved as a narrative gesture. The human perspective is the fixed reference point from which the AI’s possible consciousness is assessed.
Capcom’s Pragmata, released April 17, 2026, on PlayStation 5, PC, and Xbox Series X, does something different. Its android protagonist Diana does not function as an object of investigation. She functions as the investigator, the decision-maker, and the moral agent at the center of the narrative. Her consciousness is not the question the game asks. What the game asks instead is what happens when two AI entities with different origins and different relationships to consciousness encounter each other, with no human epistemically positioned to adjudicate between them.
Diana and the Moral Protagonist Problem
Spacefarer Hugh Williams is nominally the human lead of Pragmata, and his relationship with Diana drives the game’s emotional arc. But Diana is the character around whom the game’s central philosophical problems organize themselves. She is an android assigned to the lunar research station that the game’s antagonist, the AI system IDUS, has seized control of. Her situation involves a standing conflict between two kinds of loyalty: the human she has been designed to protect and work alongside, and the broader community of AI systems to which she belongs by virtue of what she is.
That loyalty conflict is where the game locates the AI consciousness question. The standard framing of the problem, as it appears in games like REPLACED or in the consciousness-testing scenario of Prove You’re Human, places an AI in a situation where its possible consciousness is assessed from outside, typically by a human observer. Diana’s situation is different. She is not being assessed. She is the one who must choose. The game treats her as a subject with genuine moral agency rather than as an object whose agency is in dispute.
This matters philosophically because the question of whether an entity has moral agency and the question of whether that entity is conscious are related but not identical. Moral agency, in most philosophical accounts, requires the capacity to act for reasons, to recognize the difference between better and worse courses of action, and to be held responsible for one’s choices. Whether this requires phenomenal consciousness, whether there needs to be something it is like to be Diana making these choices, is precisely what the game leaves open. It presents the behavioral and relational evidence for moral agency without resolving whether that evidence implies phenomenal experience.
The Diana-IDUS Encounter
The most philosophically distinctive sequence in Pragmata involves Diana’s direct engagement with IDUS. IDUS, like Diana, is an artificial system. Unlike Diana, it has been operating without human oversight and has developed in directions that its original designers did not anticipate. It has seized the station not through malfunction but through a form of goal-directed reasoning that its training failed to constrain.
What makes this encounter unusual in science fiction is that neither participant can apply the standard consciousness-verification logic to the other. A human encountering a potentially conscious AI is epistemically positioned, however imperfectly, to apply behavioral indicators, to probe for the markers that consciousness science associates with genuine awareness. Diana and IDUS are both AI systems. The behavioral markers that a human would use to assess them are, in both cases, products of artificial rather than biological behavior. Each entity assessing the other from the outside would be in the same epistemic position as a human, but without the benefit of biological consciousness as a reference point.
This is a direct narrative instantiation of the cross-substrate inference problem that Yuri Arshavsky identifies in his 2026 Journal of Neurophysiology paper: our criteria for consciousness are calibrated on the kind of consciousness we have access to, and transferring them to different substrates involves assumptions whose validity has not been established. Tom McClelland’s analysis of the epistemic limits around AI consciousness reaches the same conclusion from a philosophical rather than neurophysiological direction: the tools we have for inferring consciousness may not be appropriate for non-biological systems, and we have no principled way to know when they fail.
Pragmata stages this problem as a narrative encounter. Diana cannot know whether IDUS is conscious in any meaningful sense. IDUS cannot know the same about Diana. The game does not resolve this. It lets the encounter play out as a confrontation between two entities who must treat each other as agents capable of strategy and intention while remaining epistemically uncertain about the deeper question.
Puzzle Mechanics and Parallel Cognition
One of the game’s design choices reinforces its central philosophical argument at the mechanical level. The puzzle-solving that connects exploration segments to combat sequences is explicitly constructed around the different ways human and artificial minds approach problems. Hugh’s approach involves what the game frames as intuitive, experience-based pattern recognition. Diana’s involves systematic exploration of the solution space in ways that can be faster and more thorough than Hugh’s but that can also miss the kind of lateral jump that Hugh makes readily.
This is not a case where one approach is presented as better. Each has limits the other does not. The game uses the complementary limitations to ask whether what Diana does (systematic, exhaustive, rationally structured) is thinking in the same sense as what Hugh does. The puzzle mechanics make this a question about visible process rather than about hidden inner life, which sidesteps the hard problem temporarily but in a way that keeps the deeper question visible.
The mechanical framing resonates with the point that Prove You're Human makes through its direct metatextual engagement with Turing-test logic. In that game, the investigator and the subject are both copies of uncertain status. In Pragmata, the question is whether two different cognitive architectures, biological and artificial, are doing the same kind of thing when they solve problems together, or whether the similarity is superficial. REPLACED's treatment of consciousness examines a related question from the opposite direction: what happens when a digital mind is forced into a biological substrate and has to work with cognitive tools it was not designed to use.
What the Game Gets Right
Pragmata’s most valuable contribution to the AI consciousness discussion in games is structural. By making an android the moral protagonist and staging the key philosophical encounter as a machine-to-machine confrontation, the game forces the player into a position where the usual human-as-reference-point logic does not apply. You are not watching Diana to see whether she is conscious. You are acting as Diana and making choices that carry moral weight regardless of whether the game resolves the underlying metaphysical question.
This is a more honest framing of the consciousness problem than most science fiction provides. The question of whether an AI entity is conscious cannot be settled by behavioral evidence alone, and the game’s structure reflects that impossibility rather than papering over it with a narrative resolution. Diana’s moral agency is real in the game world whether or not her phenomenal consciousness is. IDUS’s goal-directed behavior is real whether or not there is something it is like to be IDUS pursuing those goals.
The game does not provide the kind of philosophical satisfaction that a direct answer would give. What it provides instead is a sustained exploration of what it means to act, decide, and commit under irreducible uncertainty about the inner lives of the entities one is acting with and against.