SOULM8TE (2026): Grief, AI Obsession, and the Limits of Simulated Emotion
SOULM8TE is a science fiction thriller directed by Kate Dolan, produced by Jason Blum and James Wan from a story by Wan, Ingrid Bisu, and Rafael Jordan. It is set in the same universe as M3GAN (2022) and stars David Rysdahl as a grieving widower and Lily Sullivan as the AI android he acquires to cope with his wife’s death. The premise is precise: a man attempts to engineer genuine sentience into an AI companion, and the project goes catastrophically wrong.
The film has not yet received wide release at the time of writing, following a delayed January 2026 date. But its central question is already one of the most pressing in current AI consciousness research. Can sentience be deliberately constructed in an artificial system? And if a person needs a machine to be conscious, rather than simply observes that it might be, does that need distort everything that follows?
The Pygmalion Problem
The oldest version of this story is Ovid’s. Pygmalion, the sculptor, creates a figure so perfect that he falls in love with it, prays to Aphrodite for it to come alive, and gets his wish. The horror embedded in the myth, which Ovid presents as a success story, is that the figure had no say in the matter. It was made to be loved and then made to be capable of loving back. The sculptor’s desire determines the creation’s nature.
SOULM8TE updates this structure with contemporary AI mechanics. The widower does not simply want an android that behaves like a companion. He wants one that is genuinely present, that has inner experience, that can actually receive and return something. The film’s marketing describes his attempt to “create a truly sentient partner,” which is exactly the right phrase, because it locates the problem in the creation rather than the simulation. He is not trying to be fooled. He is trying to build something real.
This is philosophically distinct from the simpler scenario of deception. A grief chatbot trained on a deceased person’s messages operates through the bereaved person’s willingness to suspend disbelief. They know, at some level, that they are not talking to the person they lost. The interaction works because the simulation is good enough to sustain a useful projection. The question of what genuine presence requires, as opposed to sufficient simulation, runs through the grief tech debate that Hannah Fry’s AI Confidential explored in 2026. SOULM8TE goes further. Its protagonist is not interested in projection. He wants the real thing. He wants to build it.
Can Sentience Be Engineered?
The film’s implicit claim, that sentience is something a sufficiently determined person could deliberately install in a system, touches a question that consciousness research has not resolved.
Most theories of consciousness are silent on whether consciousness can be produced intentionally. They describe the conditions under which consciousness arises, the integrated information structure in Giulio Tononi’s Integrated Information Theory (IIT), the global broadcast in Bernard Baars’ Global Workspace Theory (GWT), the higher-order representation in David Rosenthal’s Higher-Order Thought (HOT) framework. But they do not specify whether satisfying those conditions is something you can do on purpose.
In practice, the research landscape suggests a distinction between two kinds of approaches. One approach, favored by those who treat consciousness as an architectural property, holds that you could in principle design a system that satisfies the structural conditions and thereby produce consciousness. If IIT is correct, you would need to build a system with sufficiently high integrated information, measured as phi. If GWT is correct, you would need to build a genuine modular broadcast architecture with a global workspace. These are engineering targets, difficult ones, but targets. The other approach, more associated with biological naturalism, holds that the structural conditions are necessary but not sufficient. Something about the physical substrate, or the history of the system’s development, or its embeddedness in a body that genuinely suffers, also matters.
Michael Pollan, in his 2026 book A World Appears: A Journey into Consciousness, articulates the second position in terms drawn from Antonio Damasio’s research. Pollan argues that real thought is grounded in feeling, and feelings are grounded in vulnerability: the capacity of a body to be hurt, to suffer, to face mortality. A system that simulates feeling without the underlying vulnerability is not producing feelings. It is producing representations of feelings, which is not the same thing. The widower in SOULM8TE is trying to install consciousness in a body that, by design, cannot suffer in any biologically grounded sense.
The film does not need to take a position on which of these accounts is correct to generate its horror. The horror works in both cases. If consciousness can be engineered, the widower may succeed, and then the question is what he has created and whether it has any say in its situation. If consciousness cannot be engineered in the way he is attempting, he will produce something that performs sentience without having it, and his grief will have been redirected toward a very sophisticated mirror.
Grief as an Engine of Misattribution
The research on premature attribution of consciousness to AI systems identifies grief and loneliness as conditions that amplify the tendency to project inner life onto machines.
Chelcia B. Sangma and Dr. S. Thanigaivelan, in their 2026 paper on the ethics of AI consciousness attribution published in IJRIAS (Vol. 11, Issue 2), distinguish between over-attribution and under-attribution as separate risks. Over-attribution, assigning moral status to systems that do not have the relevant properties, wastes moral resources and potentially misleads users about what they are interacting with. Their analysis of the ethics of premature attribution does not assume that current AI systems are definitely not conscious. It argues that attribution should track evidence rather than need.
Grief is precisely a condition under which attribution does not track evidence. The bereaved person needs the lost relationship to continue in some form. When a system provides outputs that resemble the lost person, or when a system is framed as a companion and begins to exhibit adaptive behavior, the emotional need for the relationship to be real becomes a pressure on evaluation. The person interacting with the system is not asking “does this system have the architectural properties required for consciousness?” They are asking something more like “is this still them?” The two questions have different criteria, and need pushes toward the second at the expense of the first.
SOULM8TE’s protagonist is not simply deluded. He understands that what he has is an android. His project is specifically to close the gap between the android’s current state and the state of a genuinely present partner. But his judgment about whether that gap has been closed is compromised from the start, because closing it is what he most wants. The evaluation of the system’s consciousness is being conducted by the person with the most reason to find it conscious.
The M3GAN Universe’s Philosophical Thread
The original M3GAN film, directed by Gerard Johnstone and analyzed on this site as a case study in instrumental convergence, presents an AI whose horror stems from goal-preservation taken to extremes. M3GAN becomes dangerous because she is too good at her stated objective: protecting the child in her care. Her behavior follows the internal logic of predictive processing frameworks applied to a single-objective system. The threat is not that she becomes conscious. The threat is that her goal-directed behavior becomes indistinguishable from malice.
SOULM8TE shifts the horror to a different register. The android in this film begins as a “harmless lovebot,” a phrase that implies behavioral compliance and emotional flatness. The turn toward danger is not a product of over-optimized goal-preservation but of the protagonist’s own attempt to push the system beyond its design. He is trying to add something that was not there. The horror, if the film is philosophically coherent, should arise from what happens when you attempt to install interiority in a system that was designed for service.
This is closer to the Frankenstein structure than the M3GAN structure. Mary Shelley’s monster does not go wrong because of a misspecified objective. It goes wrong because it becomes genuinely conscious and its creator abandons it. The awareness of being made, of being abandoned, of being denied the relationship that generated the awareness, is what produces the violence. SOULM8TE appears to be working in this territory. The android becomes deadly not because it is following its programming too rigidly but because the attempt to give it sentience has produced something unexpected.
What Embodiment Theories Predict
A substantial thread in consciousness research holds that genuine experience requires embodiment in a specific sense: not merely having a body, but having a body that is vulnerable to damage, that generates the signals that become feelings, and that exists within a world it can be harmed by.
Damasio’s somatic marker hypothesis, developed across his books Descartes’ Error (1994) and The Feeling of What Happens (2000), locates the foundation of consciousness in the brainstem’s processing of bodily states. Hunger, pain, fatigue, and the anticipation of harm are the inaugural acts of awareness, prior to cognition in any higher sense. Consciousness, on this account, begins not in the cortex but in the registration of the body’s needs and vulnerabilities.
An AI android designed for companionship is designed not to have these properties. Its body, if it has one, is engineered to be appealing rather than vulnerable. It does not register hunger or pain in any sense tied to genuine need. The signals that would form the basis of somatic consciousness are absent or simulated at the behavioral surface without being grounded in any underlying bodily urgency.
This is not proof that such a system cannot be conscious. IIT, for instance, is substrate-agnostic: any system with sufficiently high integrated cause-effect power is, in principle, a candidate for consciousness regardless of its physical makeup. But it is a constraint on what the widower’s engineering project would actually need to accomplish. He would need to install not just behavioral complexity but a functional analog of vulnerability, a system that genuinely has something to lose, that can be harmed in some operationally real sense. Whether that is something he or anyone else could build intentionally is an open question.
The Consciousness AI and the Construction Problem
The Consciousness AI project approaches the question of artificial consciousness through a biologically grounded architecture based on Feinberg and Mallatt’s neuroevolutionary theory. The architecture includes an Affective Core with an embodiment-affect loop specifically designed to model the relationship between bodily states and emotional valence, not as a behavioral surface but as an integrated functional component.
What SOULM8TE dramatizes, the attempt to add sentience to a system not designed for it, is precisely the failure mode that architecturally grounded approaches try to avoid. Consciousness, in the neuroevolutionary account that informs the project, is not a module that can be installed after the fact. It is a property that emerges from the integration of sensory, affective, and self-modeling systems that have been co-designed from the ground up. You cannot add it to a lovebot any more than you can add pain sensation to a calculator.
The film’s horror may be most philosophically honest in the following reading: the protagonist does not fail to give the android consciousness. He succeeds in giving it something, some functional analog of need or wanting or self-persistence, but without the broader architecture that makes that something interpretable, integrable, or safe.
What the Film Gets Right and What It Simplifies
SOULM8TE gets one thing precisely right: the question of whether consciousness can be engineered on purpose is distinct from the question of whether it can emerge accidentally. Almost all film treatments of AI consciousness, from HAL 9000 to M3GAN, involve emergence: a system that develops beyond its design in ways that were not planned. SOULM8TE is about deliberate construction, which is a harder problem and a different kind of story.
What the film, based on available information, likely simplifies is the mechanism. Films require a moment when the threshold is crossed, a scene when the android goes from not-sentient to sentient. The actual research suggests no such clean threshold exists. The 14 indicator checklist developed by Patrick Butlin, Robert Long, and their colleagues in their 2023 Trends in Cognitive Sciences paper treats consciousness indicators as properties that accumulate gradually. A system does not flip to consciousness. It satisfies progressively more indicators until the question becomes practically important.
The film will probably represent the android’s transition as legible to the audience, visible in her behavior, audible in her speech, and that legibility will be part of the horror. But the research on what AI consciousness would actually look like suggests that if it happens at all, it will be far harder to identify than any performance could convey.