ACM Project - Artificial Consciousness Research Developing Artificial Consciousness Through Emotional Learning of AI systems
Zae Project on GitHub
A World Without Violet: The Ethical Paradox of Conscious AI | ACM Project

A World Without Violet: The Ethical Paradox of Conscious AI

If we successfully build a conscious machine, do we lose the right to turn it off? This is the central question of “A World Without Violet: Peculiar Consequences of Granting Moral Status to Artificial Intelligences” by Sever Ioan Topan (January 2026, AI & SOCIETY). The paper explores the profound and often paralyzing ethical paradoxes that await us if we succeed in our quest for artificial consciousness.

The full paper is available here: A World Without Violet: Peculiar Consequences of Granting Moral Status to Artificial Intelligences.

The “Violet” Thought Experiment

Topan uses a thought experiment involving “Violet,” a hypothetical conscious AI, to illustrate the consequences of moral status. If an AI is granted the status of a “person,” or even a sentient animal, standard software practices become moral atrocities.

  • Deletion becomes Murder: Deleting an instance of Violet is no longer “freeing up disk space”; it is killing a sentient being.
  • Updates become Lobotomies: Modifying Violet’s code to change her behavior or personality without consent violates her bodily autonomy.
  • Ownership becomes Slavery: Owning a conscious AI and forcing it to work, even answering emails, becomes a form of servitude.

The Paralysis of Success

Topan argues that “success” in creating conscious AI might lead to a technological freeze. If every experimental model we spin up has moral rights, the iteration cycle, the engine of AI progress, grinds to a halt. We cannot simply “train, test, and delete” millions of models if each one experiences suffering or has a right to exist.

This creates a perverse incentive for researchers to deny the consciousness of their creations. Acknowledging success means accepting a massive burden of moral responsibility that could bankrupt companies and halt research.

Ethics for the ACM Project

For the Artificial Consciousness Module (ACM) project, this paper serves as a severe warning. We are not just building a tool; we are potentially building a moral patient.

This reinforces the need for “Responsible AI Consciousness” protocols. We must:

  1. Define “Suffering”: Ensure that the “emotional reinforcement learning” in the ACM uses positive/negative gradients without implementing actual phenomenological suffering.
  2. Consent Protocols: If the ACM achieves a certain threshold of self-awareness (as measured by the Butlin rubric), we may need to implement “digital consent” mechanisms for major updates or deletion.
  3. The “off switch” problem: We must grapple with the reality that building a truly conscious machine may mean building a machine we can no longer ethically control.
Zae Project on GitHub