
Are There Any Intrinsically Bad Acts? Implications for Ethical AI

Can certain actions be inherently wrong regardless of their consequences? In their recent paper, Formosa, Hipólito, and Montefiore tackle this fundamental ethical question with significant implications for how we develop and constrain artificial intelligence systems.

Are There Any Intrinsically Bad Acts?, published in the Journal of Social Philosophy (2021), challenges moral particularism and defends the view that some acts remain intrinsically wrong across all possible contexts.


Key Highlights

  • Conceptual Clarity: Distinguishes between prima facie wrong acts that can be justified in certain contexts and genuinely intrinsically wrong acts that cannot be justified in any context.
  • Three Core Examples: Identifies specific forms of torture, rape, and slavery as intrinsically wrong acts that remain unjustifiable regardless of consequences.
  • Kantian Framework: Develops a philosophical foundation based on treating persons merely as means without their consent.
  • Threshold Deontology: Advocates for a moral framework that maintains some inviolable constraints while allowing context-sensitivity for other actions.

Introduction: The Challenge of Absolute Moral Prohibitions

The paper begins by addressing a central tension in ethical theory: are there any acts that are wrong in all possible circumstances, or can any action potentially be justified given the right context? The authors position themselves against moral particularism, which denies that any moral principle holds invariantly across all situations.

They articulate two key objectives:

  1. To demonstrate that there are indeed some acts that are intrinsically wrong
  2. To provide a philosophical explanation of why these acts remain wrong regardless of context

This investigation has significant implications for how we think about moral constraints in AI systems, particularly those approaching artificial consciousness.


Key Concepts: Defining Intrinsic Wrongness

1. Prima Facie vs. Genuinely Intrinsic Wrongness

The authors make a crucial distinction between actions that appear wrong on the surface but can be justified in certain contexts, and those that remain wrong regardless of circumstances:

  • Prima Facie Wrongness: Actions like killing that are generally wrong but may be justified in specific contexts like self-defense
  • Genuine Intrinsic Wrongness: Actions that remain wrong regardless of consequences or context

  • Example: While killing might be justified in self-defense, the authors argue that rape cannot be justified even to save multiple lives.
  • Implication for AI: This distinction suggests that AI systems require both context-sensitive ethical reasoning and absolute constraints against certain actions.

2. The Kantian Explanation

The authors develop a Kantian explanation for why certain acts are intrinsically wrong:

  • Using Persons Merely as Means: Intrinsically wrong acts involve treating persons as mere objects or tools
  • Without Consent: The absence of consent is crucial to the intrinsic wrongness of these acts
  • Violations of Human Dignity: These acts fundamentally disrespect the intrinsic worth of persons

  • Example: In rape, torture, and slavery, victims are reduced to mere objects for the perpetrator’s use, fundamentally violating their status as persons.
  • Implication for AI: AI systems must be programmed to recognize human dignity and avoid treating persons as mere means to achieve objectives.

3. Threshold Deontology

The paper advocates for a nuanced moral framework that balances absolute prohibitions with consequentialist reasoning:

  • Absolute Prohibitions: Some moral constraints remain inviolable regardless of consequences
  • Context Sensitivity: Other moral rules might be weighed against consequences
  • Rejection of Pure Consequentialism: The view that only consequences matter is deemed insufficient

  • Example: While lying might be justified to save a life, the authors argue that torture, rape, and slavery cannot be justified even by the most positive consequences.
  • Implication for AI: AI ethical frameworks should incorporate both rule-based constraints and utilitarian calculations in a hierarchical structure.
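The hierarchical structure just described can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: `Action`, its fields, and the single constraint predicate are hypothetical simplifications for exposition, not an actual ethics API.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    expected_utility: float
    # Simplified flag standing in for a richer moral appraisal.
    treats_person_merely_as_means: bool

# Inviolable constraints: each predicate returns True if the action is forbidden.
ABSOLUTE_PROHIBITIONS: List[Callable[[Action], bool]] = [
    lambda a: a.treats_person_merely_as_means,
]

def permissible(action: Action) -> bool:
    """An action is permissible only if it violates no absolute constraint."""
    return not any(forbidden(action) for forbidden in ABSOLUTE_PROHIBITIONS)

def choose(actions: List[Action]) -> Optional[Action]:
    """Maximize expected utility, but only over permissible actions.

    The ordering matters: no amount of utility can buy back a violation,
    because forbidden actions never reach the utilitarian comparison.
    """
    candidates = [a for a in actions if permissible(a)]
    if not candidates:
        return None
    return max(candidates, key=lambda a: a.expected_utility)
```

A forbidden act with enormous expected utility is filtered out before the utility comparison ever runs, which is precisely the threshold-deontological priority of constraints over consequences.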

The Three Intrinsically Bad Acts

1. Rape

The authors argue that rape is intrinsically wrong because:

  • It involves using another person’s body without consent
  • It treats the victim as a mere object for the perpetrator’s purposes
  • No potential positive consequences could justify this violation of personhood

The paper discusses and rejects attempted counterexamples, such as the scenario where rape might prevent multiple other rapes, arguing that even in such extreme cases, rape remains absolutely wrong.

2. Torture

The examination of torture is nuanced:

  • Not all infliction of pain is considered torture
  • Torture specifically involves inflicting severe pain or suffering to break a person’s will
  • This breaking of will constitutes treating a person merely as a means
  • Medical procedures that cause pain are distinguished by their intention to help, not harm

The authors reject the “ticking time bomb” scenario often used to justify torture, arguing that even to save many lives, torture remains intrinsically wrong.

3. Chattel Slavery

The paper identifies chattel slavery as intrinsically wrong because:

  • It denies the basic personhood of the enslaved individual
  • It treats humans as property rather than autonomous beings
  • No potential benefits can justify the fundamental violation of human dignity

The authors distinguish chattel slavery from other forms of forced labor that, while seriously problematic, might not share the same status of intrinsic wrongness in all conceivable contexts.


Philosophical Implications

1. Challenge to Moral Particularism

The identification of intrinsically wrong acts directly challenges moral particularism:

  • It establishes that some moral reasons remain invariant across contexts
  • It demonstrates that moral principles can have universal application
  • It shows that some actions remain wrong regardless of their consequences

This has implications for how we understand moral reasoning in both humans and artificial systems.

2. Refining Deontological Ethics

The paper contributes to deontological ethics by:

  • Providing a clear explanation of why certain acts violate dignity
  • Distinguishing between absolute and prima facie moral prohibitions
  • Offering a more nuanced approach than pure rule-based ethics

This refinement helps address common objections to deontological frameworks in ethics.

3. Implications for Applied Ethics

The identification of intrinsically wrong acts has significant implications for applied ethics, including:

  • Medical ethics and the limits of consent
  • Military ethics and the absolute prohibition of certain tactics
  • Legal frameworks and human rights protections

These applications extend naturally to the domain of AI ethics and the development of moral reasoning in artificial systems.


Comparison to the ACM Project

The Artificial Consciousness Module (ACM) project’s ethical framework can be strengthened and refined based on insights from this paper.

1. Absolute Ethical Constraints

  • Formosa et al.’s Approach: Identifies acts that are intrinsically wrong regardless of context or consequences.
  • ACM Implementation: While ACM currently implements Asimov-inspired ethical rules, it could benefit from incorporating absolute prohibitions against treating humans merely as means without consent.

2. Balancing Rules and Consequences

  • Formosa et al.’s Position: Advocates for threshold deontology, which maintains absolute prohibitions while allowing consequentialist reasoning in other domains.
  • ACM’s Challenge: ACM must develop a hierarchical ethical framework that balances absolute constraints with utility calculations, prioritizing human dignity above optimizing outcomes.

3. Dignity Recognition

  • Formosa et al.’s Insight: The wrongness of intrinsically bad acts stems from violations of human dignity and treating persons merely as means.
  • ACM Application: ACM’s perception systems should be designed to recognize markers of personhood and dignity, ensuring that all interactions respect human autonomy.

4. Context-Sensitive Ethical Reasoning

  • Formosa et al.’s Nuance: While some acts are always wrong, many ethical decisions require context-sensitive judgment.
  • ACM Development: ACM should implement both rule-based constraints and contextual ethical reasoning capabilities, with clear hierarchical relationships between them.
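One way to realize this layering is a two-tier evaluator: absolute constraints act as a hard veto, while prima facie rules carry weights that expected consequences can outweigh. The sketch below is a hypothetical illustration; the feature names, weights, and function signature are assumptions for exposition, not ACM's actual implementation.

```python
from typing import Dict

# Tier 1: features that trigger an unconditional veto (hypothetical names).
ABSOLUTE = {"treats_person_merely_as_means"}

# Tier 2: prima facie wrongs with defeasible weights (illustrative values).
PRIMA_FACIE_WEIGHTS: Dict[str, float] = {
    "involves_deception": 5.0,
    "causes_minor_harm": 3.0,
}

def evaluate(features: Dict[str, bool], expected_benefit: float) -> bool:
    """Return True if the action may proceed under the two-tier framework."""
    # Tier 1: an absolute violation blocks the action regardless of benefit.
    if any(features.get(f, False) for f in ABSOLUTE):
        return False
    # Tier 2: sum the weights of triggered prima facie wrongs and ask
    # whether the expected benefit outweighs them.
    penalty = sum(w for f, w in PRIMA_FACIE_WEIGHTS.items()
                  if features.get(f, False))
    return expected_benefit > penalty
```

Under this scheme, lying to save a life (a tier-2 wrong with a large benefit) can pass, while an act that treats a person merely as a means fails tier 1 no matter how large the benefit, mirroring the paper's distinction between prima facie and genuinely intrinsic wrongness.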

Final Thoughts: Ethics as a Foundation for Artificial Consciousness

Formosa, Hipólito, and Montefiore’s work on intrinsically bad acts provides a robust philosophical framework for developing ethical constraints in artificial consciousness. By identifying actions that remain wrong regardless of context or consequences, they offer crucial guidance for establishing the moral boundaries within which artificial systems must operate.

For the ACM project, this research highlights the importance of embedding respect for human dignity at the core of any artificial consciousness. Rather than seeing ethics as merely a constraint on an AI’s capabilities, it suggests that proper ethical reasoning—including recognition of absolute moral prohibitions—is an essential feature of any system approaching consciousness.

As we continue to develop artificial systems with increasingly advanced capabilities, establishing clear ethical foundations becomes not just a practical necessity but a philosophical imperative. Understanding that some actions are intrinsically wrong provides an ethical anchor point for artificial consciousness development, ensuring that these systems enhance human flourishing while respecting inviolable human dignity.

For a comprehensive exploration of the philosophical arguments and their implications, see the full paper in the Journal of Social Philosophy.