Arguments against consequentialism, like the surgeon who kills one healthy patient to save five with his organs, often fail by defining "consequences" too narrowly. A stronger consequentialist reply counts all ripple effects, including the catastrophic collapse of trust in the medical system, and concludes that such acts are wrong precisely because they would cause far more harm.

Related Insights

Deontological (rule-based) ethics are often implicitly justified by the good outcomes their rules are presumed to create. If a moral rule were known to produce the worst possible results, its proponents would likely abandon it, revealing a hidden consequentialist foundation for their beliefs.

The famous Trolley Problem isn't just one scenario. Philosophers create subtle variations, like replacing the act of pushing a person with flipping a switch to drop them through a trapdoor. This isolates variables and reveals that our moral objection isn't just about physical contact, but about intentionally using a person as an instrument to achieve a goal.

We accept 40,000 annual US traffic deaths as a cost of convenience, yet a policy change like lowering speed limits could save thousands of lives. This reveals a deep inconsistency in our moral framework: we are apathetic to large-scale, statistical risks but would be horrified by a single, identifiable act causing a fraction of the harm. The absence of an identifiable victim neutralizes our moral intuition, a phenomenon psychologists call the identifiable victim effect.
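
To make the statistical trade-off concrete, here is a minimal back-of-the-envelope sketch. The speed cut, the scaling exponent, and the baseline figure are illustrative assumptions for the thought experiment, not sourced estimates:

```python
# Back-of-the-envelope: statistical lives saved by a lower speed limit.
# Every number here is an illustrative assumption, not a sourced estimate.

baseline_deaths = 40_000      # approximate annual US traffic deaths
speed_cut = 0.10              # hypothetical 10% cut in average travel speed
power_exponent = 4            # assumed: fatalities scale roughly with speed^4
                              # (a stylized "power model" from road-safety work)

remaining_fraction = (1 - speed_cut) ** power_exponent
lives_saved = baseline_deaths * (1 - remaining_fraction)

print(f"~{lives_saved:,.0f} statistical lives per year under these assumptions")
# ~13,756 -- a figure that barely moves our intuitions, even though a single
# identifiable victim of a comparable act would provoke outrage.
```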

Instead of relying on instinctual "System 1" rules, advanced AI should use deliberative "System 2" reasoning. By reasoning step by step about consequences and ethical frameworks, and by exposing that reasoning to oversight (so-called "chain-of-thought monitoring"), AIs could potentially become more consistently ethical than humans, who are prone to gut reactions.
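
As a toy illustration of what "System 2" deliberation might look like in code, here is a minimal sketch. The frameworks, the scalar scores, and the veto-then-maximize procedure are all invented for the example; they stand in for far richer models of ethical reasoning:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float        # estimated harm caused (arbitrary units)
    expected_benefit: float     # estimated benefit produced
    uses_person_as_means: bool  # deontological red flag

def consequentialist_score(a: Action) -> float:
    # System 2: explicitly weigh outcomes instead of reacting on instinct.
    return a.expected_benefit - a.expected_harm

def deontological_veto(a: Action) -> bool:
    # Rule-based check: never treat a person merely as an instrument.
    return a.uses_person_as_means

def deliberate(actions: list[Action]) -> Action:
    """Pick the best-scoring action that survives the rule-based veto."""
    permitted = [a for a in actions if not deontological_veto(a)]
    candidates = permitted or actions  # if everything is vetoed, fall back
    return max(candidates, key=consequentialist_score)

options = [
    Action("push the man", expected_harm=1, expected_benefit=5,
           uses_person_as_means=True),
    Action("do nothing", expected_harm=5, expected_benefit=0,
           uses_person_as_means=False),
]
print(deliberate(options).name)  # -> "do nothing": the veto dominates
```

The value of structuring the deliberation this way is that every step is inspectable, which is exactly what chain-of-thought monitoring would need.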

Evaluate political ideologies based on their historical potential for large-scale harm ("amplitude"), not just a leader's current negative actions. A socialist path, historically leading to mass death, may pose a greater long-term threat than a leader's immediate, but less catastrophic, authoritarian tendencies.

Grisham's most pragmatic argument against the death penalty isn't moral but systemic: Texas has exonerated 18 people from death row. He argues that even if one supports the penalty in principle, one cannot support a system proven to make catastrophic errors. This "flawed system" framework is a powerful way to debate high-risk policies.

The core reason we treat the Trolley Problem's two scenarios differently lies in the distinction between intending harm versus merely foreseeing it. Pushing the man means you *intend* for him to block the train (using him as a means). Flipping the switch means you *foresee* a death as a side effect. This principle, known as the doctrine of double effect, is a cornerstone of military and medical ethics.
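
The doctrine's standard four-condition formulation can be crudely formalized. The predicate below is a sketch under that assumed formulation, with proportionality reduced to a scalar comparison for simplicity:

```python
from dataclasses import dataclass

@dataclass
class Act:
    intrinsically_wrong: bool  # is the act itself impermissible?
    harm_intended: bool        # is the harm the agent's aim?
    harm_is_means: bool        # does the good effect come *through* the harm?
    good: float                # magnitude of the good effect
    harm: float                # magnitude of the harmful side effect

def double_effect_permits(act: Act) -> bool:
    """Classic four-condition reading of the doctrine of double effect."""
    return (not act.intrinsically_wrong
            and not act.harm_intended
            and not act.harm_is_means
            and act.good >= act.harm)  # proportionality, crudely scalar

switch = Act(intrinsically_wrong=False, harm_intended=False,
             harm_is_means=False, good=5, harm=1)
push = Act(intrinsically_wrong=False, harm_intended=True,
           harm_is_means=True, good=5, harm=1)

print(double_effect_permits(switch))  # True: the death is a foreseen side effect
print(double_effect_permits(push))    # False: the man's body is the means
```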

Even if one rejects hedonism—the idea that happiness is the only thing that matters—any viable ethical framework must still consider happiness and suffering as central. To argue otherwise is to claim that human misery is morally irrelevant in and of itself, a deeply peculiar and counter-intuitive position.

Thought experiments like the trolley problem artificially constrain choices to derive a specific intuition. They posit perfect knowledge and ignore the most human response: attempting to find a third option, like braking the trolley, that avoids the forced choice entirely.