We accept roughly 40,000 annual US traffic deaths as a cost of convenience, yet we decline policy changes, such as lower speed limits, that could save thousands of lives. This reveals a deep inconsistency in our moral framework: we are apathetic to large-scale, statistical risks but would be horrified by a single, identifiable act causing a fraction of the harm. The lack of an identifiable victim neutralizes our moral intuition.
People often object to AI's energy use simply because it represents a *new* source of emissions. This psychological bias distracts from the fact that these new emissions are minuscule compared to massive, existing sources like personal transportation.
We confuse our capacity for innovation with wisdom, but we are not wise by default. The same mind that conceives of evolution can rationalize slavery, the Holocaust, and cruelty to animals. Our psychology is masterful at justification, making our default state far from conscious or wise.
Common thought experiments attacking consequentialism (e.g., a doctor sacrificing one patient for five) are flawed because they ignore the full scope of consequences. A true consequentialist analysis would account for the disastrous societal impacts, such as the erosion of trust in medicine, which would make the act clearly wrong.
The famous Trolley Problem isn't just one scenario. Philosophers create subtle variations, like replacing the act of pushing a person with flipping a switch to drop them through a trapdoor. This isolates variables and reveals that our moral objection isn't just about physical contact, but about intentionally using a person as an instrument to achieve a goal.
A technology like Waymo's self-driving cars could be statistically safer than human drivers yet still be rejected by the public. Society is unwilling to accept thousands of deaths directly caused by a single corporate algorithm, even if it represents a net improvement over the chaotic, decentralized risk of human drivers.
The controversy surrounding a second drone strike to kill the survivors of a first highlights a flawed moral calculus. Objecting only to the follow-up strike implicitly treats the problem as the *inefficiency* of the first strike, not the lethal action itself. This inconsistent reasoning avoids the fundamental ethical question of whether the strike was justified in the first place.
Arguments against consequentialism, like the surgeon who kills one healthy patient to harvest his organs for five others, often fail by defining "consequences" too narrowly. A stronger consequentialist view holds that such acts are wrong precisely because all ripple effects count, including the catastrophic collapse of trust in the medical system, which would cause far more harm.
The public holds new technologies to a much higher safety standard than human performance. Waymo could deploy cars that are statistically safer than human drivers, yet society would not accept them killing tens of thousands of people a year, even if that represented a net improvement. This demonstrates the need for near-perfection in high-stakes tech launches.
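To make the "tens of thousands" figure concrete, here is a rough back-of-the-envelope sketch in Python. It reuses the roughly 40,000 annual US traffic deaths cited earlier and assumes, purely for illustration, that autonomous vehicles halve the per-mile fatality rate; the 50% figure is a hypothetical, not a Waymo statistic.

```python
# Illustrative arithmetic only; the risk ratio is an assumed, hypothetical figure.
human_traffic_deaths_per_year = 40_000   # approximate annual US traffic deaths cited above
assumed_av_risk_ratio = 0.5              # hypothetical: AVs at half the human fatality rate per mile

# If an autonomous fleet replaced all human driving at that lower rate, every remaining
# death would be attributable to a single corporate system rather than millions of drivers.
av_attributable_deaths = human_traffic_deaths_per_year * assumed_av_risk_ratio
print(f"~{av_attributable_deaths:,.0f} deaths per year traceable to one algorithm")  # ~20,000
```

Even a fleet that is dramatically safer per mile still produces a death toll in the tens of thousands once it carries all the miles, and that concentrated, attributable toll is what the public balks at.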
The core reason we treat the Trolley Problem's two scenarios differently lies in the distinction between intending harm versus merely foreseeing it. Pushing the man means you *intend* for his body to stop the trolley (using him as a means). Flipping the switch means you merely *foresee* a death as a side effect. This principle, known as the doctrine of double effect, is a cornerstone of military and medical ethics.
The lack of widespread outrage after a Waymo vehicle killed a beloved cat in tech-skeptical San Francisco is a telling sign. It suggests society is crossing an acceptance threshold for autonomous technology, implicitly acknowledging that, even though the technology is imperfect, the path to fewer accidents overall involves tolerating isolated, non-human casualties.