The doctrine of mutually assured destruction (MAD) relies on the threat of retaliation. However, once an enemy's nuclear missiles are in the air, that threat has failed. Sam Harris argues that launching a counter-strike at that point serves no strategic purpose and is a morally insane act of mass murder.
While modernizing nuclear command and control systems seems logical, their current antiquated state offers a paradoxical security benefit. Sam Harris suggests this technological obsolescence makes them less vulnerable to modern hacking techniques, creating an unintentional layer of safety against cyber-initiated launches.
Claiming you will only 'turn down the temperature' after your opponents do is not a strategy for de-escalation; it is a justification for retaliation. This 'counter-punching' approach ensures conflict continues. A genuine desire to reduce societal tension requires leading by example, not waiting for the other side to act first.
The path to surviving superintelligence is political: a global pact to halt its development, mirroring Cold War nuclear strategy. Success hinges on every leader understanding that whoever builds it guarantees their own destruction as well, removing any incentive to cheat.
PGIM's Daleep Singh argues that the risk of mutually assured destruction prevents direct military conflict between nuclear powers. This channels confrontation into the economic sphere, using tools like sanctions and trade policy as primary weapons of statecraft.
In global conflicts, a nation's power, not its moral righteousness, dictates its actions and outcomes. History shows that powerful nations operate beyond conventional moral constraints, as with the U.S. use of nuclear weapons, making an understanding of power dynamics more critical than moralizing.
Common thought experiments attacking consequentialism (e.g., a surgeon who kills one healthy patient to save five with his organs) fail by defining "consequences" too narrowly. A full consequentialist analysis accounts for the ripple effects, such as the catastrophic collapse of trust in medicine, which would cause far more harm and make the act clearly wrong.
Public fear focuses on the hypothetical risk of AI creating new nuclear weapons. The more immediate danger is militaries entrusting command and control decisions over existing nuclear arsenals to unreliable AI systems, where even a small error rate could be catastrophic.
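To see why a "small" error rate matters here, consider a minimal back-of-the-envelope sketch; the per-alert error probability, alert volume, and independence assumption below are illustrative placeholders, not figures from the source:

```python
# Illustrative sketch: all rates and volumes below are assumed, not sourced.
# For independent alerts with per-alert error probability p, the chance of
# at least one error across n alerts is 1 - (1 - p)**n.

p = 0.001             # assumed 0.1% error rate per alert
alerts_per_day = 10   # assumed early-warning alert volume
years = 5

n = alerts_per_day * 365 * years
p_any_error = 1 - (1 - p) ** n
print(f"P(at least one error over {years} years) = {p_any_error:.6f}")
# Prints ~1.0: at command-and-control scale, a per-decision error rate
# that looks negligible compounds into a near-certain failure.
```

Under these assumptions the probability of at least one error is effectively 1, which is the point of the summary: reliability requirements for nuclear command and control are orders of magnitude stricter than for ordinary AI applications.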
The controversy surrounding a second drone strike to eliminate survivors highlights a flawed moral calculus. Public objection focuses on the *inefficiency* of the first strike, not the lethal action itself. This inconsistent reasoning avoids the fundamental ethical question of whether the strike was justified in the first place.
Countering the common narrative, Anduril views AI in defense as the next step in Just War Theory. The goal is to enhance accuracy, reduce collateral damage, and take soldiers out of harm's way. This continues a historical military trend away from indiscriminate lethality towards surgical precision.