The catastrophic consequence of even a single nuclear submarine escaping a first strike sets an extraordinarily high bar for confidence. An attacker must be virtually 100% certain of eliminating all retaliatory forces simultaneously, a level of certainty that is practically unattainable.
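To see why that bar is so high, here is a minimal sketch of the compounding arithmetic. The per-target kill probabilities and force sizes are purely hypothetical, and targets are assumed independent; correlated failures in a real strike would only worsen the attacker's odds:

```python
# Illustrative only: how per-target confidence compounds across a
# retaliatory force. All numbers below are hypothetical.

def p_all_destroyed(p_kill: float, n_targets: int) -> float:
    """Probability that every one of n_targets is eliminated,
    assuming independent outcomes per target."""
    return p_kill ** n_targets

for p_kill in (0.90, 0.95, 0.99):
    for n_targets in (4, 12):
        p_survivor = 1 - p_all_destroyed(p_kill, n_targets)
        print(f"p_kill={p_kill:.2f}, targets={n_targets:2d}: "
              f"P(at least one survivor) = {p_survivor:6.1%}")
```

Even a 99% per-target kill probability against a dozen dispersed assets leaves roughly an 11% chance that something survives; when a single surviving submarine means catastrophe, that is nowhere near "virtually 100%."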
History shows that technological advantage is no guarantee of achieving political goals. The US held overwhelming technological superiority over its adversaries in Vietnam and Afghanistan yet ultimately failed to impose its will, suggesting that a state leading in AI could face similar limits.
Building massive sensor networks or missile-defense systems is physically observable, giving adversaries time to develop countermeasures. A sudden leap in AI-enabled intelligence processing, by contrast, can be invisible to outside observers, opening a window of vulnerability without warning.
Even if AI technology advances overnight, a state's ability to act on it is slowed by institutional factors. The need for testing, updating military doctrine, and securing political approval for a high-stakes action means that institutional adaptation will always lag technological progress.
AI experts typically lack deep knowledge of nuclear deterrence strategy, while the nuclear policy community is not fully versed in frontier AI. This mutual knowledge gap hinders accurate risk assessment and the development of sound policy.
In a world with nuclear weapons, conflicts between major powers are decided less by economic or military might than by which side demonstrates greater resolve and willingness to risk escalation. This caps how far one state can coerce another: the side with more at stake can credibly accept greater risk, whatever its material or technological disadvantage.
A state cannot test its systems for eliminating an adversary's entire nuclear arsenal without the test itself being mistaken for the start of a real war. This inability to rehearse creates fundamental, irreducible uncertainty about the plan's effectiveness for any potential attacker.
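One way to see how untestability bites is a simple Bayesian sketch, with all priors invented for illustration: model belief about the plan's end-to-end success probability as a Beta distribution, which narrows only with full-system rehearsals, exactly the trials that can never be run.

```python
# Illustrative only: uncertainty about an untestable plan. Belief in
# the plan's success probability is modeled as Beta(a, b); each
# successful end-to-end rehearsal would update a -> a + 1. The prior
# Beta(19, 1) (mean 0.95) is a hypothetical stand-in for confidence
# built from component tests and simulation alone.

def beta_mean_sd(a: float, b: float) -> tuple[float, float]:
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

for n_rehearsals in (0, 50, 500):
    mean, sd = beta_mean_sd(19 + n_rehearsals, 1)
    print(f"{n_rehearsals:3d} full rehearsals: "
          f"mean = {mean:.4f}, sd = {sd:.4f}")
```

With zero rehearsals, the attacker is stuck at whatever the prior says, and no amount of component testing substitutes for the hundreds of end-to-end trials needed to approach the near-certainty that the first point demands.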
Even if an attacker destroys an adversary's entire command-and-control structure, retaliation is not thereby prevented. Failsafe mechanisms such as Russia's 'Perimeter' system, reportedly designed to delegate launch authority to surviving duty officers if the leadership is wiped out, and the UK's 'letters of last resort', pre-written orders to submarine commanders for use when the government can no longer be reached, exist precisely to keep a second strike possible after a decapitation strike.
To maintain a second-strike capability, a country doesn't need equally advanced AI. Low-tech countermeasures like decoys, covering roads with netting, or simply moving missile launchers more frequently can create enough uncertainty to thwart a sophisticated, AI-driven first strike.
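A back-of-the-envelope sketch of the decoy logic, with hypothetical numbers throughout: it assumes the attacker cannot distinguish real launchers from decoys, allocates at most one warhead per aim point, and treats launcher fates as independent, which is an approximation.

```python
# Illustrative only: how cheap decoys dilute an attacker's
# confidence. All force sizes and kill probabilities are hypothetical.

def p_some_launcher_survives(real: int, decoys_per_real: int,
                             warheads: int, p_kill: float) -> float:
    aim_points = real * (1 + decoys_per_real)
    p_targeted = min(warheads / aim_points, 1.0)  # chance a real launcher is covered
    p_destroyed = p_targeted * p_kill
    return 1 - p_destroyed ** real

for decoys in (0, 3, 9):
    p = p_some_launcher_survives(real=20, decoys_per_real=decoys,
                                 warheads=60, p_kill=0.99)
    print(f"{decoys} decoys per launcher: "
          f"P(at least one real launcher survives) = {p:.1%}")
```

Without decoys, a near-perfect attacker is quite likely to get everything; a few cheap decoys per launcher flip those odds decisively, which is the sense in which low-tech countermeasures restore deterrent uncertainty.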
