AI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani

80,000 Hours Podcast · Mar 10, 2026

AI is unlikely to end Mutually Assured Destruction: a 'splendid first strike' that completely disarms an adversary remains nearly impossible, given physics and available countermeasures.

An Attacker Needs Near-Perfect Certainty to Gamble on a First Strike

The catastrophic consequence of even a single nuclear submarine escaping a first strike creates an incredibly high burden of proof. An attacker must be virtually 100% confident in eliminating all retaliatory forces simultaneously, a level of certainty that is practically unattainable.

Overwhelming Technological Superiority Does Not Guarantee Political Dominance

History shows that technological advantage is not a silver bullet for achieving political goals. The US possessed massive technological dominance over adversaries in Vietnam and Afghanistan but ultimately failed to impose its will, suggesting an AI leader could face similar limitations.

Invisible Software Breakthroughs Pose a Greater Threat Than Visible Hardware Advances

Building massive sensor networks or missile defense systems is physically observable, giving adversaries time to develop countermeasures. In contrast, a sudden leap in AI-enabled intelligence processing can be invisible, creating a surprise window of vulnerability with no warning.

Bureaucratic Inertia Provides a Natural Brake on Exploiting Rapid AI Breakthroughs

Even if AI technology advances overnight, a state's ability to act on it is slowed by institutional factors. The need for testing, updating military doctrine, and securing political approval for a high-stakes action means that institutional adaptation will always lag technological progress.

The AI and Nuclear Communities' Mutual Ignorance Creates a Major National Security Blind Spot

AI experts who understand emerging technologies lack deep knowledge of nuclear deterrence strategy. Conversely, the nuclear policy community is not fully versed in frontier AI. This knowledge gap hinders accurate risk assessment and the development of sound policy.

Nuclear Weapons Shift Competition From a 'Balance of Power' to a 'Balance of Nerves'

In a world with nuclear weapons, conflicts between major powers are determined less by economic or military might and more by which side demonstrates greater resolve and willingness to risk escalation. This dynamic places an upper bound on how much one state can coerce another.

A 'Splendid First Strike' Capability Can Never Be Reliably Tested in Advance

A state cannot test its systems for eliminating an adversary's entire nuclear arsenal without the test itself being mistaken for the start of a real war. This inability to rehearse creates fundamental, irreducible uncertainty about the plan's effectiveness for any potential attacker.

Automated 'Dead Hand' Systems Make Decapitation Strikes on Leadership Ineffective

Even if an attacker successfully destroys an adversary's entire command-and-control structure, retaliation is not prevented. Failsafe arrangements such as Russia's semi-automated 'Perimeter' system and the UK's pre-written 'letters of last resort' carried aboard its submarines are designed to ensure that a second strike still occurs.

Simple, Low-Tech Defenses Can Effectively Neutralize Advanced AI Surveillance Systems

To maintain a second-strike capability, a country doesn't need equally advanced AI. Low-tech countermeasures like decoys, covering roads with netting, or simply moving missile launchers more frequently can create enough uncertainty to thwart a sophisticated, AI-driven first strike.
