The popular scenario of an AI seizing control of nuclear arsenals is less plausible than commonly imagined. Nuclear Command, Control, and Communications (NC3) systems are highly classified and intentionally analog, precisely to block the kind of digital takeover such an AI would require.
The common analogy of AI to electricity is dangerously rosy. AI is more like fire: a transformative tool that, if mismanaged or weaponized, can spread uncontrollably with devastating consequences. This mental model better prepares us for AI's inherent risks and accelerating power.
While modernizing nuclear command and control systems seems logical, their current antiquated state offers a paradoxical security benefit. Sam Harris suggests this technological obsolescence makes them less vulnerable to modern hacking techniques, creating an unintentional layer of safety against cyber-initiated launches.
The 'P(doom)' argument is nonsensical because it lacks any plausible mechanism for how an AI could spontaneously gain agency and take over. This fear-mongering distracts from the immediate, tangible dangers of AI: mass production of fake data, political manipulation, and mass hysteria.
The path to surviving superintelligence is political: a global pact to halt its development, mirroring Cold War nuclear strategy. Success hinges on every leader understanding that whoever builds it guarantees their own destruction along with everyone else's, removing any incentive to cheat.
Contrary to fears of digital takeover, the US submarine-launched ballistic missile system is deliberately analog. Its primary navigation method is "star sighting," an ancient technique, which makes it resilient to hacking and external digital control: a fusion of primitive and advanced technology for ultimate security.
Security expert Alex Komorowski argues that current AI systems are fundamentally insecure. The lack of a large-scale breach is a temporary illusion created by the early stage of AI integration into critical systems, not a testament to the effectiveness of current defenses.
Public fear focuses on AI hypothetically creating new nuclear weapons. The more immediate danger is militaries trusting highly inaccurate AI systems for critical command and control decisions over existing nuclear arsenals, where even a small error rate could be catastrophic.
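A quick back-of-the-envelope calculation shows why even modest error rates become alarming at scale. The error rates and decision counts below are hypothetical assumptions chosen only to illustrate the compounding effect, not figures from the source:

```python
# Illustrative only: how a small per-decision error rate compounds over many
# decisions. The rates and decision counts are assumed, not sourced.

def prob_at_least_one_error(per_decision_error_rate: float, decisions: int) -> float:
    """Probability of at least one error across independent decisions."""
    return 1 - (1 - per_decision_error_rate) ** decisions

for rate in (0.001, 0.01, 0.05):      # assumed per-decision error rates
    for n in (100, 1_000):            # assumed number of alerts screened
        p = prob_at_least_one_error(rate, n)
        print(f"error rate {rate:.1%}, {n:>5} decisions -> "
              f"{p:.1%} chance of at least one error")
```

Under these assumptions, even a 1% per-decision error rate yields roughly a 63% chance of at least one mistake across 100 decisions, which is an unacceptable profile for nuclear command and control.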
Contrary to popular belief, military procurement involves some of the most rigorous safety and reliability testing. Current generative AI models, with their inherent high error rates, fall far short of these established thresholds that have long been required for defense systems.
The AI safety community fears losing control of AI. However, achieving perfect control of a superintelligence is equally dangerous. It grants godlike power to flawed, unwise humans. A perfectly obedient super-tool serving a fallible master is just as catastrophic as a rogue agent.
International AI treaties are feasible. Just as nuclear arms control monitors uranium and plutonium, AI governance can monitor the choke point for advanced AI: high-end compute chips from companies like NVIDIA. Tracking the global distribution of these chips could verify compliance with development limits.
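As a purely illustrative sketch of what compliance checking might look like, one could aggregate reported accelerator shipments per signatory and flag totals that exceed a declared compute ceiling. The chip specs, reporting format, and threshold below are invented assumptions, not part of any actual treaty mechanism:

```python
# Hypothetical sketch of compute-based verification: sum reported accelerator
# shipments per recipient and flag totals above a declared compute ceiling.
# All figures and fields here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ChipShipment:
    recipient: str         # signatory state or entity receiving the chips
    chip_model: str        # e.g. a high-end accelerator model
    quantity: int
    flops_per_chip: float  # assumed peak FLOP/s per chip

def total_compute(shipments: list[ChipShipment], recipient: str) -> float:
    """Sum peak FLOP/s delivered to one recipient across all reported shipments."""
    return sum(s.quantity * s.flops_per_chip for s in shipments if s.recipient == recipient)

def flag_violations(shipments: list[ChipShipment], ceiling_flops: float) -> list[str]:
    """Return recipients whose cumulative reported compute exceeds the ceiling."""
    recipients = {s.recipient for s in shipments}
    return [r for r in recipients if total_compute(shipments, r) > ceiling_flops]

# Example with invented numbers: one recipient exceeds an arbitrary ceiling.
reports = [
    ChipShipment("Country A", "H100-class", 200_000, 1e15),
    ChipShipment("Country B", "H100-class", 10_000, 1e15),
]
print(flag_violations(reports, ceiling_flops=1e20))  # -> ['Country A']
```

Real verification would of course require trusted reporting and enforcement; the point is only that compute, unlike ideas, is a countable physical quantity.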