International AI treaties, particularly with nations like China, are unlikely to hold based on trust alone. A stable agreement requires a dynamic akin to mutually assured destruction: the U.S. must develop and signal credible offensive capabilities to deter cheating.

Related Insights

For a blueprint on AI governance, look to Cold War-era geopolitics, not just tech history. The 1967 UN Outer Space Treaty, which established cooperation between the U.S. and the Soviet Union, shows that global compromise on new frontiers is possible even amidst intense rivalry. It provides a model for political, not just technical, solutions.

The path to surviving superintelligence is political: a global pact to halt its development, mirroring Cold War nuclear strategy. Success hinges on every leader understanding that whoever builds it ensures their own personal destruction, which removes any incentive to cheat.

The idea of nations collectively creating policies to slow AI development for safety is naive. Game theory dictates that the immense competitive advantage of achieving AGI first will drive nations and companies to race ahead, making any global regulatory agreement effectively unenforceable.

The same governments pushing AI competition for a strategic edge may be forced into cooperation. As AI democratizes access to catastrophic CBRN (chemical, biological, radiological, and nuclear) weapons, the national security risk will become so great that even rival superpowers will have a mutual incentive to create verifiable safety treaties.

Dario Amodei frames AI chip export controls not as a permanent blockade, but as a strategic play for leverage. The goal is to ensure that when the world eventually negotiates the "rules of the road" for the post-AGI era, democratic nations are in a stronger bargaining position relative to authoritarian states like China.

Game theory explains why AI development won't stop. For competing nations like the US and China, the risk of falling behind is greater than the collective risk of developing the technology. This dynamic makes the AI race an unstoppable force, mirroring the Cold War nuclear arms race and rendering calls for a pause futile.
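The race dynamic described here is the classic prisoner's dilemma: whatever a rival does, racing looks better than pausing, so racing is a dominant strategy. A minimal sketch with hypothetical payoff numbers (not drawn from the source) makes the logic concrete:

```python
# Toy 2x2 "AI race" game. Payoff numbers are hypothetical illustrations.
# Strategies: "pause" (honor a development halt) or "race" (develop anyway).
payoffs = {
    # (row move, column move): (row payoff, column payoff)
    ("pause", "pause"): (3, 3),   # shared safety, no edge for either side
    ("pause", "race"):  (0, 4),   # the pauser falls strategically behind
    ("race",  "pause"): (4, 0),   # the racer gains a decisive edge
    ("race",  "race"):  (1, 1),   # both bear the collective risk
}

def best_response(opponent_move):
    """Return the row player's best reply to a fixed opponent move."""
    return max(["pause", "race"],
               key=lambda mine: payoffs[(mine, opponent_move)][0])

# Racing is the best reply whatever the rival does: a dominant strategy.
print(best_response("pause"))  # race
print(best_response("race"))   # race
```

Under these assumed payoffs both sides race and land on the mutually worse (1, 1) outcome, which is exactly why the surrounding insights argue that only changing the payoffs (via treaties or deterrence) can change the equilibrium.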

The belief that AI development is unstoppable ignores history. Global treaties successfully limited nuclear proliferation, phased out ozone-depleting CFCs, and banned blinding lasers. These precedents prove that coordinated international action can steer powerful technologies away from the worst outcomes.

In his writing, Anthropic CEO Dario Amodei proposes using an AI advantage to 'make China an offer they can't refuse,' forcing them to abandon competition with democracies. The host argues this is an extremely reckless position that fuels an arms-race dynamic, especially when other leaders like Google DeepMind's Demis Hassabis consistently call for international collaboration.

While making powerful AI open-source creates risks from rogue actors, it is preferable to centralized control by a single entity. Widespread access acts as a deterrent based on mutually assured destruction, preventing any one group from using AI as a tool for absolute power.

International AI treaties are feasible. Just as nuclear arms control monitors uranium and plutonium, AI governance can monitor the choke point for advanced AI: high-end compute chips from companies like NVIDIA. Tracking the global distribution of these chips could verify compliance with development limits.
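In the spirit of the chip-tracking idea above, verification reduces to reconciling what a party declares against what the supply chain records. A minimal sketch, with entirely hypothetical recipients and figures:

```python
# Toy treaty-verification check: reconcile declared accelerator
# inventories against recorded chip shipments. All names and numbers
# below are hypothetical illustrations, not real data.
shipments = [  # (recipient, chips shipped)
    ("countryA", 50_000),
    ("countryA", 30_000),
    ("countryB", 10_000),
]

declared = {"countryA": 60_000, "countryB": 10_000}

def audit(shipments, declared):
    """Flag recipients whose declared chip count differs from the
    shipment ledger, returning (shipped, declared) per discrepancy."""
    totals = {}
    for recipient, count in shipments:
        totals[recipient] = totals.get(recipient, 0) + count
    return {r: (totals[r], declared.get(r, 0))
            for r in totals
            if totals[r] != declared.get(r, 0)}

print(audit(shipments, declared))  # {'countryA': (80000, 60000)}
```

Real verification would of course be far harder (smuggling, re-exports, domestic fabrication), but the choke-point argument is that high-end chips are scarce and traceable enough for this kind of ledger reconciliation to be meaningful.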