International AI treaties are feasible. Just as nuclear arms control monitors uranium and plutonium, AI governance can monitor the choke point for advanced AI: high-end compute chips from companies like NVIDIA. Tracking the global distribution of these chips could verify compliance with development limits.
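
To make the verification idea concrete, here is a minimal Python sketch of a compute-tracking ledger. The class names, fields, and per-country FLOP ceiling are illustrative assumptions, not a real registry schema or any proposed treaty mechanism.

```python
from dataclasses import dataclass

@dataclass
class ChipShipment:
    chip_model: str        # e.g. "H100" (illustrative)
    quantity: int
    flops_per_chip: float  # peak FLOP/s per chip
    destination: str       # declared end-user country

class ComputeLedger:
    """Hypothetical treaty ledger: tally tracked compute per country."""

    def __init__(self, cap_flops: float):
        self.cap_flops = cap_flops           # assumed treaty ceiling
        self.holdings: dict[str, float] = {}

    def record(self, s: ChipShipment) -> None:
        # Accumulate each shipment's total compute under its destination.
        self.holdings[s.destination] = (
            self.holdings.get(s.destination, 0.0) + s.quantity * s.flops_per_chip
        )

    def violations(self) -> list[str]:
        # Countries whose tracked compute exceeds the agreed ceiling.
        return [c for c, f in self.holdings.items() if f > self.cap_flops]
```

Real verification would of course need tamper-resistant reporting and auditing; the sketch only illustrates the core point that compute, unlike intent, is countable.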

Related Insights

The decision to allow NVIDIA to sell powerful AI chips to China has a counterintuitive goal. The administration believes that by supplying China, it can "take the air out" of the country's own efforts to build a self-sufficient AI chip ecosystem, thereby hindering domestic firms like Huawei.

For a blueprint on AI governance, look to Cold War-era geopolitics, not just tech history. The 1967 UN Outer Space Treaty, which established cooperation between the US and Soviet Union, shows that global compromise on new frontiers is possible even amidst intense rivalry. It provides a model for political, not just technical, solutions.

The same governments pushing AI competition for a strategic edge may be forced into cooperation. As AI democratizes access to catastrophic chemical, biological, radiological, and nuclear (CBRN) weapons, the national security risk will become so great that even rival superpowers will have a mutual incentive to create verifiable safety treaties.

Game theory explains why AI development won't stop. For competing nations like the US and China, the risk of falling behind is greater than the collective risk of developing the technology. This dynamic makes the AI race an unstoppable force, mirroring the Cold War nuclear arms race and rendering calls for a pause futile.
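
The underlying structure is a prisoner's dilemma, which a toy payoff matrix makes explicit. The payoff numbers below are made-up assumptions, chosen only so that falling behind is the worst outcome:

```python
# Strategies: "pause" or "race"; payoffs are (player 0, player 1).
payoffs = {
    ("pause", "pause"): (3, 3),   # mutual restraint: shared safety
    ("pause", "race"):  (0, 4),   # the pauser falls behind
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),   # mutual risk, but nobody falls behind
}

def best_response(my_options, their_choice, me):
    # Pick the strategy maximizing my payoff given the rival's choice.
    def my_payoff(mine):
        pair = (mine, their_choice) if me == 0 else (their_choice, mine)
        return payoffs[pair][me]
    return max(my_options, key=my_payoff)

# Whatever the rival does, racing pays more, so (race, race) is the
# unique Nash equilibrium even though (pause, pause) is better for both.
for rival in ("pause", "race"):
    assert best_response(("pause", "race"), rival, me=0) == "race"
```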

If NVIDIA's CEO truly believed AGI was imminent, the most rational action would be to hoard his company's chips to build it himself. His current strategy of selling this critical resource to all players is the strongest evidence that he does not believe in a near-term superintelligence breakthrough.

Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, forcing them to demonstrate their systems cannot cause these predefined harms, sidestepping definitional debates.

Unable to compete globally on inference-as-a-service due to US chip sanctions, China has pivoted to releasing top-tier open-source models. This serves as a powerful soft power play, appealing to other nations and building a technological sphere of influence independent of the US.

A nation's advantage is its "intelligent capital stock": its total GPU compute power multiplied by the quality of its AI models. This explains why the US restricts GPU sales to China, and why China counters by excelling in open-source models to close the gap.
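
A back-of-the-envelope sketch of that multiplication, with every number a made-up placeholder rather than an estimate of actual US or Chinese capacity:

```python
def capital_stock(gpu_flops: float, model_quality: float) -> float:
    """Multiplicative proxy: neither compute nor models alone suffice."""
    return gpu_flops * model_quality

# Placeholder inputs purely to show the arithmetic.
us    = capital_stock(gpu_flops=10.0, model_quality=0.90)  # compute-rich
china = capital_stock(gpu_flops=4.0,  model_quality=0.85)  # compute-constrained

# Export controls shrink the first factor; strong open-source models
# raise the second, partially closing the gap.
print(f"US: {us:.1f}  China: {china:.1f}  ratio: {us/china:.2f}")
```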

The belief that AI development is unstoppable ignores history. Global treaties successfully limited nuclear proliferation, phased out ozone-depleting CFCs, and banned blinding lasers. These precedents prove that coordinated international action can steer powerful technologies away from the worst outcomes.

The history of nuclear power, where regulation flattened an exponential growth curve into an S-curve, serves as a powerful warning for AI. This suggests that AI's biggest long-term hurdle may not be technical limits but regulatory intervention that stifles its potential for a "fast takeoff," effectively regulating it out of rapid adoption.
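
The shape difference is easy to see numerically. Below is a minimal sketch comparing unconstrained exponential growth with a regulation-capped logistic curve; the growth rate and ceiling are illustrative assumptions, not fitted to nuclear or AI data.

```python
import math

r, K = 0.5, 100.0  # assumed growth rate per year; assumed regulatory ceiling

def exponential(t: float, x0: float = 1.0) -> float:
    # Unconstrained growth: compounds without limit.
    return x0 * math.exp(r * t)

def logistic(t: float, x0: float = 1.0) -> float:
    # Same early growth rate, but saturating at the cap K (an S-curve).
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 20):
    print(f"t={t:2d}  exp={exponential(t):10.1f}  s-curve={logistic(t):6.1f}")
```

Early on the two curves are nearly indistinguishable; the cap only reveals itself later, which is why an S-curve can masquerade as a fast takeoff in its first years.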