Despite intense technological competition, both the U.S. and China face a common threat from non-state actors like terrorist or criminal groups acquiring powerful AI models. This shared vulnerability presents a potential opportunity for cooperation on AI regulation and safeguards, even amid broader strategic rivalry.

Related Insights

While U.S. advocates for AI cooperation with China often feel they are in a marginalized minority fighting a hawkish narrative, their counterparts in China feel their position is mainstream. Chinese academia, industry, and think tanks broadly view international governance collaboration as a priority, not just an acceptable option.

Leading AI labs, despite intense competition, are collaborating through the Frontier Model Forum to detect and prevent Chinese firms from creating imitation models. This rare alliance is driven by the shared existential threat that 'adversarial distillation' poses to their business models and to U.S. national security.

In the race for AGI, framing the primary conflict as U.S. vs. China is a mistake. The true "aliens" are the AIs themselves, which are fundamentally different from any human culture. We have far more in common with our fellow humans, even rivals, and should prioritize cooperation with them over racing to build uncontrollable systems.

The notion that tough export controls deny diplomatic space for AI risk discussions with China is a "mental model error." The Biden administration proved it's possible to compete vigorously by implementing chip restrictions while simultaneously engaging in government-to-government dialogue on AI-enabled nuclear risk.

The same governments pushing AI competition for a strategic edge may be forced into cooperation. As AI democratizes access to catastrophic chemical, biological, radiological, and nuclear (CBRN) weapons, the national security risk will become so great that even rival superpowers will have a mutual incentive to create verifiable safety treaties.

Despite intense domestic rivalry, top US AI labs like OpenAI, Anthropic, and Google are collaborating to detect "adversarial distillation"—where Chinese firms copy their models. This rare cooperation shows the shared commercial and national security threat from foreign competitors outweighs their direct competition.

Despite being fierce competitors, major AI labs work together behind the scenes. They share intelligence on suspicious API usage from shell companies to identify and thwart large-scale, coordinated distillation attacks from foreign adversaries, which might otherwise go undetected by a single lab.

The emergence of high-quality, open-source AI models from China (like Kimi and DeepSeek) has shifted the conversation in Washington, D.C. It reframes AI development from a domestic regulatory risk to a geopolitical footrace, reducing the appetite for restrictive legislation that could cede leadership to China.

Framing the U.S.-China AI dynamic as a zero-sum race is inaccurate. The reality is a complex "coopetition" in which both sides compete, cooperate on research, and actively co-opt each other's open-weight models to accelerate their own development, creating deep interdependencies.

The AI competition is not a race to develop the most powerful technology, but a race to see which nation is better at steering and governing that power. Developing an uncontrollable 'AI bazooka' first is not a win; true advantage comes from creating systems that strengthen, rather than weaken, one's own society.