Challenging the narrative of pure technological competition, Jensen Huang points out that American AI labs and startups significantly benefited from Chinese open-source contributions like the DeepSeek model. This highlights the global, interconnected nature of AI research, where progress in one nation directly aids others.
Joe Tsai reframes the US-China 'AI race' as a marathon won by adoption speed, not model size. He notes China’s focus on open source and smaller, specialized models (e.g., for mobile devices) is designed for faster proliferation and practical application. The goal is to diffuse technology throughout the economy quickly, rather than simply building the single most powerful model.
The emergence of high-quality open-source models from China drastically shortens the window of exclusivity that closed-source leaders enjoy. This competition is healthy for startups, providing them with a broader array of cheaper, powerful models to build on and preventing a single company from becoming a chokepoint.
Counterintuitively, China's lead in open-source AI models reflects a deliberate strategy. Open-sourcing allows it to attract global developer talent and accelerate its progress. It also serves to commoditize software, which complements its national strength in hardware manufacturing, a classic competitive tactic.
The rise of Chinese AI models like DeepSeek and Kimi in 2025 was driven by the startup and developer communities, not large enterprises. This bottom-up adoption pattern is reshaping the open-source landscape, creating a new competitive dynamic where nimble startups are leveraging these models long before they are vetted by corporate buyers.
Unable to compete globally on inference-as-a-service due to US chip sanctions, China has pivoted to releasing top-tier open-source models. This serves as a powerful soft power play, appealing to other nations and building a technological sphere of influence independent of the US.
The emergence of high-quality, open-source AI models from China (like Kimi and DeepSeek) has shifted the conversation in Washington D.C. It reframes AI development from a domestic regulatory risk to a geopolitical foot race, reducing the appetite for restrictive legislation that could cede leadership to China.
The initial fear around DeepSeek was about China surpassing US AI capabilities. The lasting, more subtle impact is that it broke a psychological barrier, making it commonplace for American developers and companies to adopt and build upon powerful open-source models originating from China.
Framing the US-China AI dynamic as a zero-sum race is inaccurate. The reality is a complex 'coopetition' where both sides compete, cooperate on research, and actively co-opt each other's open-weight models to accelerate their own development, creating deep interdependencies.
While the U.S. leads in closed, proprietary AI models like OpenAI's, Chinese companies now dominate the leaderboards for open-source models. Because they are cheaper and easier to deploy, these Chinese models are seeing rapid global uptake, challenging the U.S.'s perceived lead in AI through wider diffusion and application.
To escape platform risk and high API costs, startups are increasingly building on models they control. The strategy involves taking powerful, state-subsidized open-source models from China and fine-tuning them for specific use cases, creating a competitive alternative to relying on APIs from OpenAI or Anthropic.
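As an illustration of what that fine-tuning step might look like in practice, here is a minimal sketch using the Hugging Face transformers, datasets, and peft libraries to attach LoRA adapters to an open-weight checkpoint rather than retraining the full model. The model name, dataset path, and hyperparameters are placeholder assumptions, not a specific recommendation.

```python
# Minimal sketch: LoRA fine-tuning of an open-weight model for a narrow use case,
# as an alternative to calling a proprietary API. Checkpoint, data file, and
# hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE_MODEL = "deepseek-ai/deepseek-llm-7b-base"  # assumed open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padding during collation
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach small low-rank adapters instead of updating all base weights.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Domain-specific corpus of {"text": ...} records (path is a placeholder).
data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

data = data.map(tokenize, batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-adapter",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("finetuned-adapter")  # saves only the small adapter weights
```

The appeal for a startup is that only the lightweight adapter needs to be versioned and deployed on top of the freely available base model, sidestepping per-token API pricing and vendor lock-in.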