
While many focus on OpenAI and Google, significant breakthroughs are happening in China. Alibaba's Qwen models are powerful enough to run offline on a laptop, and DeepSeek has developed a self-learning math model, indicating a rapid pace of innovation that Western marketers overlook at their peril.

Related Insights

The perception of China's AI industry as a "fast follower" is outdated. Models like ByteDance's Seedance 2.0 are not just catching up on quality but introducing technical breakthroughs—like simultaneous sound generation—that haven't yet appeared in Western models, signaling a shift to true innovation.

China is gaining an efficiency edge in AI by using "distillation"—training smaller, cheaper models from larger ones. This "train the trainer" approach is much faster and challenges the capital-intensive US strategy, highlighting how inefficient and "bloated" current Western foundational models are.
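The blurb above describes distillation only at a high level. A minimal numerical sketch of the standard teacher-student objective (temperature-scaled soft targets with a KL-divergence loss, as in Hinton et al.'s formulation) illustrates the idea; all logits and values here are illustrative toy numbers.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-top classes.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between teacher and student soft targets,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)   # teacher's soft labels
    q = softmax(student_logits, T)   # student's predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

# Toy check: a student that matches the teacher incurs ~zero loss,
# while a diverging student is penalized.
teacher = [4.0, 1.0, 0.5]
aligned = distillation_loss(teacher, [4.0, 1.0, 0.5])
diverged = distillation_loss(teacher, [0.5, 1.0, 4.0])
print(aligned, diverged)
```

In practice the smaller model is trained on millions of such teacher outputs instead of raw labels, which is why a capable student can be produced far more cheaply than training from scratch.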

While US firms lead in cutting-edge AI, the impressive quality of open-source models from China is compressing the market. As these free models improve, more tasks become "good enough" for open source, creating significant pricing pressure on premium, closed-source foundation models from companies like OpenAI and Google.

Joe Tsai reframes the US-China 'AI race' as a marathon won by adoption speed, not model size. He notes China’s focus on open source and smaller, specialized models (e.g., for mobile devices) is designed for faster proliferation and practical application. The goal is to diffuse technology throughout the economy quickly, rather than simply building the single most powerful model.

Unlike the largely closed-source US market, DeepSeek's open-source models spurred intense competition among Chinese tech giants and startups to release their own open offerings. This has made Chinese open-source models the most used globally by token count, creating a distinct competitive dynamic.

Challenging the narrative of pure technological competition, Jensen Huang points out that American AI labs and startups significantly benefited from Chinese open-source contributions like the DeepSeek model. This highlights the global, interconnected nature of AI research, where progress in one nation directly aids others.

The rise of Chinese AI models like DeepSeek and Kimi in 2025 was driven by the startup and developer communities, not large enterprises. This bottom-up adoption pattern is reshaping the open-source landscape, creating a new competitive dynamic where nimble startups are leveraging these models long before they are vetted by corporate buyers.

Airbnb's reliance on Alibaba's Qwen 3 model as a more affordable alternative to US models signals a critical trend. As Chinese models approach performance parity, their significant cost advantage is making them a viable and attractive choice for Western companies, challenging the market dominance of US-based labs.

While the U.S. leads in closed, proprietary AI models like OpenAI's, Chinese companies now dominate the leaderboards for open-source models. Because they are cheaper and easier to deploy, these Chinese models are seeing rapid global uptake, challenging the U.S.'s perceived lead in AI through wider diffusion and application.

To escape platform risk and high API costs, startups are building their own AI models. The strategy involves taking powerful, state-subsidized open-source models from China and fine-tuning them for specific use cases, creating a competitive alternative to relying on APIs from OpenAI or Anthropic.
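One common way to do the fine-tuning described above is a parameter-efficient method such as LoRA: the open model's pretrained weights stay frozen, and only a small low-rank adapter is trained for the specific use case. A toy numpy sketch of the core idea (matrix sizes and names are illustrative, not any real model's dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                       # hidden size, low-rank bottleneck (toy values)
W = rng.normal(size=(d, d))       # frozen pretrained weight from the base model

# Trainable LoRA-style adapter: only A and B are updated during fine-tuning.
A = rng.normal(scale=0.01, size=(d, r))
B = np.zeros((r, d))              # zero-init so training starts exactly at W

def forward(x):
    # Adapted layer: base weight plus the low-rank update A @ B.
    # Only d*r*2 adapter parameters are trained instead of d*d.
    return x @ (W + A @ B)

x = rng.normal(size=(1, d))
base_out = x @ W
adapted_out = forward(x)
# With B zero-initialized, the adapted model initially matches the base model.
print(np.allclose(base_out, adapted_out))
```

Because only the small adapter is trained and served alongside the frozen base weights, a startup can specialize a large open model on modest hardware, which is what makes this an economical alternative to per-token API pricing.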