According to DeepMind CEO Demis Hassabis, while Chinese AI models are rapidly closing the capability gap with US counterparts, they have yet to demonstrate the ability to produce truly novel breakthroughs on the scale of the transformer architecture. Their strength lies in catching up to the frontier, not pushing beyond it.

Related Insights

Demis Hassabis states that while current AI capabilities are somewhat overhyped due to fundraising pressures on startups, the medium- to long-term transformative impact of the technology is still deeply underappreciated. This creates a disconnect between market perception and true potential.

China is gaining an efficiency edge in AI through "distillation": training smaller, cheaper "student" models to imitate the outputs of larger "teacher" models. This approach is far faster than training from scratch, challenges the capital-intensive US strategy, and highlights how inefficient and "bloated" current Western foundation models are.
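The mechanism described above can be sketched in a few lines. Below is a minimal, illustrative example of the soft-target loss at the heart of distillation; the function names and the temperature-scaled KL formulation (in the style of Hinton et al.'s original distillation paper) are assumptions for illustration, not anything from the insights themselves:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs.

    The T**2 factor keeps gradient magnitudes comparable across
    temperatures, as in the standard distillation formulation.
    """
    p = softmax(teacher_logits, T)  # soft targets from the large teacher
    q = softmax(student_logits, T)  # the small student's predictions
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean() * T * T)

# A student that matches the teacher's logits incurs zero loss;
# a mismatched student is pushed toward the teacher's distribution.
matched = distill_loss([[2.0, 0.0, -2.0]], [[2.0, 0.0, -2.0]])
mismatched = distill_loss([[2.0, 0.0, -2.0]], [[0.0, 0.0, 0.0]])
```

The economic point follows from the setup: the student only has to match the teacher's output distribution, rather than rediscover that structure from raw data, which is why distillation is so much cheaper than frontier-scale pretraining.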

Challenging the narrative of pure technological competition, Jensen Huang points out that American AI labs and startups significantly benefited from Chinese open-source contributions like the DeepSeek model. This highlights the global, interconnected nature of AI research, where progress in one nation directly aids others.

Marc Andreessen observes that once a company demonstrates a new AI capability is possible, competitors can catch up rapidly. This suggests that first-mover advantage in AI may be less durable than in previous tech waves, as seen with companies like xAI matching state-of-the-art models in under a year.

Demis Hassabis explains that current AI models have "jagged intelligence": performing at a PhD level on some tasks but failing at high-school-level logic on others. He identifies this lack of consistency as a primary obstacle to achieving true Artificial General Intelligence (AGI).

Despite strong benchmark scores, top Chinese AI models (from Z.AI, Kimi, DeepSeek) are "nowhere close" to US models like Claude or Gemini on complex, real-world vision tasks, such as accurately reading a messy scanned document. This suggests benchmarks fail to capture a significant real-world performance gap.

Contrary to the prevailing "scaling laws" narrative, leaders at Z.AI believe that simply adding more data and compute to current transformer architectures yields diminishing returns. They operate under the conviction that a fundamental performance "wall" exists, necessitating research into new architectures for the next leap in capability.

While the US focuses on creating the most advanced AI models, China's real strength may be its proven ability to orchestrate society-wide technology adoption. Deep integration and widespread public enthusiasm for AI could ultimately provide a more durable competitive advantage.

The US-China AI race is a "game of inches." While America leads in conceptual breakthroughs, China excels at rapid implementation and scaling. This dynamic reduces any American advantage to a matter of months, requiring constant, fast-paced innovation to maintain leadership.

While the US leads in closed, proprietary AI models like OpenAI's, Chinese companies now dominate the leaderboards for open-source models. Because they are cheaper and easier to deploy, these Chinese models are seeing rapid global uptake, challenging the US's perceived lead in AI through wider diffusion and application.