The 2020 research formalizing AI's "scaling laws" was the key turning point for policymakers. It provided strong empirical evidence, in the form of power-law relationships, that AI capabilities scaled predictably with computing power, solidifying the conviction that compute, not data, was the critical resource to control in U.S.-China competition.

Related Insights

A 10x increase in compute may only yield a one-tier improvement in model performance. This appears inefficient but can be the difference between a useless "6-year-old" intelligence and a highly valuable "16-year-old" intelligence, unlocking entirely new economic applications.
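The sub-linear return described above can be sketched as a toy power-law scaling curve. The constant and exponent below are illustrative assumptions chosen for readability, not fitted values from any real scaling study:

```python
# Toy power-law scaling curve: loss(C) = a * C**(-alpha).
# `a` and `alpha` are ILLUSTRATIVE assumptions, not measured constants.
a, alpha = 10.0, 0.05

def loss(compute: float) -> float:
    """Model loss as a power law in training compute (toy model)."""
    return a * compute ** (-alpha)

# A 10x jump in compute shaves only a modest, fixed fraction off the loss...
for c in (1e21, 1e22, 1e23):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")

# ...yet each such step can cross a capability threshold that unlocks
# qualitatively new applications, which is why the "inefficient" 10x is paid.
```

Under this toy curve, every 10x in compute multiplies the loss by the same factor (10^-alpha), which captures why gains look inefficient on paper but compound into tier jumps.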

The progression from early neural networks to today's massive models is fundamentally driven by the exponential growth in available computational power, from the initial shift to GPUs to the roughly million-fold increase in compute applied to training a single model.

While the U.S. currently leads in AI with superior chips, China's state-controlled power grid is growing roughly 10x faster than America's and can be directed towards AI data centers. This creates a scenario where, if AGI is a short-term race, the U.S. wins; if it's a long-term build-out, China's superior energy infrastructure could be the deciding factor.

The primary bottleneck for scaling AI over the next decade may be the difficulty of bringing gigawatt-scale power online to support data centers. Smart money is already focused on this challenge, which is more complex than silicon supply.

The conversation around AI and government has evolved past regulation. Now, the immense demand for power and hardware to fuel AI development directly influences international policy, resource competition, and even provides justification for military actions, making AI a core driver of geopolitics.

Beyond algorithms and talent, China's key advantage in the AI race is its massive investment in energy infrastructure. While the U.S. grid struggles to expand, China is adding roughly 10x more solar capacity and building 33 nuclear plants, ensuring it will have the immense power required to train and run future AI models at scale.

A nation's advantage is its "intelligent capital stock": its total GPU compute multiplied by the quality of its AI models. This framing explains why the U.S. restricts GPU sales to China, and why China counters by excelling in open-source models to close the model-quality side of the gap.
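The "intelligent capital stock" heuristic above can be sketched as a two-factor product. The function name and every number here are hypothetical placeholders for illustration; they are not figures from the source:

```python
# Hedged sketch of the "intelligent capital stock" heuristic:
# stock = aggregate GPU compute x frontier model quality.
# All values are HYPOTHETICAL, normalized so the leader's compute = 1.0.

def intelligent_capital_stock(gpu_compute: float, model_quality: float) -> float:
    """Toy metric: compute base weighted by model quality (both in [0, 1])."""
    return gpu_compute * model_quality

us = intelligent_capital_stock(gpu_compute=1.0, model_quality=1.0)
china = intelligent_capital_stock(gpu_compute=0.4,   # export-restricted chips
                                  model_quality=0.9)  # strong open-source models

print(f"US: {us:.2f}  China: {china:.2f}")
```

In this framing, chip export controls push down the `gpu_compute` term while open-source progress pushes up `model_quality`, which is exactly the tug-of-war the insight describes.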

For entire countries or industries, aggregate compute power is the primary constraint on AI progress. However, for individual organizations, success hinges not on having the most capital for compute, but on the strategic wisdom to select the right research bets and build a culture that sustains them.

AI's computational needs are not limited to initial training. They compound due to post-training (reinforcement learning) and inference-time compute (multi-step reasoning), creating a much larger demand profile than training alone suggests and driving a projected billion-fold increase in total compute.
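The compounding effect can be made concrete with a toy decomposition of total demand relative to a single pretraining run. The multipliers are illustrative assumptions, not measured figures:

```python
# Toy decomposition of lifetime compute demand for one model,
# expressed as multiples of its pretraining run.
# All multipliers are ILLUSTRATIVE assumptions.
pretraining = 1.0            # baseline: one pretraining run
post_training_mult = 10.0    # RL / fine-tuning layered on top
inference_mult = 100.0       # multi-step reasoning served to many users over time

total = pretraining + post_training_mult + inference_mult
print(f"total demand = {total:.0f}x a single pretraining run")
```

Even with modest per-stage multipliers, most lifetime compute lands outside the initial training run, which is why demand forecasts keyed to training alone undershoot.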

As hyperscalers build massive new data centers for AI, the critical constraint is shifting from semiconductor supply to energy availability. The core challenge becomes sourcing enough power, raising new geopolitical and environmental questions that will define the next phase of the AI race.