As single data centers hit power limits, AI training clusters are expanding across sites hundreds of kilometers apart. This "scale across" model creates a new engineering challenge: preventing packet loss, which can ruin expensive training runs. The solution lies in silicon-level innovations like deep buffering, which absorbs the large volume of data in flight on a long-haul link so that transfers stay lossless over distance.
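The buffering requirement falls out of the bandwidth-delay product: a switch that has to pause a sender must absorb roughly one round trip's worth of in-flight data. A minimal back-of-envelope sketch, with a purely illustrative link speed and distance:

```python
# Back-of-envelope: the buffer needed to keep a long-haul link lossless.
# The bandwidth-delay product (BDP) is the volume of data "in flight"
# on the fiber; a switch that pauses a sender must be able to absorb
# roughly one round trip's worth of traffic.

SPEED_IN_FIBER_KM_PER_S = 200_000  # ~2/3 the speed of light, in glass

def buffer_needed_gb(link_gbps: float, distance_km: float) -> float:
    """Bandwidth-delay product for one round trip, in gigabytes."""
    rtt_s = 2 * distance_km / SPEED_IN_FIBER_KM_PER_S
    bits_in_flight = link_gbps * 1e9 * rtt_s
    return bits_in_flight / 8 / 1e9

# Illustrative numbers: an 800 Gbps inter-site link spanning 1,000 km.
rtt_ms = 2 * 1000 / SPEED_IN_FIBER_KM_PER_S * 1000
print(f"Round-trip time: {rtt_ms:.0f} ms")                       # 10 ms
print(f"Buffer per link: {buffer_needed_gb(800, 1000):.1f} GB")  # 1.0 GB
```

At 800 Gbps over 1,000 km, that works out to about a gigabyte of in-flight data per link, well beyond the tens of megabytes of on-chip SRAM typical of shallow-buffer switch ASICs, which is why deep-buffer silicon comes up in this context.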

Related Insights

The U.S. has plenty of power for the AI boom, but it's in the wrong places—far from existing data centers, fiber networks, and population centers. The critical challenge is not generation capacity but rather bridging the geographical gap between where power is abundant and where it is needed.

Contrary to the common focus on chip manufacturing, the immediate bottleneck for building new AI data centers is energy. Factors like power availability, grid interconnects, and high-voltage equipment are the true constraints, forcing companies to explore solutions like on-site power generation.

The limiting factor for large-scale AI compute is no longer physical space but the availability of electrical power. As a result, the industry now sizes data center capacity, and even describes deals, in megawatts, reflecting the primary constraint on growth.
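A quick power budget shows why megawatts became the unit. This sketch uses purely illustrative figures (GPU count, per-GPU share, and PUE are assumptions, not numbers from the source):

```python
# Why "megawatts" is the sizing unit: a power budget for a hypothetical
# GPU cluster. All figures are illustrative assumptions.

N_GPUS = 100_000
KW_PER_GPU = 1.4   # accelerator plus its share of CPU, network, storage
PUE = 1.3          # facility overhead: cooling, power conversion, etc.

it_load_mw = N_GPUS * KW_PER_GPU / 1000
facility_mw = it_load_mw * PUE
print(f"IT load:        {it_load_mw:.0f} MW")   # 140 MW
print(f"Facility power: {facility_mw:.0f} MW")  # 182 MW
```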

When splitting jobs across thousands of GPUs, inconsistent communication times (jitter) create bottlenecks: a synchronized collective step finishes only when the slowest message arrives, so high jitter forces the use of fewer GPUs. A network with predictable, uniform latency enables far greater parallelization and overall cluster efficiency, making it more important than raw 'hero number' bandwidth.
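A small simulation makes the effect concrete: step time tracks the maximum of N latency samples, so jitter's penalty grows with GPU count. The latency distribution below is an illustrative assumption, not a measurement:

```python
# A synchronized collective (e.g. all-reduce) completes only when the
# slowest participant's message arrives, so step time is the max of
# N noisy latency samples, and jitter's cost grows with N.
import random

def mean_step_time_us(n_gpus: int, mean_us: float, jitter_us: float,
                      trials: int = 300) -> float:
    """Average barrier completion time over n_gpus jittery latencies."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(mean_us, jitter_us) for _ in range(n_gpus))
    return total / trials

random.seed(0)
for n in (8, 512, 4096):
    low = mean_step_time_us(n, mean_us=10.0, jitter_us=0.5)
    high = mean_step_time_us(n, mean_us=10.0, jitter_us=5.0)
    print(f"{n:>4} GPUs: low-jitter {low:5.1f} us, high-jitter {high:5.1f} us")
```

With a 10 µs mean, the high-jitter fabric roughly triples step time at 4,096 GPUs while the low-jitter fabric barely moves, which is the sense in which uniform latency beats a headline bandwidth figure.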

Pre-training requires constant, high-bandwidth synchronization of weights and gradients, making it difficult to split across data centers. Newer Reinforcement Learning (RL) methods spend most of their compute on local forward passes to generate rollouts, sending back only small amounts of verified data, which makes geographically distributed training far more practical.
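The asymmetry is easy to quantify with rough arithmetic. Every size in this sketch is an illustrative assumption (model scale, rollout length, batch counts), not a figure from the source:

```python
# Rough per-step communication volume: pre-training gradient sync vs.
# RL rollout collection. All sizes are illustrative assumptions.

PARAMS = 1e12            # hypothetical 1T-parameter model
BYTES_PER_GRADIENT = 2   # bf16

# Pre-training: every optimizer step synchronizes a full gradient set
# across data-parallel replicas.
pretrain_bytes = PARAMS * BYTES_PER_GRADIENT

# RL post-training: rollouts are generated locally with forward passes;
# only sampled tokens plus rewards cross the inter-site link.
TOKENS_PER_ROLLOUT = 8_192
BYTES_PER_TOKEN = 4      # token id plus metadata, illustrative
ROLLOUTS_PER_STEP = 1_024
rl_bytes = TOKENS_PER_ROLLOUT * BYTES_PER_TOKEN * ROLLOUTS_PER_STEP

print(f"Pre-training sync per step: {pretrain_bytes / 1e12:.1f} TB")
print(f"RL rollout data per step:   {rl_bytes / 1e6:.1f} MB")
print(f"Ratio: ~{pretrain_bytes / rl_bytes:,.0f}x")
```

Under these assumptions, one pre-training step moves tens of thousands of times more data across the link than one RL step, which is why RL-style workloads tolerate inter-site distances that pre-training cannot.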

According to Arista's CEO, the primary constraint on building AI infrastructure is the massive power consumption of GPUs and networks. Finding data center locations with gigawatts of available power can take 3-5 years, making energy access, not technology, the main limiting factor for industry growth.

The primary constraint on the AI boom is not chips or capital, but aging physical infrastructure. In Santa Clara, NVIDIA's hometown, fully constructed data centers are sitting empty for years simply because the local utility cannot supply enough electricity. This highlights how the pace of AI development is ultimately tethered to the physical world's limitations.

Microsoft's new data centers, like Fairwater 2, are designed for massive scale. They use high-speed networking to aggregate computing power across different sites and even regions (e.g., Atlanta and Wisconsin), enabling unprecedentedly large models to be trained as a single job.

As hyperscalers build massive new data centers for AI, the critical constraint is shifting from semiconductor supply to energy availability. The core challenge becomes sourcing enough power, raising new geopolitical and environmental questions that will define the next phase of the AI race.

Unlike rivals building massive, centralized campuses, Google leverages its advanced proprietary fiber networks to train single AI models across multiple, smaller data centers. This provides greater flexibility in site selection and resource allocation, creating a durable competitive edge in AI infrastructure.