The CEO of Excelsius argues the traditionally conservative data center sector is ill-prepared for the non-linear innovation demanded by AI. He warns that operators, struggling to keep up, may make "bad decisions" like adopting inadequate single-phase water cooling instead of future-proof two-phase liquid cooling technologies.
The standard for measuring large compute deals has shifted from number of GPUs to gigawatts of power. This provides a normalized, apples-to-apples comparison across different chip generations and manufacturers, acknowledging that energy is the primary bottleneck for building AI data centers.
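To make that conversion concrete, here is a minimal back-of-the-envelope sketch; the per-GPU wattages and PUE figure are illustrative assumptions, not vendor specifications. It shows why a gigawatt figure normalizes across chip generations where a raw GPU count does not:

```python
# Back-of-the-envelope conversion from GPU count to facility power.
# All per-GPU wattages and the PUE value below are illustrative
# assumptions for this sketch, not vendor specifications.

PUE = 1.3  # assumed power usage effectiveness (cooling/overhead multiplier)

# Assumed all-in IT power per accelerator, in watts (chip plus host share).
ASSUMED_WATTS_PER_GPU = {
    "gen_a": 700,    # e.g., an older-generation part
    "gen_b": 1200,   # e.g., a denser current-generation part
}

def facility_gigawatts(gpu_count: int, generation: str) -> float:
    """Translate a GPU count into total facility power in gigawatts."""
    it_watts = gpu_count * ASSUMED_WATTS_PER_GPU[generation]
    return it_watts * PUE / 1e9

# The same 1 GW facility hosts very different GPU counts per generation,
# which is why large deals are quoted in gigawatts rather than GPUs.
for gen, watts in ASSUMED_WATTS_PER_GPU.items():
    gpus_per_gw = int(1e9 / (watts * PUE))
    print(f"{gen}: ~{gpus_per_gw:,} GPUs per gigawatt "
          f"({facility_gigawatts(100_000, gen):.2f} GW for 100k GPUs)")
```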
The primary bottleneck for scaling AI over the next decade may be the difficulty of bringing gigawatt-scale power online to support data centers. Smart money is already focused on this challenge, which is more complex than silicon supply.
The decision by Poolside, an AI coding company, to build its own data center is a terrifying signal for the industry. It suggests that competing at the software layer now requires massive, direct investment in fixed assets, escalating the capital intensity of AI startups from millions to potentially billions and fundamentally changing the investment landscape.
The massive investment in data centers isn't just a bet on today's models. As AI becomes more efficient, smaller yet powerful models will be deployed on older hardware. This extends the serviceable life and economic return of current infrastructure, ensuring today's data centers will still generate value years from now.
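As a rough illustration of that economic argument (every figure here is a hypothetical assumption, not a reported number), a longer serviceable life directly lowers the annualized cost of a buildout:

```python
# Straight-line depreciation sketch: how a longer serviceable life
# changes the annualized cost of an AI data center buildout.
# The capex figure and lifetimes are hypothetical assumptions.

CAPEX_USD = 1_000_000_000  # assumed $1B buildout

def annualized_cost(capex: float, useful_life_years: float) -> float:
    """Straight-line depreciation: cost spread evenly over the asset's life."""
    return capex / useful_life_years

base = annualized_cost(CAPEX_USD, 4)      # written off over 4 years
extended = annualized_cost(CAPEX_USD, 6)  # older hardware kept serving smaller models

print(f"4-year life: ${base:,.0f}/year")
print(f"6-year life: ${extended:,.0f}/year "
      f"({(1 - extended / base):.0%} lower annualized cost)")
```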
Hyperscalers face a strategic dilemma: building massive data centers around current chips (e.g., the H100) risks rapid depreciation because far more efficient chips (e.g., the GB200) are imminent. This creates a 'pause' as they balance meeting current demand against future-proofing their costly infrastructure.
According to Poolside's CEO, the primary constraint in scaling AI is not chips or energy, but the 18-24 month lead time for building powered data centers. Poolside's strategy is to vertically integrate by manufacturing modular electrical, cooling, and compute 'skids' off-site, which can be trucked in and deployed incrementally.
While many focus on physical infrastructure like liquid cooling, CoreWeave's true differentiator is its proprietary software stack. This software manages the entire data center, from power to GPUs, using predictive analytics to gracefully handle component failures and maximize performance for customers' critical AI jobs.
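CoreWeave's actual stack is proprietary, but the general pattern described (telemetry-driven failure prediction plus graceful draining) can be sketched. Everything below, including the thresholds, field names, and drain mechanism, is a hypothetical illustration of the technique, not CoreWeave's software:

```python
# Hypothetical sketch of telemetry-driven failure handling for GPU nodes:
# flag nodes whose telemetry looks anomalous, then drain them gracefully
# (checkpoint and migrate jobs) before a hard failure kills a training run.
# Thresholds, field names, and the drain mechanism are all assumptions.

from dataclasses import dataclass

@dataclass
class NodeTelemetry:
    node_id: str
    gpu_temp_c: float           # hottest GPU on the node
    ecc_errors_per_hour: float  # memory error rate, a common leading indicator
    nvlink_crc_errors: int      # interconnect errors since last reset

# Illustrative thresholds; a real system would learn these from fleet history.
TEMP_LIMIT_C = 85.0
ECC_LIMIT = 5.0
NVLINK_CRC_LIMIT = 10

def predict_failure(t: NodeTelemetry) -> bool:
    """Crude stand-in for a predictive model: any leading indicator trips it."""
    return (t.gpu_temp_c > TEMP_LIMIT_C
            or t.ecc_errors_per_hour > ECC_LIMIT
            or t.nvlink_crc_errors > NVLINK_CRC_LIMIT)

def nodes_to_drain(fleet: list[NodeTelemetry]) -> list[str]:
    """Return nodes to drain: checkpoint their jobs, then cordon the node."""
    return [t.node_id for t in fleet if predict_failure(t)]

fleet = [
    NodeTelemetry("node-01", 72.0, 0.2, 0),
    NodeTelemetry("node-02", 88.5, 0.1, 0),   # running hot: drain pre-emptively
    NodeTelemetry("node-03", 70.0, 9.0, 14),  # memory/interconnect errors climbing
]
print("drain:", nodes_to_drain(fleet))  # -> drain: ['node-02', 'node-03']
```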
Satya Nadella clarifies that the primary constraint on scaling AI compute is not the availability of GPUs, but the lack of power and physical data center capacity ("warm shells") to install them in. This highlights a critical, often overlooked dependency in the AI race: energy and the speed of real estate development.
The infrastructure demands of AI have driven an exponential increase in data center scale. Two years ago, a 1-megawatt facility was considered a good size; today, a large AI data center is a 1-gigawatt facility, a 1,000-fold increase. This rapid escalation underscores the immense capital investment required to power AI.
The astronomical power and cooling needs of AI are pushing major players like SpaceX, Amazon, and Google toward space-based data centers. These would leverage constant, intense solar power for energy and radiative cooling against the near-absolute-zero background of space, sidestepping the biggest physical limitations of scaling AI on Earth.