While costs for essentials like copper and electricity are rising, cash-rich hyperscalers (Google, Meta) will continue building. The real pressure will be on smaller, capital-dependent players like CoreWeave, who may struggle to secure financing as investors scrutinize returns, leading to canceled projects on the margin.
While AI chips represent the bulk of a data center's cost ($20-25M per megawatt), the remaining roughly $10 million per megawatt for essentials like powered land, construction, and capital goods is where the real bottlenecks lie. This "picks and shovels" segment faces significant supply shortages and is viewed as a less speculative investment area, one without a bubble.
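As a rough illustration, the per-megawatt split implied by these approximate figures can be tallied directly (the numbers below are the estimates cited above, not precise industry data):

```python
# Illustrative per-MW cost breakdown for an AI data center,
# using the approximate figures cited above (all values in $M per MW).
chips_low, chips_high = 20, 25   # AI accelerators (GPUs, etc.)
picks_and_shovels = 10           # powered land, construction, capital goods

total_low = chips_low + picks_and_shovels
total_high = chips_high + picks_and_shovels

# The non-chip share is smallest when chips are at the high end.
share_low = picks_and_shovels / total_high
share_high = picks_and_shovels / total_low

print(f"Total cost: ${total_low}-{total_high}M per MW")
print(f"'Picks and shovels' share: {share_low:.0%}-{share_high:.0%}")
```

So even at the low end, roughly a third of every megawatt's cost sits in the supply-constrained, non-chip layer.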
Instead of bearing the full cost and risk of building new AI data centers, large cloud providers like Microsoft use CoreWeave for 'overflow' compute. This allows them to meet surges in customer demand without committing capital to assets that depreciate quickly and may become competitors' infrastructure in the long run.
Despite staggering announcements for new AI data centers, a primary limiting factor will be the availability of electrical power. The current growth curve of the power infrastructure cannot support all the announced plans, creating a physical bottleneck that will likely lead to project failures and investment "carnage."
The decision by Poolside, an AI coding company, to build its own data center is a terrifying signal for the industry. It suggests that competing at the software layer now requires massive, direct investment in fixed assets, escalating the capital intensity of AI startups from millions to potentially billions and fundamentally changing the investment landscape.
The trend of tech giants investing cloud credits into AI startups, which then spend those credits back on the giants' cloud platforms, faces a critical physical bottleneck. An analyst warns that expected delays in data center construction could cause this entire multi-billion-dollar financing model to "come crashing down."
Contrary to the common focus on chip manufacturing, the immediate bottleneck for building new AI data centers is energy. Factors like power availability, grid interconnects, and high-voltage equipment are the true constraints, forcing companies to explore solutions like on-site power generation.
The huge CapEx required for GPUs is fundamentally changing the business model of tech hyperscalers like Google and Meta. For the first time, they are becoming capital-intensive businesses, with spending that can outstrip operating cash flow. This shifts their financial profile from high-margin software to one more closely resembling industrial manufacturing.
The primary constraint for scaling high-frequency trading operations has shifted from minimizing latency (e.g., shorter wires) to securing electricity. Even for a firm like Hudson River Trading, which is far smaller than the tech giants, negotiating for power-grid access is the main bottleneck for building new GPU data centers.
Overwhelmed by speculative demand from the AI boom, power companies are now requiring massive upfront payments and long-term commitments. For example, Georgia Power demands a $600 million deposit for a 500-megawatt request, creating a high barrier to entry and filtering out less viable projects.
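The Georgia Power terms imply a steep per-megawatt entry cost; a quick back-of-the-envelope check on the figures above:

```python
# Back-of-the-envelope on Georgia Power's reported terms:
# a $600 million deposit against a 500-megawatt request.
deposit_usd = 600_000_000
request_mw = 500

deposit_per_mw = deposit_usd / request_mw
print(f"Deposit per MW requested: ${deposit_per_mw:,.0f}")
```

At $1.2 million per megawatt just to hold a place in the interconnection queue, speculative projects without committed capital are filtered out before construction even begins.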
As hyperscalers build massive new data centers for AI, the critical constraint is shifting from semiconductor supply to energy availability. The core challenge becomes sourcing enough power, raising new geopolitical and environmental questions that will define the next phase of the AI race.