Building on-premises GPU infrastructure for biotech AI is a capital trap. The hardware becomes obsolete within five years, turning a multi-million dollar investment into a sunk cost. Cloud providers offer the "burst capacity" needed for intensive workloads without the long-term capital risk, maintenance burden, and inflexibility of ownership.

Related Insights

The primary bear case for specialized neoclouds like CoreWeave isn't just competition from AWS or Google. A more fundamental risk is a breakthrough in GPU efficiency that commoditizes deployment, diminishing the value of the neoclouds' core competency in complex, optimized racking and setup.

Instead of bearing the full cost and risk of building new AI data centers, large cloud providers like Microsoft use CoreWeave for 'overflow' compute. This lets them meet surges in customer demand without committing capital to assets that depreciate quickly and may end up serving competitors in the long run.

A primary risk for major AI infrastructure investments is not just competition, but rapidly falling inference costs. As models become efficient enough to run on cheaper hardware, the economic justification for massive, multi-billion dollar investments in complex, high-end GPU clusters could be undermined, stranding capital.
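To make the stranded-capital mechanism concrete, here is a minimal payback-period sketch. Every figure (cluster cost, throughput, prices, opex) is an invented assumption for illustration, not data from the source:

```python
# Hypothetical payback-period sketch: how falling inference prices strand capital.
# All numbers are illustrative assumptions, not figures from the source.

CLUSTER_COST = 3e9            # up-front capex for a high-end GPU cluster ($)
TOKENS_PER_YEAR = 1e15        # annual inference throughput (tokens)

def payback_years(price_per_mtok: float, opex_per_mtok: float = 0.10) -> float:
    """Years to recoup capex at a given market price per million tokens."""
    margin = price_per_mtok - opex_per_mtok   # profit per million tokens served
    if margin <= 0:
        return float("inf")                   # capital can never be recovered
    annual_profit = margin * TOKENS_PER_YEAR / 1e6
    return CLUSTER_COST / annual_profit

# If cheaper hardware pushes the market price down roughly 10x, the payback
# period stretches far past any plausible hardware lifetime -- the capex
# is effectively stranded.
for price in (1.00, 0.50, 0.12):
    print(f"price ${price:.2f}/Mtok -> payback {payback_years(price):.1f} years")
```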

The massive investment in data centers isn't just a bet on today's models. As AI becomes more efficient, smaller yet powerful models will be deployed on older hardware. This extends the serviceable life and economic return of current infrastructure, ensuring today's data centers will still generate value years from now.

While the industry standard is a six-year depreciation schedule for data center hardware, analyst Dylan Patel warns this is risky for GPUs. Rapid year-over-year performance gains from new chip generations could render older chips economically useless long before they physically fail.
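A minimal sketch of the mismatch Patel describes, setting a six-year straight-line book schedule against a hypothetical 40% annual decline in what a GPU's compute can earn (both the purchase price and the decay rate are illustrative, not sourced figures):

```python
# Book value vs. economic value of a GPU over a six-year schedule.
# Purchase price and decay rate are illustrative assumptions, not sourced data.

PURCHASE_PRICE = 30_000      # per-GPU cost ($), hypothetical
BOOK_LIFE_YEARS = 6          # standard straight-line depreciation schedule
ECON_DECAY = 0.40            # assumed annual decline in earnable value

for year in range(BOOK_LIFE_YEARS + 1):
    book = PURCHASE_PRICE * max(0, 1 - year / BOOK_LIFE_YEARS)  # straight line
    econ = PURCHASE_PRICE * (1 - ECON_DECAY) ** year            # market reality
    print(f"year {year}: book ${book:>9,.0f}  economic ${econ:>9,.0f}")

# Wherever economic value falls below book value, the balance sheet is
# overstating the asset -- that gap is the unrecognized write-down the
# six-year convention hides.
```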

Hyperscalers face a strategic challenge: building massive data centers with current chips (e.g., H100) risks rapid depreciation, since far more efficient chips (e.g., GB200) are imminent. This creates a 'pause' as they balance fulfilling current demand against future-proofing their costly infrastructure.

Countering the narrative of rapid burnout, CoreWeave cites historical data showing a nearly 10-year service life for older NVIDIA GPUs (K80) in major clouds. Older chips remain valuable for less intensive tasks, creating a tiered system where new chips handle frontier models and older ones serve established workloads.

Unlike railroads or telecom, where infrastructure lasts for decades, the core of AI infrastructure—semiconductor chips—becomes obsolete every 3-4 years. This creates a cycle of massive, recurring capital expenditure to maintain data centers, fundamentally changing the long-term ROI calculation for the AI arms race.
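One way to see how the refresh cycle changes the ROI calculation is to compare the discounted capex stream of a long-lived asset with one that must be repurchased every four years. The discount rate, horizon, and costs below are invented for illustration:

```python
# Discounted lifetime capex: one-time infrastructure vs. a 4-year refresh cycle.
# Discount rate, horizon, and costs are illustrative assumptions.

DISCOUNT = 0.08   # annual discount rate
HORIZON = 40      # years, roughly a railroad-style asset life

def pv_of_capex(initial: float, refresh_cost: float, refresh_every: int) -> float:
    """Present value of initial capex plus periodic refreshes over the horizon."""
    pv = initial
    for year in range(refresh_every, HORIZON, refresh_every):
        pv += refresh_cost / (1 + DISCOUNT) ** year
    return pv

railroad = pv_of_capex(1_000.0, refresh_cost=0.0, refresh_every=HORIZON)
ai_dc    = pv_of_capex(1_000.0, refresh_cost=800.0, refresh_every=4)

# The same "headline" $1,000 investment carries roughly 3x the true cost
# when the core asset must be rebought every few years.
print(f"one-time asset PV:  {railroad:,.0f}")
print(f"4-year refresh PV:  {ai_dc:,.0f}")
```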

CZI's strategic focus is on expanding access to large-scale GPU clusters rather than physical lab space. This reflects a fundamental shift in biological research, where the primary capital expenditure and most critical resource is now computational power, not wet lab benches.

Responding to the AI bubble concern, IBM's CEO notes that high GPU failure rates are a design choice made in pursuit of performance. Unlike the sunk costs of past bubbles, these "stranded" hardware assets can be detuned to run at lower power, increasing their resilience and extending their useful life for other tasks.
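A minimal sketch of the detuning trade-off the quote describes. The power-performance and wear curves below are invented assumptions, not IBM or NVIDIA data; in practice a power cap is set with a tool such as `nvidia-smi -pl <watts>`:

```python
# Illustrative detuning trade-off: invented curves, not IBM or NVIDIA data.

RATED_WATTS = 700            # rated board power, hypothetical high-end GPU
BASE_LIFE_YEARS = 4.0        # assumed service life at full rated power

def detuned(cap_watts: float) -> tuple[float, float]:
    """Return (relative throughput, expected life) at a given power cap.

    Assumes throughput scales sub-linearly with power (perf ~ power^0.6)
    and that thermal/electrical stress, hence wear, scales with power
    squared. Both exponents are illustrative assumptions.
    """
    ratio = cap_watts / RATED_WATTS
    throughput = ratio ** 0.6                 # diminishing returns from power
    life = BASE_LIFE_YEARS / ratio ** 2       # cooler parts fail less often
    return throughput, life

for cap in (700, 500, 350):
    tput, life = detuned(cap)
    print(f"{cap} W cap: {tput:.0%} throughput, ~{life:.1f} year life")
```

Under these assumptions, halving the power cap costs about a third of the throughput but quadruples expected life, which is the resilience argument in the quote.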