Countering the narrative of rapid burnout, CoreWeave cites historical data showing nearly ten years of service life for older NVIDIA GPUs, such as the K80, in the major clouds. Older chips remain valuable for less intensive tasks, creating a tiered system in which new chips handle frontier models and older ones serve established workloads.

Related Insights

CoreWeave dismisses speculative analyst reports on GPU depreciation. Its metric for an asset's true value is the willingness of sophisticated buyers (hyperscalers, AI labs) to sign multi-year contracts for it. This real-world commitment is a more reliable indicator of long-term economic utility than any external model.

The sustainability of the AI infrastructure boom is debated. One view holds that GPUs lose most of their economic value within five years, making current spending speculative. The counterargument is that older chips will have a long, valuable life serving less complex models, much as mainframes did, making them a more durable capital investment.

The massive investment in data centers isn't just a bet on today's models. As AI becomes more efficient, smaller yet powerful models will be deployed on older hardware. This extends the serviceable life and economic return of current infrastructure, ensuring today's data centers will still generate value years from now.

While the industry standard is a six-year depreciation schedule for data center hardware, analyst Dylan Patel warns this is risky for GPUs. Rapid annual performance gains from new chip generations could render older chips economically useless long before they physically fail.

Hyperscalers are extending depreciation schedules for AI hardware. While this may look like "cooking the books" to inflate earnings, it's justified by the reality that even 7-to-8-year-old TPUs and GPUs are still running at 100% utilization on less complex AI tasks, making them valuable for longer and validating the accounting change.
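
As a rough illustration of why the schedule matters (with purely hypothetical figures, not any hyperscaler's actual numbers): under straight-line depreciation the purchase cost is spread evenly over the assumed useful life, so stretching that life directly shrinks the annual expense hitting the income statement.

```python
# Hypothetical straight-line depreciation sketch; all figures are illustrative.

def annual_depreciation(purchase_cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: cost spread evenly over the assumed life."""
    return purchase_cost / useful_life_years

gpu_fleet_cost = 10_000_000_000  # assumed $10B GPU purchase

for life_years in (4, 5, 6):
    expense = annual_depreciation(gpu_fleet_cost, life_years)
    print(f"{life_years}-year schedule: ${expense / 1e9:.2f}B depreciation per year")

# Stretching the schedule from 4 to 6 years drops the annual expense from
# $2.50B to about $1.67B, which is why the change lifts reported earnings;
# the accounting is only honest if the hardware really stays productive that long.
```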

NVIDIA’s business model relies on planned obsolescence. Its AI chips become obsolete every 2-3 years as new versions are released, forcing Big Tech customers into a constant, multi-billion-dollar upgrade cycle for what are effectively "perishable" assets.

The useful life of an AI chip isn't a fixed period. It ends only when a new generation offers such a significant performance and efficiency boost that it becomes more economical to replace fully paid-off, older hardware. Slower generational improvements mean longer depreciation cycles.
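
A minimal sketch of that break-even logic, using illustrative numbers rather than real GPU specs: compare the all-in cost per unit of compute of a new chip (amortized purchase price plus power) against the power-only cost of a fully paid-off older chip. Replacement becomes economical only when the generational jump is large enough for the new chip to win that comparison.

```python
# Hypothetical cost-per-compute comparison; every figure is an illustrative
# assumption, not a measured spec for any particular GPU.

HOURS_PER_YEAR = 365 * 24

def cost_per_pflop_hour(capital_cost: float, amortization_years: float,
                        power_kw: float, power_price_per_kwh: float,
                        pflops: float) -> float:
    """All-in hourly cost (amortized capital + electricity) per petaFLOP/s."""
    hourly_capital = capital_cost / (amortization_years * HOURS_PER_YEAR)
    hourly_power = power_kw * power_price_per_kwh
    return (hourly_capital + hourly_power) / pflops

# Fully paid-off older chip: no remaining capital cost, only its power bill.
old = cost_per_pflop_hour(capital_cost=0, amortization_years=1,
                          power_kw=0.7, power_price_per_kwh=0.10, pflops=0.3)

# New generation: large purchase price, but a big jump in compute per watt.
new = cost_per_pflop_hour(capital_cost=30_000, amortization_years=5,
                          power_kw=1.0, power_price_per_kwh=0.10, pflops=5.0)

print(f"paid-off older chip: ${old:.3f} per PFLOP-hour")
print(f"new generation:      ${new:.3f} per PFLOP-hour")
# When the new chip's all-in cost per unit of compute undercuts the old chip's
# electricity bill alone (as in these made-up numbers), replacing paid-off
# hardware becomes the economical choice -- that is the end of its useful life.
```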

Arguments that AI chips are viable for 5-7 years because they still function are misleading. This "sleight of hand" confuses physical durability with economic usefulness. An older chip is effectively worthless if newer models offer far better performance per dollar ('dollar per flop'), making it uncompetitive.

Accusations that hyperscalers "cook the books" by extending GPU depreciation misunderstand hardware lifecycles. Older chips remain at full utilization for less demanding tasks. High operational costs (power, cooling) provide a natural economic incentive to retire genuinely unprofitable hardware, invalidating claims of artificial earnings boosts.

Companies like CoreWeave collateralize massive loans with NVIDIA GPUs to fund their build-out. This creates a critical timeline problem: the industry must generate highly profitable AI workloads before the GPUs, which have a limited lifespan and depreciate quickly, wear out. The business model fails if valuable applications don't scale fast enough.
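
One way to see the timeline pressure is a back-of-the-envelope payback calculation (all figures below are assumptions for illustration, not CoreWeave's actual financing terms): a debt-financed GPU has to earn a minimum hourly rate over its useful life just to cover its purchase price and interest.

```python
# Back-of-the-envelope payback sketch; every number is a hypothetical
# assumption, not a figure from CoreWeave or its lenders.

def required_hourly_revenue(gpu_price: float, annual_interest_rate: float,
                            useful_life_years: float, utilization: float) -> float:
    """Hourly revenue needed to recover the GPU's price plus simple interest
    over its useful life, at a given average utilization."""
    financing_cost = gpu_price * annual_interest_rate * useful_life_years
    billable_hours = useful_life_years * 365 * 24 * utilization
    return (gpu_price + financing_cost) / billable_hours

# e.g. a $30,000 accelerator financed at 10%, expected to pay for itself
# within a 4-year useful life at 70% average utilization
rate = required_hourly_revenue(30_000, 0.10, 4, 0.70)
print(f"break-even rental rate: ${rate:.2f} per GPU-hour")
# ~$1.71/hour before power, cooling, staff, and margin -- if paying workloads
# don't scale to that level before the hardware ages out, the debt-financed
# build-out doesn't pencil out.
```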