According to CoreWeave's CEO, a GPU becomes obsolete not when a new chip is released, but when the power and space it consumes could be used for a higher-margin, newer chip. The decision is purely economic, based on the opportunity cost of electricity, not the hardware's technical viability.
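To make that rule concrete, here is a minimal Python sketch of the opportunity-cost comparison, using invented revenue and power figures rather than CoreWeave's actual numbers: the question is not whether the old GPU still earns money, but whether its watts would earn more in a newer chip.

```python
# Hypothetical figures, purely illustrative -- not CoreWeave's actual numbers.
OLD_GPU = {"name": "prior-gen", "watts": 700, "revenue_per_hour": 1.50, "power_cost_per_kwh": 0.08}
NEW_GPU = {"name": "current-gen", "watts": 1000, "revenue_per_hour": 4.00, "power_cost_per_kwh": 0.08}

def margin_per_watt(gpu: dict) -> float:
    """Hourly gross margin earned per watt of facility power the GPU occupies."""
    power_cost = gpu["watts"] / 1000 * gpu["power_cost_per_kwh"]  # electricity cost per hour
    return (gpu["revenue_per_hour"] - power_cost) / gpu["watts"]

old, new = margin_per_watt(OLD_GPU), margin_per_watt(NEW_GPU)
print(f"old: ${old:.5f} per watt-hour, new: ${new:.5f} per watt-hour")

# The old GPU is "obsolete" once its watts would earn more in the newer chip,
# even though it still runs and still turns a positive margin on its own.
if new > old:
    print("retire the old GPU: its power and rack space have a higher-value use")
```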
The standard for measuring large compute deals has shifted from number of GPUs to gigawatts of power. This provides a normalized, apples-to-apples comparison across different chip generations and manufacturers, acknowledging that energy is the primary bottleneck for building AI data centers.
When power (watts) is the primary constraint for data centers, the total cost of compute becomes secondary. The crucial metric is performance-per-watt. This gives a massive pricing advantage to the most efficient chipmakers, since customers will pay a steep premium for hardware that maximizes output from their limited power budget.
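A toy calculation of why performance-per-watt dominates under a fixed power envelope, with illustrative (not vendor) power and throughput numbers: hold the watts constant and total output scales with efficiency, not with how many units you buy or what each one costs.

```python
# Illustrative only: assumed per-accelerator power draw and relative throughput.
POWER_BUDGET_MW = 100  # a fixed data-center power envelope

chips = {
    # name: (watts per accelerator, relative throughput per accelerator)
    "gen_n":   (700,  1.0),
    "gen_n+1": (1000, 2.5),
}

for name, (watts, perf) in chips.items():
    count = int(POWER_BUDGET_MW * 1_000_000 / watts)  # how many fit in the envelope
    perf_per_watt = perf / watts
    total_output = count * perf
    print(f"{name}: {count:,} units, perf/W={perf_per_watt:.4f}, total output={total_output:,.0f}")

# Under a fixed number of watts, total output tracks perf/W, not unit price,
# which is why the most efficient chip can command a premium.
```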
Contrary to typical hardware depreciation, GPUs like NVIDIA's H100 are becoming more valuable over time. This is because newer, more efficient AI models can generate significantly more output and value on the same hardware, tying the GPU's worth to its utility rather than its age.
While the industry standard is a six-year depreciation schedule for data center hardware, analyst Dylan Patel warns this is risky for GPUs. Rapid year-over-year performance gains from new chip generations could render older chips economically useless long before they physically fail.
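A sketch of that tension, with assumed figures: straight-line accounting spreads the purchase cost evenly over six years, but if each new chip generation erodes the old one's economic value faster than that, the book value overstates what the hardware is really worth.

```python
# Assumed figures for illustration only.
purchase_price = 30_000          # GPU cost, USD
book_years = 6                   # industry-standard straight-line schedule
annual_economic_decay = 0.40     # assumed value lost each year to newer generations

for year in range(1, book_years + 1):
    book_value = purchase_price * max(0, 1 - year / book_years)          # straight-line
    economic_value = purchase_price * (1 - annual_economic_decay) ** year
    print(f"year {year}: book ${book_value:>8,.0f} vs economic ${economic_value:>8,.0f}")

# If economic value falls faster than the books assume, the gap is the risk Patel describes.
```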
Contrary to the belief that AI chips quickly become obsolete, CoreWeave's CEO argues their value holds, citing average five-year client contracts as proof. Older chips like the A100 have even appreciated in price as new use cases emerge, making rapid depreciation a myth.
The useful life of an AI chip isn't a fixed period. It ends only when a new generation offers such a significant performance and efficiency boost that it becomes more economical to replace fully paid-off, older hardware. Slower generational improvements mean longer depreciation cycles.
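One way to make that replacement threshold concrete, again with made-up numbers: a fully paid-off chip carries only operating costs, so a newer chip must beat the old one's output by enough to also cover its own amortized purchase price before swapping pays off.

```python
# Illustrative assumptions only.
HOURS_PER_YEAR = 8760

old = {"revenue_hr": 1.50, "opex_hr": 0.30, "capex_yr": 0}        # fully depreciated
new = {"revenue_hr": 3.00, "opex_hr": 0.40, "capex_yr": 9_000}    # still being paid off

def annual_profit(chip: dict) -> float:
    """Yearly profit: operating margin minus amortized purchase cost."""
    return (chip["revenue_hr"] - chip["opex_hr"]) * HOURS_PER_YEAR - chip["capex_yr"]

print(f"keep old hardware : ${annual_profit(old):>9,.0f}/yr")
print(f"swap in new chip  : ${annual_profit(new):>9,.0f}/yr")

# Replacement only makes sense once the new generation's uplift clears this bar;
# slower generational gains push the crossover out, lengthening useful life.
```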
Contrary to the assumption that customers only want the latest chips, NVIDIA's older H200s are still being heavily purchased. This is because they fit the power profile of older data centers that cannot support the massive energy draw of newer systems, making them a more practical and immediately profitable choice for many operators.
Countering the narrative of rapid burnout, CoreWeave cites historical data showing a nearly 10-year service life for older NVIDIA GPUs (K80) in major clouds. Older chips remain valuable for less intensive tasks, creating a tiered system where new chips handle frontier models and older ones serve established workloads.
Arguments that AI chips are viable for 5-7 years simply because they still function are misleading. This "sleight of hand" conflates physical durability with economic usefulness. An older chip is effectively worthless if newer models deliver far better performance per dollar (a lower cost per FLOP), making it uncompetitive.
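A back-of-the-envelope version of the cost-per-FLOP argument, with invented prices and throughputs: a chip that still runs perfectly well can be uncompetitive purely on cost per unit of compute.

```python
# Invented numbers for illustration, not real pricing or benchmark data.
chips = {
    "older_gen": {"price_usd": 10_000, "pflops": 1.0},   # still physically fine
    "newer_gen": {"price_usd": 30_000, "pflops": 10.0},
}

for name, c in chips.items():
    dollars_per_pflop = c["price_usd"] / c["pflops"]
    print(f"{name}: ${dollars_per_pflop:,.0f} per PFLOP/s of capacity")

# Even at a third of the sticker price, the older chip costs roughly 3x more per unit
# of compute -- the economic (not physical) sense in which it can be "worthless."
```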
Accusations that hyperscalers "cook the books" by extending GPU depreciation misunderstand hardware lifecycles. Older chips remain at full utilization for less demanding tasks. High operational costs (power, cooling) provide a natural economic incentive to retire genuinely unprofitable hardware, invalidating claims of artificial earnings boosts.