We scan new podcasts and send you the top 5 insights daily.
Contrary to the belief that AI chips quickly become obsolete, CoreWeave's CEO argues their value holds, citing average five-year client contracts as evidence. Older chips like the A100 have even appreciated in price as new use cases emerge, making rapid depreciation a myth.
CoreWeave dismisses speculative analyst reports on GPU depreciation. Their metric for an asset's true value is the willingness of sophisticated buyers (hyperscalers, AI labs) to sign multi-year contracts for it. This real-world commitment is a more reliable indicator of long-term economic utility than any external model.
Unlike typical computer hardware that depreciates rapidly, H100 GPUs are trading above their launch price in secondary markets. This market anomaly, driven by the extreme and sustained compute shortage for AI, completely inverts traditional financial models for hardware assets.
According to CoreWeave's CEO, a GPU becomes obsolete not when a new chip is released, but when the power and space it consumes could be used for a higher-margin, newer chip. The decision is purely economic, based on the opportunity cost of electricity, not the hardware's technical viability.
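The retirement test described above can be sketched as a simple opportunity-cost comparison. All figures and the function name below are hypothetical illustrations, not CoreWeave's actual model:

```python
# Illustrative sketch of the opportunity-cost retirement test.
# All numbers are hypothetical assumptions, not CoreWeave figures.

def keep_old_gpu(old_revenue_per_kw: float, new_revenue_per_kw: float,
                 swap_cost_per_kw: float, horizon_years: float) -> bool:
    """Return True if the old GPU should stay racked.

    The old chip is retired only when the extra margin a newer chip
    would earn from the same power/space budget outweighs the cost
    of swapping it in over the planning horizon.
    """
    extra_margin = (new_revenue_per_kw - old_revenue_per_kw) * horizon_years
    return extra_margin <= swap_cost_per_kw

# Hypothetical: old chip earns $4k per kW-year, new chip $5k per kW-year,
# swapping costs $3k per kW, planned over a 2-year horizon.
print(keep_old_gpu(4000, 5000, 3000, 2))  # extra margin 2000 <= 3000 -> True
```

The point of the sketch: the chip's release date never appears in the decision; only the margin earned per unit of power and space does.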
The sustainability of the AI infrastructure boom is debated. One view is that GPUs depreciate rapidly in five years, making current spending speculative. The counterargument is that older chips will have a long, valuable life serving less complex models, akin to mainframes, making them a more durable capital investment.
To finance AI infrastructure without massive equity dilution, firms raise debt collateralized by guaranteed, long-term purchase contracts from investment-grade customers. The GPUs themselves serve only as secondary collateral, making the financing far less risky than it appears and countering common criticisms that it is speculative.
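Lending against contracted cash flows rather than hardware can be illustrated with a standard present-value calculation. The figures here are invented for illustration:

```python
# Hedged sketch: sizing debt against contracted cash flows rather than
# against the hardware itself. Figures are illustrative, not real deals.

def pv_of_contract(annual_payment: float, years: int, discount_rate: float) -> float:
    """Present value of a fixed annual purchase commitment."""
    return sum(annual_payment / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Hypothetical: $100M/year for 5 years from an investment-grade customer,
# discounted at 8%. A lender advances against this PV, with the GPUs
# themselves only as secondary collateral.
pv = pv_of_contract(100e6, 5, 0.08)
print(round(pv / 1e6, 1))  # ~399.3 ($M)
```

Because the primary collateral is a contractual payment stream, the loan's risk profile tracks the customer's credit quality, not the resale value of the chips.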
Contrary to typical hardware depreciation, GPUs like NVIDIA's H100 are becoming more valuable over time. This is because newer, more efficient AI models can generate significantly more output and value on the same hardware, tying the GPU's worth to its utility rather than its age.
Hyperscalers are extending depreciation schedules for AI hardware. While this may look like "cooking the books" to inflate earnings, it's justified by the reality that even 7- to 8-year-old TPUs and GPUs are still running at 100% utilization for less complex AI tasks, making them valuable for longer and validating the accounting change.
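The earnings effect of stretching a schedule is simple straight-line arithmetic. The fleet cost and schedule lengths below are hypothetical, chosen only to show the mechanics:

```python
# Sketch of how extending a straight-line depreciation schedule changes
# annual expense. Numbers are illustrative, not any hyperscaler's actuals.

def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line annual depreciation expense."""
    return cost / useful_life_years

cost = 10e9  # hypothetical $10B GPU fleet
old_expense = annual_depreciation(cost, 4)  # 4-year schedule: $2.5B/yr
new_expense = annual_depreciation(cost, 6)  # 6-year schedule: ~$1.67B/yr
print(round((old_expense - new_expense) / 1e9, 2))  # 0.83 ($B/yr pre-tax lift)
```

The same total cost is expensed either way; only its timing shifts, which is why the change is defensible if, and only if, the hardware genuinely stays productive that long.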
The useful life of an AI chip isn't a fixed period. It ends only when a new generation offers such a significant performance and efficiency boost that it becomes more economical to replace fully paid-off, older hardware. Slower generational improvements mean longer depreciation cycles.
Countering the narrative of rapid burnout, CoreWeave cites historical data showing a nearly 10-year service life for older NVIDIA GPUs (K80) in major clouds. Older chips remain valuable for less intensive tasks, creating a tiered system where new chips handle frontier models and older ones serve established workloads.
Accusations that hyperscalers "cook the books" by extending GPU depreciation misunderstand hardware lifecycles. Older chips remain at full utilization for less demanding tasks. High operational costs (power, cooling) provide a natural economic incentive to retire genuinely unprofitable hardware, invalidating claims of artificial earnings boosts.