Unlike typical computer hardware that depreciates rapidly, H100 GPUs are trading above their launch price in secondary markets. This market anomaly, driven by the extreme and sustained compute shortage for AI, completely inverts traditional financial models for hardware assets.
CoreWeave dismisses speculative analyst reports on GPU depreciation. Their metric for an asset's true value is the willingness of sophisticated buyers (hyperscalers, AI labs) to sign multi-year contracts for it. This real-world commitment is a more reliable indicator of long-term economic utility than any external model.
The current AI moment is unique because demand outstrips supply so dramatically that even previous-generation chips and models remain valuable. They are perfectly suited for running smaller models for simpler, high-volume applications like voice transcription, creating a broad-based boom across the entire hardware and model stack.
The sustainability of the AI infrastructure boom is debated. One view holds that GPUs fully depreciate within about five years, making today's spending speculative. The counterargument is that older chips will have a long, valuable second life serving less complex models, much as mainframes did, making them a more durable capital investment.
Contrary to typical hardware depreciation, GPUs like NVIDIA's H100 are becoming more valuable over time. This is because newer, more efficient AI models can generate significantly more output and value on the same hardware, tying the GPU's worth to its utility rather than its age.
While the industry standard is a six-year depreciation schedule for data center hardware, analyst Dylan Patel warns this is risky for GPUs. Rapid annual performance gains from new chip generations could render older silicon economically obsolete long before it physically fails.
Hyperscalers are extending depreciation schedules for AI hardware. While this may look like "cooking the books" to inflate earnings, it's justified by the reality that even 7-8 year old TPUs and GPUs are still running at 100% utilization for less complex AI tasks, making them valuable for longer and validating the accounting change.
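The earnings impact of an extended schedule is simple arithmetic: under straight-line depreciation, stretching the useful life spreads the same capital cost over more years, lowering the annual expense hit. A minimal sketch, using entirely hypothetical fleet and unit-cost figures (not any hyperscaler's actual numbers):

```python
# Hypothetical illustration of how extending a depreciation schedule
# lowers the annual expense recognized against earnings.
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line annual depreciation expense, assuming zero salvage value."""
    return cost / useful_life_years

# Assumed fleet: 100,000 GPUs at a hypothetical $30k each = $3B of capex.
fleet_cost = 100_000 * 30_000

for years in (4, 6, 8):
    expense = annual_depreciation(fleet_cost, years)
    print(f"{years}-year schedule: ${expense / 1e9:.2f}B/year")
# 4-year schedule: $0.75B/year
# 6-year schedule: $0.50B/year
# 8-year schedule: $0.38B/year
```

Moving from a four- to an eight-year schedule halves the annual charge on the same assets, which is why the change looks like earnings inflation unless, as the argument above holds, the hardware really does stay productive that long.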
The useful life of an AI chip isn't a fixed period. It ends only when a new generation offers such a significant performance and efficiency boost that it becomes more economical to replace fully paid-off, older hardware. Slower generational improvements mean longer depreciation cycles.
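One way to frame that replacement decision: a fully paid-off older chip is worth keeping as long as its operating cost per unit of work stays below the all-in (operating plus amortized purchase) cost per unit of work on a new chip. A hedged sketch with made-up throughput and cost figures, not vendor data:

```python
# Hypothetical replacement-economics comparison: keep the old chip while
# its cost per unit of work beats the new chip's all-in cost per unit.
def cost_per_unit(opex_per_hour: float,
                  amortized_capex_per_hour: float,
                  relative_throughput: float) -> float:
    """Total hourly cost divided by work delivered per hour."""
    return (opex_per_hour + amortized_capex_per_hour) / relative_throughput

# Old GPU: fully depreciated (capex = 0), baseline 1x throughput, $0.60/hr to run.
old = cost_per_unit(0.60, 0.0, 1.0)   # 0.60 per unit of work
# New GPU: assumed 3x throughput, $0.50/hr to run, $3.40/hr amortized purchase.
new = cost_per_unit(0.50, 3.40, 3.0)  # 1.30 per unit of work
print(old < new)  # True: the paid-off chip still wins under these assumptions
```

Under these assumed numbers the old chip stays in service; only a large enough generational jump in throughput per dollar flips the comparison, which is exactly why slower generational improvements stretch depreciation cycles.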
Countering the narrative of rapid burnout, CoreWeave cites historical data showing a nearly 10-year service life for older NVIDIA GPUs (K80) in major clouds. Older chips remain valuable for less intensive tasks, creating a tiered system where new chips handle frontier models and older ones serve established workloads.
Unlike the dot-com era where capital built unused "dark fiber," today's AI funding boom is different. Every dollar spent on GPUs is immediately consumed due to insatiable demand. This prevents a supply overhang, making the "circular funding" model more sustainable for now.
Companies like CoreWeave collateralize massive loans with NVIDIA GPUs to fund their build-out. This creates a critical timeline problem: the industry must generate highly profitable AI workloads before the GPUs, which have a limited lifespan and depreciate quickly, wear out. The business model fails if valuable applications don't scale fast enough.
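The timeline constraint above can be expressed as a payback calculation: the GPU must earn back its purchase price, net of operating costs, before its useful life ends. A minimal sketch with illustrative figures (rental rate, utilization, and opex are assumptions, not CoreWeave's actual economics):

```python
# Hypothetical payback-period model for a rented GPU. The business model
# described above works only if payback_years < the chip's useful life.
def payback_years(gpu_cost: float, hourly_rate: float,
                  utilization: float, hourly_opex: float) -> float:
    """Years until cumulative rental margin covers the purchase price."""
    hourly_margin = hourly_rate * utilization - hourly_opex
    if hourly_margin <= 0:
        return float("inf")  # costs exceed revenue: the GPU never pays back
    return gpu_cost / (hourly_margin * 24 * 365)

# Assumed: a $30k GPU rented at $2.50/hr, 80% utilized, $0.50/hr opex.
print(f"{payback_years(30_000, 2.50, 0.80, 0.50):.1f} years")  # 2.3 years
```

With these assumed inputs the chip pays back in about 2.3 years, comfortably inside a five-to-six-year life; but the same formula shows how quickly the model breaks if rental rates or utilization fall while the hardware clock keeps running.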