Investor Michael Burry argues that hyperscalers overstate profits by depreciating GPUs over 5-6 years when their economic usefulness is only 2-3 years due to rapid technological advances. This accounting practice, which Burry calls a "common fraud," masks true costs and inflates valuations.
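
A minimal sketch of the straight-line arithmetic shows why the schedule length matters; the capex figure below is invented for illustration, not any company's actual spend:

```python
# Straight-line depreciation: annual expense = cost / useful life.
# The capex figure is hypothetical.

gpu_capex = 30_000_000_000  # $30B of GPU purchases (illustrative)

for life_years in (2, 3, 5, 6):
    annual_expense = gpu_capex / life_years
    print(f"{life_years}-year life: ${annual_expense / 1e9:.1f}B/year depreciation")

# A 6-year schedule books $5.0B/year against this capex; a 2-year
# schedule books $15.0B/year. The gap flows straight into reported
# operating income, which is the core of Burry's complaint.
```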

Related Insights

Michael Burry's thesis is that aggressive stock-based compensation (SBC) at companies like Nvidia significantly distorts their valuations. When SBC is counted as a true owner's cost, a stock that appears to trade at 30 times earnings may actually trade closer to 60 times, mirroring dot-com era accounting concerns.
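
The arithmetic behind that claim is simple. Here is a minimal sketch following the framing above, with every figure invented for illustration:

```python
# SBC-adjusted valuation sketch; all numbers are hypothetical.
# Treating stock-based compensation as a real owner's cost shrinks
# the earnings denominator and raises the effective P/E multiple.

market_cap = 3_000_000_000_000       # $3T market cap (illustrative)
reported_earnings = 100_000_000_000  # $100B reported net income
sbc = 50_000_000_000                 # $50B stock-based compensation

reported_pe = market_cap / reported_earnings          # 30x
adjusted_pe = market_cap / (reported_earnings - sbc)  # 60x

print(f"Reported P/E: {reported_pe:.0f}x; SBC-adjusted: {adjusted_pe:.0f}x")
```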

The sustainability of the AI infrastructure boom is debated. One view holds that GPUs lose most of their economic value within five years, making current spending speculative. The counterargument is that older chips will have a long, valuable life serving less complex models, much as mainframes did, making them a more durable capital investment.

While the industry standard is a six-year depreciation schedule for data center hardware, analyst Dylan Patel warns this is risky for GPUs. Rapid year-over-year performance gains from new chip generations could render older silicon economically obsolete long before it physically fails.

Hyperscalers are extending depreciation schedules for AI hardware. While this may look like "cooking the books" to inflate earnings, it is supported by the reality that even 7-8 year old TPUs and GPUs still run at 100% utilization on less complex AI tasks; their longer useful lives validate the accounting change.

The debate over AI chip depreciation highlights a flaw in traditional accounting. GAAP was designed for physical assets with predictable lifecycles, not for digital infrastructure like GPUs whose value creation is dynamic. This mismatch leads to accusations of financial manipulation where firms are simply following outdated rules.

The useful life of an AI chip isn't a fixed period. It ends only when a new generation offers such a significant performance and efficiency boost that it becomes more economical to replace fully paid-off, older hardware. Slower generational improvements mean longer depreciation cycles.
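
That replacement threshold can be written as a simple cost comparison. The sketch below uses invented figures: a paid-off chip's marginal cost is operating expense only, while a new chip must cover amortized capex plus opex per unit of work:

```python
# Replacement test with hypothetical numbers. Keep the paid-off chip
# until the new generation's all-in cost per unit of compute drops
# below the old chip's operating cost per unit of compute.

old_opex = 1.50   # $/hour to power and cool a fully depreciated GPU
old_perf = 1.0    # normalized throughput

new_capex = 2.00  # amortized purchase cost, $/hour
new_opex = 1.20   # $/hour (newer chips tend to be more efficient)
new_perf = 3.0    # 3x the old generation's throughput

old_cost = old_opex / old_perf                # $1.50 per unit of work
new_cost = (new_capex + new_opex) / new_perf  # ~$1.07 per unit of work

print(f"Old: ${old_cost:.2f}/unit, new: ${new_cost:.2f}/unit, "
      f"replace: {new_cost < old_cost}")

# At only a 1.8x generational gain, new_cost would be ~$1.78/unit and
# the paid-off hardware stays in service: slower improvements really
# do mean longer economic lives.
```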

Some tech companies have doubled the depreciable life of their AI hardware (e.g., from 3 to 6 years) for accounting purposes. This inflates reported earnings, but it contradicts the economic reality that rapid innovation is shortening the chips' actual useful life, creating a significant red flag for earnings quality.
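
Under straight-line accounting, such a change is applied prospectively: the remaining book value is spread over the new remaining life. A sketch with invented numbers shows the earnings effect:

```python
# Effect of doubling an asset's depreciable life from 3 to 6 years
# after year 1. All figures are hypothetical.

cost = 9_000_000_000               # $9B of hardware
year1 = cost / 3                   # $3.0B expense under the 3-year life
book_value = cost - year1          # $6.0B remaining after year 1

kept_year2 = book_value / (3 - 1)      # $3.0B if the 3-year life is kept
extended_year2 = book_value / (6 - 1)  # $1.2B after extending to 6 years

boost = kept_year2 - extended_year2    # $1.8B pre-tax earnings boost
print(f"Year-2 expense falls from ${kept_year2/1e9:.1f}B to "
      f"${extended_year2/1e9:.1f}B, lifting pre-tax earnings by ${boost/1e9:.1f}B")
```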

While the current AI phase is dominated by capital spending, a catalyst for a downturn may emerge as the depreciation and amortization charges on this hardware flow through income statements. Unlike long-lived infrastructure such as railroads, short-lived tech assets will create a significant financial drag within a few years.

Arguments that AI chips are viable for 5-7 years simply because they still function are misleading. This "sleight of hand" confuses physical durability with economic usefulness. An older chip is effectively worthless if newer models deliver far better performance per dollar (cost per FLOP), rendering it uncompetitive.
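
A back-of-envelope cost-per-FLOP comparison, with invented specs for both parts, shows how a chip that still runs can nonetheless be uncompetitive:

```python
# "Dollar per FLOP" sketch; both accelerators and their specs are invented.

old_price, old_tflops = 10_000, 300    # older accelerator
new_price, new_tflops = 25_000, 2_000  # newer accelerator

old_cost = old_price / old_tflops   # ~$33.3 per TFLOP
new_cost = new_price / new_tflops   # $12.5 per TFLOP

# The old card still works, but each unit of performance costs ~2.7x
# more, so rational buyers route capital to the newer part.
print(f"Old: ${old_cost:.1f}/TFLOP, new: ${new_cost:.1f}/TFLOP")
```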

Accusations that hyperscalers "cook the books" by extending GPU depreciation misunderstand hardware lifecycles. Older chips remain at full utilization for less demanding tasks. High operational costs (power, cooling) provide a natural economic incentive to retire genuinely unprofitable hardware, invalidating claims of artificial earnings boosts.