George Hotz Argues Trained AI Models Are History's Fastest Depreciating Assets

According to George Hotz, trained AI models are the fastest-depreciating assets ever created. A state-of-the-art model that cost $100M to train can be surpassed in months, making its value plummet. This economic reality suggests that withholding a model in the name of "safety" also serves to generate hype before its competitive edge disappears.
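
To make the depreciation math concrete, here is a minimal Python sketch. The $100M training cost comes from the discussion above; the six-month value half-life is purely an illustrative assumption, not a measured figure.

    # Minimal sketch of Hotz's depreciation claim. TRAINING_COST is from
    # the discussion; HALF_LIFE_MONTHS is an illustrative assumption.
    TRAINING_COST = 100e6    # dollars sunk into training the model
    HALF_LIFE_MONTHS = 6     # assumed: value halves every 6 months as rivals surpass it

    def model_value(months_since_release: float) -> float:
        """Competitive value of the model as newer models overtake it."""
        return TRAINING_COST * 0.5 ** (months_since_release / HALF_LIFE_MONTHS)

    for m in (0, 6, 12, 24):
        print(f"month {m:2d}: ${model_value(m) / 1e6:.1f}M")
    # month  0: $100.0M / month  6: $50.0M / month 12: $25.0M / month 24: $6.2M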

Related Insights

Despite being a commodity business with high costs and low defensibility, AI foundation models command massive valuations. They function as a 'hope' asset: investors park capital in them based on narrative rather than financial fundamentals, much as gold is used in uncertain times.

The hosts challenge the conventional accounting of AI training runs as R&D expense (OpEx). They propose instead viewing a trained model as a capital asset (CapEx) with a multi-year lifespan, capable of generating revenue like a profitable mini-company. This reframing matters for valuation, since a company could hold a long tail of profitable legacy models serving niche user bases.
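
A toy calculation shows why the framing matters. All figures below are hypothetical, chosen only to illustrate how the same model looks under OpEx versus CapEx treatment:

    # Hypothetical numbers throughout; this illustrates the hosts'
    # CapEx framing, not any lab's real accounting.
    TRAINING_COST = 100e6                 # capitalized at release under the CapEx view
    LIFESPAN_YEARS = 4                    # assumed useful life of the model
    ANNUAL_INFERENCE_GROSS_PROFIT = 30e6  # revenue minus serving costs

    annual_depreciation = TRAINING_COST / LIFESPAN_YEARS  # straight-line: $25M/yr

    # OpEx view: the full training cost lands in year one as a loss.
    opex_year_one = ANNUAL_INFERENCE_GROSS_PROFIT - TRAINING_COST
    # CapEx view: the model is a mini-company earning a steady profit over its life.
    capex_per_year = ANNUAL_INFERENCE_GROSS_PROFIT - annual_depreciation

    print(f"OpEx view, year 1:    {opex_year_one:+,.0f}")   # -70,000,000
    print(f"CapEx view, per year: {capex_per_year:+,.0f}")  # +5,000,000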

The ability to generate software with AI is like getting newly printed money before inflation hits. For a limited time, those who can leverage AI to build software cheaply have a massive advantage before the market reprices the value of software development downwards for everyone.

Creating frontier AI models is incredibly expensive, yet their value depreciates rapidly as they are quickly replicated by lower-cost open-source alternatives. This forces model providers to evolve into more defensible application companies to survive.

While the industry standard is a six-year depreciation schedule for data center hardware, analyst Dylan Patel warns this is risky for GPUs. Rapid annual performance gains from each new chip generation could render older GPUs economically useless long before they physically fail.
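
A rough sketch of the gap Patel describes, assuming a hypothetical $30,000 GPU and (as a pure assumption) a 1.5x yearly gain in performance per dollar from new chips:

    # Book value on a six-year straight-line schedule vs. a crude
    # "economic value" that falls as newer chips improve perf per dollar.
    # The price and the 1.5x annual gain are assumptions for illustration.
    PURCHASE_PRICE = 30_000
    BOOK_LIFE_YEARS = 6
    PERF_PER_DOLLAR_GAIN = 1.5   # assumed yearly improvement of new chips

    for year in range(7):
        book = PURCHASE_PRICE * max(0.0, 1 - year / BOOK_LIFE_YEARS)
        # What a rational buyer would pay, given newer chips deliver
        # PERF_PER_DOLLAR_GAIN times more performance per dollar each year:
        economic = PURCHASE_PRICE / PERF_PER_DOLLAR_GAIN ** year
        print(f"year {year}: book ${book:9,.0f}   economic ${economic:9,.0f}")
    # By year 4 the books still carry $10,000 per GPU while its
    # competitive value is under $6,000.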

AI labs are in a paradoxical financial state: individual models can generate healthy gross margins from inference, yet the parent company operates at a loss, because training the next, more powerful model requires massive and exponentially growing R&D spending.
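
In toy numbers (all hypothetical, including the assumption that each training run costs roughly 3x the last), the paradox looks like this:

    # Each shipped model earns a positive gross margin, but the lab as a
    # whole loses money funding the next, larger training run.
    inference_revenue = 500e6
    serving_cost = 300e6
    model_gross_profit = inference_revenue - serving_cost   # +$200M

    last_training_cost = 100e6
    COST_GROWTH = 3.0                                       # assumed generation-over-generation
    next_training_cost = last_training_cost * COST_GROWTH   # $300M

    print(f"model-level gross profit: {model_gross_profit:+,.0f}")                       # +200,000,000
    print(f"company operating result: {model_gross_profit - next_training_cost:+,.0f}")  # -100,000,000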

Arguments that AI chips are viable for 5-7 years because they still function are misleading. This "sleight of hand" conflates physical durability with economic usefulness. An older chip is effectively worthless once newer chips deliver exponentially better performance per dollar (a lower 'dollar per FLOP'), making it uncompetitive.
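
The arithmetic behind that argument is simple. With hypothetical figures, the most a rational buyer would pay for a working older chip is pinned to the newest chip's dollar-per-FLOP:

    # "Still functions" vs. "still competitive": the old chip's price
    # ceiling is set by the new chip's cost per unit of performance.
    # All figures are hypothetical.
    new_price = 30_000.0
    new_flops = 2.0e15     # performance of the current generation
    old_flops = 5.0e14     # the old chip still runs fine, at 1/4 the speed

    new_dollars_per_flop = new_price / new_flops
    # Matching the new chip's dollar-per-FLOP caps the old chip's value;
    # its higher power and rack costs only push the real number lower.
    old_price_ceiling = old_flops * new_dollars_per_flop
    print(f"old chip price ceiling: ${old_price_ceiling:,.0f}")  # $7,500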

The AI landscape is uniquely challenging due to the rapid depreciation of both models (new ones top leaderboards weekly) and hardware (Nvidia launched three new SKUs in one year). This creates a constant, complex management burden, justifying the need for platforms that abstract away these choices.

Despite billions in funding, large AI models face a difficult path to profitability. The immense training cost is undercut by competitors creating similar models for a fraction of the price and, more critically, by the ability of others to reverse-engineer existing models and extract their weights, eroding any competitive moat.

Contrary to the 'winner-takes-all' narrative, the rapid pace of innovation in AI is leading to a different outcome. As rival labs quickly match or exceed each other's model capabilities, the underlying Large Language Models (LLMs) risk becoming commodities, making it difficult for any single player to justify stratospheric valuations long-term.
