Frontier models are enabling the creation of specialized, cheap Small Language Models (SLMs). As these SLMs become 'good enough' for countless vertical tasks (e.g., legal, accounting), they could collapse the market value and demand for the very frontier models that created them, leading to a hyper-deflationary cycle.

Related Insights

Beyond simple productivity gains, AI will eliminate the need for entire service-based transactions, such as paying for basic legal documents or second medical opinions. Substituting paid services with free AI output acts as a direct deflationary headwind, an effect that runs counter to the typical AI-fueled growth narrative.

Doug from Semi Analysis argues that the primary deflationary threat isn't just cheaper tokens, but the emergence of low-end models that can commoditize entire AI-powered solutions, creating a race to the bottom that erodes pricing power for everyone.

Creating frontier AI models is incredibly expensive, yet their value depreciates rapidly as they are quickly copied or replicated by lower-cost open-source alternatives. This forces model providers to evolve into more defensible application companies to survive.

For most enterprise tasks, massive frontier models are overkill—a "bazooka to kill a fly." Smaller, domain-specific models are often more accurate for targeted use cases, significantly cheaper to run, and more secure. They focus on being the "best-in-class employee" for a specific task, not a generalist.

If AI makes intelligence cheap and universally available, its economic value may collapse. This theory suggests that selling raw AI models could become a low-margin, utility-like business. Profitability will depend on building moats through specialized applications or regulatory capture, not on selling base intelligence.

As enterprises scale AI, the high inference costs of frontier models become prohibitive. The strategic trend is to use large models for novel tasks, then shift 90% of recurring, common workloads to specialized, cost-effective Small Language Models (SLMs). This architectural shift dramatically improves both speed and cost.
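The routing pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual implementation: the model names, cost figures, and the "first occurrence means novel" heuristic are all assumptions made for the example.

```python
# Hypothetical sketch of the frontier-to-SLM routing pattern: send novel
# task types to a frontier model, then shift recurring workloads to a
# cheaper specialized SLM. Names and per-token costs are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelChoice:
    name: str
    cost_per_1k_tokens: float  # assumed USD pricing, for illustration only


FRONTIER = ModelChoice("frontier-large", 0.0150)
SLM = ModelChoice("vertical-slm", 0.0005)


def route(task_type: str, seen_task_types: set) -> ModelChoice:
    """Use the frontier model the first time a task type appears;
    once the workload is recurring, route it to the SLM."""
    if task_type in seen_task_types:
        return SLM
    seen_task_types.add(task_type)
    return FRONTIER


seen = set()
calls = ["contract-review"] * 10  # one novel call, nine recurring
choices = [route(t, seen) for t in calls]
recurring_share = choices.count(SLM) / len(choices)  # 9 of 10 calls hit the SLM
```

Under these assumptions the recurring share lands at the 90% figure from the paragraph above, and the cost asymmetry (0.0150 vs 0.0005 per thousand tokens here) is what makes the shift compelling at scale.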

The common goal of increasing AI model efficiency could have a paradoxical outcome. If AI performance becomes radically cheaper ("too cheap to meter"), it could devalue the massive investments in compute and data center infrastructure, creating a financial crisis for the very companies that enabled the boom.

The massive capital expenditure to train a frontier AI model becomes nearly worthless in months as competitors release superior models. This makes trained models a uniquely fast-depreciating asset, creating immense pressure on labs to monetize quickly through API access or investor hype before their technological advantage evaporates completely.

The true commercial impact of AI will likely come from small, specialized "micro models" solving boring, high-volume business tasks. While highly valuable, these models are cheap to run and cannot economically justify the current massive capital expenditure on AGI-focused data centers.

Contrary to the 'winner-takes-all' narrative, the rapid pace of innovation in AI is leading to a different outcome. As rival labs quickly match or exceed each other's model capabilities, the underlying Large Language Models (LLMs) risk becoming commodities, making it difficult for any single player to justify stratospheric valuations long-term.