We scan new podcasts and send you the top 5 insights daily.
While the unit cost of AI inference has plummeted 50x, overall spending on AI is surging. This is a textbook example of Jevons paradox, where radical efficiency gains lead to increased consumption and higher total expenditure as new applications become economically viable.
The cost for a given level of AI capability has decreased by a factor of 100 in just one year. This radical deflation in the price of intelligence requires a complete rethinking of business models and future strategies, as intelligence becomes an abundant, cheap commodity.
The comparison of the AI hardware buildout to the dot-com "dark fiber" bubble is flawed because there are no "dark GPUs"—all compute is being used. As hardware efficiency improves and token costs fall (Jevons paradox), it will unlock countless new AI applications, ensuring that demand continues to absorb all available supply.
The cost of a fixed level of AI capability (e.g., GPT-4 level) has dropped 100-1000x, yet overall enterprise spend keeps rising. Applications now use frontier models with massive contexts and multi-step agentic workflows, creating huge multipliers on token usage that drive up total costs.
While the per-unit cost of using AI has plummeted, total enterprise spending has soared. This is a classic example of the Jevons paradox: efficiency gains and lower prices are unlocking entirely new use cases that were previously uneconomical, leading to a net increase in overall consumption and total expenditure.
The cost of AI, priced in "tokens by the drink," is falling dramatically. All inputs are on a downward cost curve, leading to a hyper-deflationary effect on the price of intelligence. This, in turn, fuels massive demand elasticity as more use cases become economically viable.
While the cost to achieve a fixed capability level (e.g., GPT-4 at launch) has dropped over 100x, overall enterprise spending is increasing. This paradox is explained by powerful multipliers: demand for frontier models, longer reasoning chains, and multi-step agentic workflows that consume orders of magnitude more tokens.
While the cost for GPT-4 level intelligence has dropped over 100x, total enterprise AI spend is rising. This is driven by multipliers: using larger frontier models for harder tasks, reasoning-heavy workflows that consume more tokens, and complex, multi-turn agentic systems.
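The multiplier effect described above is easy to see in a back-of-envelope calculation. This sketch is purely illustrative: the 100x price drop comes from the takeaways, but the specific workload numbers (prices per million tokens, tokens per task, task counts) are hypothetical round figures chosen to show how the multipliers can swamp the price decline.

```python
# Illustrative arithmetic only: the 100x price drop is from the episode
# summaries above; every other number here is a hypothetical round figure.

def total_spend(price_per_mtok, tokens_per_task_mtok, tasks):
    """Total spend = unit price x tokens per task x number of tasks."""
    return price_per_mtok * tokens_per_task_mtok * tasks

# Early workload: expensive tokens, short single-shot prompts, few tasks.
before = total_spend(price_per_mtok=30.0, tokens_per_task_mtok=0.005, tasks=1_000)

# Later workload: per-token price down 100x, but frontier models, long
# contexts, reasoning chains, and multi-step agents multiply tokens per
# task, and cheaper tasks mean far more of them get run.
after = total_spend(price_per_mtok=0.30, tokens_per_task_mtok=0.5, tasks=50_000)

print(f"before: ${before:,.2f}")  # before: $150.00
print(f"after:  ${after:,.2f}")   # after:  $7,500.00
```

Under these assumed numbers, a 100x cheaper token still yields a 50x larger bill, which is the Jevons dynamic the hosts describe.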
The AI market has two opposing trends: a dramatic collapse in token prices for equivalent models (down 150x in 21 months) and unprecedented revenue growth. The explosion in utilization and value creation is massively outpacing cost reductions, a sign of a healthy, expanding market.
The host experienced Jevons paradox firsthand: after switching from a barely-used enterprise ChatGPT to the more efficient OpenClaw, usage exploded. Costs were on track to exceed the company's payroll, showing how efficiency gains in AI can lead to unsustainable increases in consumption.
While cutting-edge AI is extremely expensive at launch, its cost falls steeply: a reasoning benchmark that cost OpenAI $4,500 per question in late 2024 cost only $11 a year later. This deflation curve means even the most advanced capabilities quickly become accessible to the mass market.
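The two deflation figures quoted in these takeaways (150x over 21 months, and $4,500 down to $11 in about a year) imply remarkably short price-halving times. A quick back-of-envelope check, taking both declines as roughly exponential:

```python
import math

# Implied price-halving time, assuming a steady exponential decline
# of `factor`x over `months` months.
def halving_months(factor, months):
    """Months for the price to halve, given a total decline of factor-x."""
    return months / math.log2(factor)

print(round(halving_months(150, 21), 1))        # 150x drop in 21 months -> 2.9
print(round(halving_months(4500 / 11, 12), 1))  # $4,500 to $11 in ~a year -> 1.4
```

By either measure, the price of a fixed capability is halving every one to three months, far faster than classic hardware cost curves.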