The AI market exhibits two seemingly opposing trends: a dramatic collapse in token prices for equivalent models (down 150x in 21 months) and unprecedented revenue growth. Taken together, these indicate that the explosion in utilization and value creation is massively outpacing cost reductions, signaling a healthy, expanding market.
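The annualized rate implied by that 150x drop can be back-computed with a quick sketch (illustrative arithmetic only, assuming a constant rate of decline):

```python
# A 150x price drop over 21 months, annualized as a constant-rate decline.
total_drop = 150      # price ratio cited in the text
months = 21
annualized = total_drop ** (12 / months)
print(f"implied annualized decline: ~{annualized:.1f}x per year")  # ~17.5x
```

In other words, a steady decline matching that data point would make equivalent-capability tokens roughly 17x cheaper every twelve months.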
AI companies are achieving revenue milestones at an unprecedented rate. Data shows AI labs growing from $1B to $10B in revenue in roughly one year, a feat that took Salesforce 8-9 years. This signals a dramatic acceleration in market adoption and value creation.
The cost for a given level of AI capability has decreased by a factor of 100 in just one year. This radical deflation in the price of intelligence requires a complete rethinking of business models and future strategies, as intelligence becomes an abundant, cheap commodity.
The comparison of the AI hardware buildout to the dot-com "dark fiber" bubble is flawed because there are no "dark GPUs": all available compute is being used. As hardware efficiency improves and token costs fall, cheaper compute will unlock countless new AI applications (Jevons paradox), ensuring that demand continues to absorb all available supply.
A paradox exists where the cost for a fixed level of AI capability (e.g., GPT-4 level) has dropped 100-1000x. However, overall enterprise spend is increasing because applications now use frontier models with massive contexts and multi-step agentic workflows, creating huge multipliers on token usage that drive up total costs.
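The paradox can be made concrete with a toy calculation. The multiplier values below are hypothetical placeholders chosen for illustration, not figures from the text; only the 100x per-token price drop comes from the source:

```python
# Illustrative (hypothetical) multipliers showing how total spend can rise
# even as per-token prices collapse.
price_drop = 100          # per-token price down 100x (from the text)
frontier_premium = 10     # hypothetical: frontier vs. older-model price gap
reasoning_tokens = 20     # hypothetical: longer contexts / reasoning chains
agent_steps = 10          # hypothetical: multi-step agentic workflows

usage_multiplier = frontier_premium * reasoning_tokens * agent_steps  # 2000x
net_spend_change = usage_multiplier / price_drop
print(f"net spend change per task: {net_spend_change:.0f}x")  # 20x increase
```

Even modest assumptions on each multiplier compound quickly enough to overwhelm a 100x price decline.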
The cost of AI, priced in "tokens by the drink," is falling dramatically. All inputs are on a downward cost curve, leading to a hyper-deflationary effect on the price of intelligence. This, in turn, fuels massive demand elasticity as more use cases become economically viable.
While the cost to achieve a fixed capability level (e.g., GPT-4 at launch) has dropped over 100x, overall enterprise spending is increasing. This paradox is explained by powerful multipliers: demand for frontier models, longer reasoning chains, and multi-step agentic workflows that consume orders of magnitude more tokens per task.
Countering the narrative of insurmountable training costs, Jensen Huang argues that architectural, algorithmic, and computing stack innovations are driving down AI costs far faster than Moore's Law. He predicts a billion-fold cost reduction for token generation within a decade.
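Huang's billion-fold prediction implies a constant annual cost reduction of roughly 8x, which a one-line computation confirms (the smooth, constant-rate assumption is mine, not his):

```python
# A 1,000,000,000x cost reduction over 10 years,
# expressed as a constant annual improvement factor.
total_reduction = 1e9
years = 10
annual_factor = total_reduction ** (1 / years)
print(f"required annual cost reduction: ~{annual_factor:.1f}x")  # ~7.9x
```

For comparison, Moore's Law implies roughly 2x every two years, so this trajectory would be far steeper.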
While the cost for GPT-4 level intelligence has dropped over 100x, total enterprise AI spend is rising. This is driven by multipliers: using larger frontier models for harder tasks, reasoning-heavy workflows that consume more tokens, and complex, multi-turn agentic systems.
While cutting-edge AI is extremely expensive, its cost drops dramatically fast. A reasoning benchmark that cost OpenAI $4,500 per question in late 2024 cost only $11 a year later. This steep deflation curve means even the most advanced capabilities quickly become accessible to the mass market.
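The per-question figures quoted above imply a deflation factor of roughly 400x in a single year:

```python
# Benchmark cost per question, from the figures in the text.
cost_late_2024 = 4500       # USD per question
cost_one_year_later = 11    # USD per question
deflation = cost_late_2024 / cost_one_year_later
print(f"one-year deflation: ~{deflation:.0f}x")  # ~409x
```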
Unlike the dot-com era where valuations far outpaced a small, slow user base, the current AI shift is driven by products with immediate, massive adoption and revenue. The technology is delivering value today, not just promising it for the future, which fundamentally changes the financial dynamics.