The economic principle that 'shortages create gluts' is playing out in AI. The current scarcity of specialized talent and chips creates massive profit incentives for new supply to enter the market, which will eventually lead to an overcorrection and a future glut, as seen historically in the chip industry.

Related Insights

Firms like OpenAI and Meta claim a compute shortage while also exploring selling compute capacity. This isn't a contradiction but a strategic evolution. They are buying all available supply to secure their own needs and then arbitraging the excess, effectively becoming smaller-scale cloud providers for AI.

The massive capital investment in AI infrastructure is predicated on the belief that more compute will always lead to better models (scaling laws). If this relationship breaks, the glut of data center capacity will have no ROI, triggering a severe recession in the tech and semiconductor sectors.
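As a rough sketch, scaling results are often summarized as a power law in training compute. The constants in the toy function below are purely illustrative, chosen only so the outputs look loss-like; they are not fitted values.

```python
# Illustrative power-law scaling curve: loss falls as a power of training compute.
# The exponent, coefficient, and floor are made-up placeholders, not fitted values.

def loss(compute_flops: float, a: float = 200.0, b: float = 0.1, floor: float = 1.7) -> float:
    """Toy scaling law: loss = floor + a * compute^(-b)."""
    return floor + a * compute_flops ** -b

for c in (1e21, 1e22, 1e23, 1e24):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")

# Each additional 10x of compute buys a smaller absolute improvement. If the
# curve flattens earlier than expected (a smaller exponent or a higher floor),
# the marginal data center stops paying for itself.
```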

When power (watts) is the primary constraint for data centers, the total cost of compute becomes secondary; the crucial metric is performance-per-watt. This gives the most efficient chipmakers substantial pricing power, because customers will pay a steep premium for hardware that extracts the most output from a fixed power budget.
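A back-of-the-envelope sketch of why efficiency sets pricing power when watts are the binding constraint. The chip specs, prices, and the 100 MW budget below are all hypothetical:

```python
# When the site's power budget is fixed, throughput is capped by
# performance-per-watt, so a more efficient chip is worth more per unit.
# All numbers below are hypothetical.

POWER_BUDGET_MW = 100  # fixed site power budget

chips = {
    # name: (tokens/sec per watt, price per chip, watts per chip)
    "chip_a": (10.0, 30_000, 1_000),
    "chip_b": (15.0, 45_000, 1_000),
}

for name, (tok_per_watt, price, watts) in chips.items():
    n_chips = POWER_BUDGET_MW * 1_000_000 // watts
    throughput = n_chips * watts * tok_per_watt   # tokens/sec for the whole site
    capex = n_chips * price
    print(f"{name}: {n_chips} chips, {throughput:.2e} tok/s, capex ${capex / 1e9:.1f}B")

# chip_b delivers 50% more tokens from the same 100 MW even though each chip
# costs 50% more: with power as the binding constraint, buyers compare
# tokens-per-watt, not dollars-per-chip.
```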

While an AI bubble seems negative, the overproduction of compute power creates a favorable environment for companies that consume it. As prices for compute drop, their cost of goods sold decreases, leading to higher gross margins and better business fundamentals.
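A toy gross-margin calculation makes the mechanics concrete; all revenue and cost figures are invented:

```python
# Toy gross-margin calculation for an AI application company whose main
# cost of goods sold is inference compute. All figures are invented.

def gross_margin(revenue: float, compute_cost: float, other_cogs: float) -> float:
    cogs = compute_cost + other_cogs
    return (revenue - cogs) / revenue

revenue = 100.0       # $M per year
other_cogs = 10.0     # $M per year, non-compute COGS
for compute_cost in (60.0, 40.0, 20.0):   # falling price of compute
    margin = gross_margin(revenue, compute_cost, other_cogs)
    print(f"compute COGS ${compute_cost:.0f}M -> gross margin {margin:.0%}")

# Margin rises from 30% to 50% to 70%: every dollar shaved off compute prices
# by the oversupply flows straight into the consumer's gross margin.
```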

The current AI investment surge is a dangerous "resource grab" phase, not a typical bubble. Companies are desperately securing scarce resources—power, chips, and top scientists—driven by existential fear of being left behind. This isn't a normal CapEx cycle; the spending is almost guaranteed to continue until a dead end is proven.

While compute and capital are often cited as AI bottlenecks, the most significant limiting factor is the lack of human talent. There is a fundamental shortage of AI practitioners and data scientists, a gap that current university output and immigration policies are failing to fill, making expertise the most constrained resource.

The AI buildout won't be stopped by technological limits or lack of demand. The true barrier will be economics: when the marginal capital provider determines that the diminishing returns from massive investments no longer justify the cost.
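One way to frame that stopping condition as a simple rule, with placeholder tranche sizes and cash flows:

```python
# Toy stopping rule for the marginal capital provider: keep funding new
# data-center tranches while the expected annual return on the *next*
# tranche exceeds the hurdle rate. All numbers are placeholders.

HURDLE_RATE = 0.12       # required annual return on capital
tranche_capex = 10.0     # $B per tranche

# Expected annual cash flow from each successive tranche, shrinking as the
# easiest customers and use cases are served first.
expected_cash_flows = [2.5, 2.0, 1.6, 1.3, 1.0, 0.8]   # $B/year per tranche

for i, cash_flow in enumerate(expected_cash_flows, start=1):
    marginal_return = cash_flow / tranche_capex
    funded = marginal_return >= HURDLE_RATE
    print(f"tranche {i}: return {marginal_return:.0%} -> {'fund' if funded else 'stop'}")
    if not funded:
        break

# The buildout halts at the first tranche whose diminishing return drops below
# the hurdle rate, not when technology or demand hits a wall.
```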

The comparison of the AI hardware buildout to the dot-com "dark fiber" bubble is flawed because there are no "dark GPUs": virtually all deployed compute is being used. As hardware efficiency improves and token costs fall, the Jevons paradox suggests demand will expand rather than shrink, unlocking countless new AI applications and ensuring that demand continues to absorb available supply.
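A minimal constant-elasticity sketch of the Jevons-style argument; the elasticity of -2 is an assumption, not a measurement:

```python
# Jevons-style arithmetic: if demand for AI tokens is price-elastic
# (elasticity magnitude > 1), a fall in token price *increases* total
# compute consumed. The elasticity value here is an assumption.

def tokens_demanded(price: float, base_price: float = 1.0,
                    base_tokens: float = 1.0, elasticity: float = -2.0) -> float:
    """Constant-elasticity demand curve: Q = Q0 * (P / P0) ** elasticity."""
    return base_tokens * (price / base_price) ** elasticity

for price in (1.0, 0.5, 0.25, 0.1):
    q = tokens_demanded(price)
    spend = price * q
    print(f"price {price:4.2f} -> tokens {q:6.1f}x, total spend {spend:5.1f}x")

# With an elasticity of -2, a 10x price drop yields 100x the tokens and 10x
# the total spend: cheaper tokens enlarge, rather than shrink, the market for
# the hardware that produces them.
```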

The massive capital rush into AI infrastructure mirrors past tech cycles where excess capacity was built, leading to unprofitable projects. While large tech firms can absorb losses, the standalone projects and their supplier ecosystems (power, materials) are at risk if anticipated demand doesn't materialize.

The current AI investment boom is focused on massive infrastructure build-outs. A counterintuitive threat to this trade is not that AI fails, but that it becomes more compute-efficient. This would reduce infrastructure demand, deflating the hardware bubble even as AI proves economically valuable.
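The same arithmetic run in reverse captures the threat: if efficiency compounds faster than demand, the required fleet shrinks. The growth rates below are hypothetical:

```python
# The bear case for hardware: if algorithmic efficiency (useful work per GPU)
# compounds faster than token demand, the required fleet shrinks even while
# AI usage keeps growing. Growth rates below are hypothetical.

gpus_needed = 1.0          # normalized fleet size today
demand_growth = 1.5        # demand grows 50% per year
efficiency_growth = 2.0    # work per GPU doubles per year

for year in range(1, 5):
    gpus_needed *= demand_growth / efficiency_growth
    print(f"year {year}: relative fleet size {gpus_needed:.2f}x")

# The fleet requirement falls about 25% per year under these assumptions:
# AI succeeds, usage grows, and the infrastructure trade still deflates.
```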