The massive capital expenditure in AI is largely confined to the "superintelligence quest" camp, which bets on godlike AI transforming the economy. Companies focused on applying current AI to create immediate economic value are not necessarily in a bubble.
The AI landscape has three groups: 1) Frontier labs on a "superintelligence quest," absorbing most capital. 2) Fundamental researchers who think the current approach is flawed. 3) Pragmatists building value with today's "good enough" AI.
The current AI spending spree by tech giants is historically reminiscent of the railroad and fiber-optic bubbles. These eras saw massive, redundant capital investment based on technological promise, which ultimately led to a crash when it became clear customers weren't willing to pay for the resulting products.
The current AI boom is more fundamentally sound than past tech bubbles. Tech sector earnings are greater than capital expenditures, and investments are not primarily debt-financed. The leading companies are well-capitalized with committed founders, suggesting the technology's endurance even if some valuations prove frothy.
The current AI investment surge is a dangerous "resource grab" phase, not a typical bubble. Companies are desperately securing scarce resources—power, chips, and top scientists—driven by existential fear of being left behind. This isn't a normal CapEx cycle; the spending is all but guaranteed to continue until the approach is proven a dead end.
The current AI spending frenzy uniquely merges elements from all major historical bubbles—real estate (data centers), technology, loose credit, and a government backstop—making a soft landing improbable. This convergence of risk factors is unprecedented.
David Kostin argues public AI stocks aren't in a bubble because earnings growth has matched price increases. The real bubble is in private markets, where a dynamic George Soros called "reflexivity" operates: a recursive loop in which new capital inflates valuations, which in turn attracts more capital, pushing prices to unsustainable levels.
The startup landscape now operates under two different sets of rules. Non-AI companies face intense scrutiny on traditional business fundamentals like profitability. In contrast, AI companies exist in a parallel reality of 'irrational exuberance,' where compelling narratives justify sky-high valuations.
The most immediate systemic risk from AI may not be mass unemployment but an unsustainable financial market bubble. Sky-high valuations of AI-related companies pose a more significant short-term threat to economic stability than the still-developing impact of AI on the job market.
Current AI spending appears bubble-like, but it's not propping up unprofitable operations. Inference is already profitable. The immense cash burn is a deliberate, forward-looking investment in developing future, more powerful models, not a sign of a failing business model. This reframes the financial risk: the danger is overpaying for future capability, not subsidizing a money-losing core product.
The current AI investment boom is focused on massive infrastructure build-outs. A counterintuitive threat to this trade is not that AI fails, but that it becomes more compute-efficient. This would reduce infrastructure demand, deflating the hardware bubble even as AI proves economically valuable.