Current AI spending appears bubble-like, but it's not propping up unprofitable operations. Inference is already profitable. The immense cash burn is a deliberate, forward-looking investment in developing future, more powerful models, not a sign of a failing business model. This reframes the financial risk: the question is whether the next generation of models will justify the investment, not whether today's operations can sustain themselves.
The strongest evidence that corporate AI spending is generating real ROI is that major tech companies are not just re-ordering NVIDIA's chips, but accelerating those orders quarter over quarter. This sustained, growing demand from repeat customers validates the AI trend as a durable boom.
The AI race has been a prisoner's dilemma where companies spend massively, fearing competitors will pull ahead. As the cost of next-gen systems like Blackwell and Rubin becomes astronomical, the sheer economics will force a shift. Decision-making will be dominated by ROI calculations rather than the existential dread of slowing down.
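The prisoner's-dilemma dynamic can be sketched with a toy payoff matrix. All numbers below are hypothetical, chosen only to illustrate why "spend" is a dominant strategy even though mutual restraint would leave both players better off:

```python
# Illustrative prisoner's dilemma for AI capex, with hypothetical payoffs
# (relative profit units). "spend" = aggressive buildout, "hold" = restraint.
# Each entry: (payoff_A, payoff_B).
PAYOFFS = {
    ("hold", "hold"):   (3, 3),   # mutual restraint: healthy margins for both
    ("hold", "spend"):  (0, 5),   # the spender captures the market
    ("spend", "hold"):  (5, 0),
    ("spend", "spend"): (1, 1),   # arms race: huge capex, thin returns
}

def best_response(opponent_move: str) -> str:
    """Player A's best reply to a fixed move by player B."""
    return max(["hold", "spend"],
               key=lambda a: PAYOFFS[(a, opponent_move)][0])

# "spend" dominates regardless of what the rival does, so both firms land
# at the (1, 1) outcome even though (3, 3) was available.
assert best_response("hold") == "spend"
assert best_response("spend") == "spend"
```

The claim in the text is that astronomical Blackwell/Rubin-era costs effectively rewrite these payoffs: once the cost of "spend" is high enough, it stops being the dominant move and ROI analysis takes over.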
The current AI boom isn't just another tech bubble; it's a "bubble with bigger variance." The potential for massive upswings is matched by the risk of equally significant downswings. Investors and founders must have an unusually high tolerance for risk and volatility to succeed.
Despite bubble fears, NVIDIA's record earnings signal a virtuous cycle. The real long-term growth is not just from model training but from the coming explosion in inference demand required for AI agents, robotics, and multimodal AI integrated into every device and application.
Current AI investment patterns mirror the "round-tripping" seen in the late '90s tech bubble. For example, NVIDIA invests billions in a startup like OpenAI, which then uses that capital to purchase NVIDIA chips. This creates an illusion of demand and inflated valuations, masking the lack of real, external customer revenue.
Unlike the speculative "dark fiber" buildout of the dot-com bubble, today's AI infrastructure race is driven by real, immediate, and overwhelming demand. The problem isn't a lack of utilization for built capacity; it's a constant struggle to build supply fast enough to meet customer needs.
AI companies operate under the assumption that LLM prices will trend towards zero. This strategic bet means they intentionally de-prioritize heavy investment in cost optimization today, focusing instead on capturing the market and building features, confident that future, cheaper models will solve their margin problems for them.
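The bet that falling model prices will repair margins can be made concrete with a minimal sketch. The figures here (a $20/month product, $15/month inference cost, a 50% annual cost decline) are illustrative assumptions, not sourced numbers:

```python
# Hypothetical sketch: if per-token LLM prices fall a fixed fraction per
# year while the product's price stays flat, gross margin improves with
# no cost-optimization work at all. All inputs are assumed for illustration.
def gross_margin(price: float, initial_cost: float,
                 annual_decline: float, years: int) -> float:
    """Gross margin after `years`, with inference cost decaying yearly."""
    cost = initial_cost * (1 - annual_decline) ** years
    return (price - cost) / price

# A product sold at $20/month with $15/month of inference cost today,
# assuming costs halve every year:
for year in range(4):
    print(f"year {year}: margin {gross_margin(20.0, 15.0, 0.50, year):.1%}")
```

Under these assumptions the margin climbs from 25% to over 90% in three years, which is the arithmetic behind "future, cheaper models will solve their margin problems."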
OpenAI now projects spending $115 billion by 2029, a staggering $80 billion more than previously forecast. This massive cash burn funds a vertical integration strategy, including custom chips and data centers, positioning OpenAI to compete directly with infrastructure providers like Microsoft Azure and Google Cloud.
While the AI capex boom may seem unsustainable, the mechanics of shorting it (e.g., buying puts) reveal the extreme difficulty of the trade. The bet requires being correct not just on the eventual downturn but on its precise timing. The risk of losing the entire premium makes it an unattractive risk-adjusted bet.
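The timing problem can be shown with a one-function payoff sketch. The strike, premium, and prices below are hypothetical; the point is only that a put's payoff at expiry is max(strike − spot, 0) minus the premium, so a crash that arrives after expiry still costs the full premium:

```python
# Hedged sketch of why timing dominates a put-based short. All numbers
# are assumed for illustration, not real market data.
def put_pnl(strike: float, premium: float, spot_at_expiry: float) -> float:
    """P&L per share of buying one put and holding it to expiry."""
    return max(strike - spot_at_expiry, 0.0) - premium

STRIKE = 180.0
PREMIUM = 12.0   # assumed cost of the put

# Crash lands *before* expiry: stock at 120, the trade pays off.
print(put_pnl(STRIKE, PREMIUM, 120.0))   # 48.0 profit per share

# Crash lands *after* expiry: stock still at 200 on expiry day, so the
# entire premium is lost even if the thesis proves right a month later.
print(put_pnl(STRIKE, PREMIUM, 200.0))   # -12.0, full premium gone
```

Being right about the downturn but one expiry cycle early produces the second outcome, which is why the text calls it an unattractive risk-adjusted bet.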
Michael Burry, known for predicting the 2008 crash, argues the AI bubble isn't about the technology's potential but about the massive capital expenditure on infrastructure (chips, data centers) that he believes far outpaces actual end-user demand and economic utility.