The future of compute demand is a tale of two opposing forces. Enterprises will use AI to compress redundant data and streamline operations, reducing compute costs. Consumers, however, will demand generative AI for entertainment and personalization (e.g., 'Star Wars with my face'), creating massive new compute needs.
Firms like OpenAI and Meta claim a compute shortage while also exploring selling compute capacity. This isn't a contradiction but a strategic evolution. They are buying all available supply to secure their own needs and then arbitraging the excess, effectively becoming smaller-scale cloud providers for AI.
The vast network of consumer devices represents a massive, underutilized compute resource. Companies like Apple and Tesla could schedule AI workloads onto these devices while they sit idle, creating a virtual cloud in which the users have already paid for the hardware (the CapEx).
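To make the idea concrete, here is a minimal sketch of what such idle-device scheduling could look like. Every name in it (Device, is_idle, dispatch) is hypothetical, invented for illustration; this is not any real Apple or Tesla API.

```python
# A minimal sketch of idle-device scheduling; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    plugged_in: bool
    screen_off: bool
    battery_pct: int

    def is_idle(self) -> bool:
        # Only borrow compute when the owner won't notice: charging, screen off.
        return self.plugged_in and self.screen_off and self.battery_pct > 80

def dispatch(fleet: list[Device], work_shards: list[str]) -> dict[str, str]:
    """Assign inference shards to idle devices whose owners paid the CapEx."""
    assignments = {}
    idle_devices = (d for d in fleet if d.is_idle())
    for shard, device in zip(work_shards, idle_devices):
        assignments[shard] = device.device_id
    return assignments

fleet = [Device("phone-1", True, True, 95), Device("car-2", False, True, 60)]
print(dispatch(fleet, ["shard-a", "shard-b"]))  # only the charging phone qualifies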
The focus in AI has shifted from rapid gains in software capability to the physical constraints on its adoption. Demand for compute is expected to significantly outstrip supply, making infrastructure, not algorithms, the defining bottleneck for future growth.
In the current market, AI companies see explosive growth through two primary vectors: attaching to the massive AI compute spend or directly replacing human labor. Companies that merely use AI to improve an existing product, without hitting either driver, risk being discounted because they lack a clear, exponential growth narrative.
The next major hardware cycle will be driven by user demand for local AI models that run on personal machines, keeping data private and beyond the reach of corporate or government surveillance. This shift away from a purely cloud-centric paradigm will spark massive demand for more powerful personal computers and laptops.
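As a concrete example, a model whose weights are cached locally can serve prompts without anything leaving the machine. The sketch below uses Hugging Face's transformers pipeline with gpt2 purely as a small stand-in for whatever local model a user actually prefers.

```python
# A minimal sketch of local, on-device inference via Hugging Face transformers.
# gpt2 is a stand-in model choice; once the weights are downloaded, no prompt
# or completion ever leaves the machine.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The case for local AI is", max_new_tokens=40)
print(result[0]["generated_text"])
```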
The perceived plateau in AI model performance is specific to consumer applications, where GPT-4-level reasoning is already sufficient. The real future gains are in enterprise and code generation, which still have a massive runway for improvement. Consumer AI needs better integration, not just stronger models.
AI's computational needs do not end with initial training. They compound multiplicatively across post-training (reinforcement learning) and inference (multi-step reasoning), creating a much larger demand profile than previously understood and driving a billion-X increase in compute.
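A back-of-envelope sketch of that compounding, where every multiplier is an illustrative assumption rather than a measured figure:

```python
# Hedged back-of-envelope: how demand compounds beyond pretraining.
# All multipliers below are illustrative assumptions, not measurements.
pretrain = 1.0                   # baseline: one pretraining run
post_training = 10.0             # assumed RL/post-training overhead vs. pretraining
reasoning_tokens = 100.0         # assumed multi-step reasoning inflation per query
users_and_queries = 1_000_000.0  # assumed scale-out across users over a model's life

total = pretrain * post_training * reasoning_tokens * users_and_queries
print(f"{total:.0e}x baseline compute")  # 1e+09x: the "billion-X" demand profile
```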
Arvind Krishna forecasts a 1000x drop in AI compute costs over five years. This won't just come from better chips (a 10x gain). It will be compounded by new processor architectures (another 10x) and major software optimizations like model compression and quantization (a final 10x).
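The arithmetic behind the forecast is simple compounding: three independent 10x gains multiply to 1000x.

```python
# Krishna's 1000x cost-reduction forecast decomposed into its three 10x factors.
chips = 10          # better silicon: process nodes, packaging
architectures = 10  # new processor designs beyond today's GPUs
software = 10       # model compression, quantization, and similar optimizations

print(chips * architectures * software)  # 1000x over ~5 years
```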
The success of personal AI assistants signals a massive shift in compute usage. While training models is resource-intensive, the next 10x in demand will come from widespread, continuous inference as millions of users run these agents. In effect, each consumer is buying a fraction of a datacenter GPU like the GB200.
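A hedged back-of-envelope of what "a fraction of a GB200" means per user; both throughput numbers below are assumptions chosen for illustration, not published specs.

```python
# Illustrative only: both figures are assumptions, not GB200 specifications.
gb200_tokens_per_sec = 50_000  # assumed aggregate serving throughput of one GB200
user_tokens_per_sec = 50       # assumed continuous agent workload for one user

users_per_gpu = gb200_tokens_per_sec / user_tokens_per_sec
print(f"~{users_per_gpu:.0f} users per GPU, i.e. each user 'owns' "
      f"{1 / users_per_gpu:.1%} of a GB200")  # ~1000 users -> 0.1% each
```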
The biggest risk to the massive AI compute buildout isn't that scaling laws will break, but that consumers will be satisfied with a "115 IQ" AI running for free on their devices. If edge AI is sufficient for most tasks, it undermines the economic model for ever-larger, centralized "God models" in the cloud.