
While GPUs train models, CPUs are essential for two key workloads: running reinforcement learning environments and executing the code generated by AI. This has created a massive, often overlooked demand spike, making CPUs a sold-out component and a hidden bottleneck in the AI infrastructure stack.

Related Insights

While GPUs dominate AI hardware discussions, the proliferation of AI agents is causing a significant, often overlooked, CPU shortage. Agents rely on CPUs for web queries, data processing, and other tasks needed to feed GPUs, straining existing infrastructure and driving new demand for companies like Arm and Intel.

The focus in AI has evolved from rapid software capability gains to the physical constraints of its adoption. The demand for compute power is expected to significantly outstrip supply, making infrastructure—not algorithms—the defining bottleneck for future growth.

The critical constraint on AI and future computing is not energy consumption but access to leading-edge semiconductor fabrication capacity. With data centers already consuming over 50% of advanced fab output, consumer hardware like gaming PCs will be priced out, accelerating a fundamental shift where personal devices become mere terminals for cloud-based workloads.

The focus on GPUs for AI overlooks a critical bottleneck: a CPU shortage. AI agents require massive CPU power for non-GPU tasks like web queries and data prep. This demand is straining existing infrastructure and creating new market opportunities for CPU makers like Arm.

The focus on GPUs for AI overlooks a critical bottleneck: a growing CPU shortage. AI agents rely heavily on CPUs for orchestration tasks like tool calls, database queries, and web searches. This hidden demand is causing hyperscalers to lock in multi-year CPU supply contracts.

A speaker theorizes that increased cloud outages are not random. Cloud providers, rushing to buy GPUs for AI, have underinvested in refreshing their general-purpose CPU infrastructure. With CPUs now hitting their 5-year end-of-life and new AI-related CPU demand rising, the system is becoming strained and unstable.

Previously, the biggest constraint in AI was compute for training next-gen models. Now, the critical bottleneck is providing enough compute for *inference*—the real-time processing of queries from a rapidly growing user base.

SiFive's Krste Asanović highlights that while GPUs are the focus of the AI boom, the CPUs that feed them data are a critical bottleneck. As AI accelerates tasks like coding by 30x, the corresponding CPU-bound tasks like compiling also need a 30x speedup, driving demand for specialized CPU IP.

After the current memory crunch, the next AI infrastructure bottleneck will be CPUs and networking. The complex orchestration required by emerging agentic AI systems will strain these resources, a trend already visible at companies like Fastly, which are seeing demand spikes from workload orchestration alone.

While GPUs get the headlines, AI expert Tae Kim warns of a major coming CPU shortage. The complex orchestration, tool calls, and database queries required by AI agents are creating huge demand for CPU cores, a trend confirmed by major chipmakers and hyperscalers.