
AMD's success isn't just about stealing market share from competitors. The rise of "agentic inference" in AI is massively expanding the total addressable market for data center CPUs. Rather than a zero-sum share grab, this new demand creates greenfield growth opportunities for all major players.

Related Insights

While GPUs dominate AI hardware discussions, the proliferation of AI agents is causing a significant, often overlooked, CPU shortage. Agents rely on CPUs for web queries, data processing, and other tasks needed to feed GPUs, straining existing infrastructure and driving new demand for companies like Arm and Intel.

Meta is deprioritizing its custom silicon program, opting for large orders of AMD's chips. This reflects a broader trend among hyperscalers: the urgent need for massive, immediate compute power is outweighing the long-term strategic goal of self-sufficiency and avoiding the "Nvidia tax."

AI's evolution from training-heavy (GPU-dominant) to inference- and agent-heavy (CPU-intensive) workflows could invert the traditional data center chip ratio. This represents a seismic shift, creating a massive tailwind for CPU manufacturers like Intel.

The focus on GPUs for AI overlooks a critical bottleneck: CPU shortages. AI agents require massive CPU power for non-GPU tasks like web queries and data prep. This demand is straining existing infrastructure and creating new market opportunities for CPU makers like Arm.

The current AI boom focuses on GPUs for "thinking" (Gen AI). The next phase, "Agentic AI" for "doing," will rely heavily on CPUs for task orchestration and memory for context, creating new investment opportunities in this previously overshadowed hardware.

The demand for AI processing power so vastly outstrips supply that it creates a "compute deficit." This forces major AI players to adopt any viable chip solution they can find, including AMD's. It's not about being better than Nvidia; it's about being available, which ensures a market for second- and third-tier suppliers.

The AI compute narrative is shifting from GPUs for training to CPUs for agentic workflows. This creates a massive new demand for processors to orchestrate tasks, manage inference, and coordinate data centers, directly fueling Intel's comeback and flipping the expected CPU-to-GPU ratio.

The AI narrative has focused on GPUs for training, but the proliferation of AI agents for task execution is creating a massive, overlooked demand for CPUs. This shift to inference and orchestration is reversing Intel's recent decline.

The inference market is too large to remain monolithic. It will fragment into specialized platforms for different use cases such as real-time video, long-running agents, and language models. This specialization will extend to hardware: high-throughput tasks with relaxed latency requirements (like agents) will favor cheaper AMD and Intel chips over Nvidia's top GPUs.

SiFive's Krste Asanović highlights that while GPUs are the focus of the AI boom, the CPUs that feed them data are a critical bottleneck. As AI accelerates tasks like coding by 30x, the corresponding CPU-bound tasks like compiling also need a 30x speedup, driving demand for specialized CPU IP.