
The primary bottleneck for AI inference is now memory (HBM), not compute. To circumvent this, industry giants Nvidia and AWS are striking multi-billion-dollar deals for systems from Groq and Cerebras that use on-chip SRAM, which is faster and not subject to the same supply constraints.

Related Insights

The AI inference process involves two distinct phases: "prefill" (reading the prompt, which is compute-bound) and "decode" (writing the response, which is memory-bound). NVIDIA GPUs excel at prefill, while companies like Groq optimize for decode. The Groq-NVIDIA deal signals a future of specialized, complementary hardware rather than one-size-fits-all chips.
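Why prefill is compute-bound but decode is memory-bound follows from a simple roofline argument: prefill amortizes one read of the model weights over every prompt token, while decode re-reads all the weights to produce a single token. A minimal sketch, using illustrative (not vendor-specific) chip numbers:

```python
def phase_bound(tokens_per_pass,
                params=70e9,        # hypothetical 70B-parameter model
                bytes_per_param=2,  # fp16/bf16 weights
                peak_flops=1e15,    # ~1 PFLOP/s peak compute (illustrative)
                mem_bw=3e12):       # ~3 TB/s HBM bandwidth (illustrative)
    """Classify a forward pass as compute- or memory-bound via a roofline check."""
    flops = 2 * params * tokens_per_pass    # ~2 FLOPs per parameter per token
    bytes_moved = params * bytes_per_param  # all weights are read once per pass
    intensity = flops / bytes_moved         # FLOPs per byte of weight traffic
    balance = peak_flops / mem_bw           # chip's FLOPs-per-byte break-even point
    return "compute-bound" if intensity > balance else "memory-bound"

print(phase_bound(tokens_per_pass=2048))  # prefill: whole prompt in one pass
print(phase_bound(tokens_per_pass=1))     # decode: one token per pass
```

With these numbers, a 2048-token prefill pass sits far above the chip's break-even intensity, while single-token decode sits far below it, which is exactly the gap that SRAM-heavy decode chips target.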

Existing AI chips force a trade-off: high-throughput HBM memory (NVIDIA, Google) has high latency, while low-latency SRAM memory (Groq) has poor throughput. MatX's architecture combines both, putting model weights in fast SRAM and inference data in high-capacity HBM to achieve both low latency and high throughput.

AI workloads are limited by memory bandwidth, not capacity. While commodity DRAM offers more bits per wafer, its bandwidth is over an order of magnitude lower than specialized HBM. This speed difference would starve the GPU's compute cores, making the extra capacity useless and creating a massive performance bottleneck.
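The "starved compute" claim can be made concrete: since decode re-reads the full weight set per token, memory bandwidth puts a hard ceiling on tokens per second regardless of how many compute cores sit idle. A back-of-envelope sketch with order-of-magnitude bandwidth figures (illustrative, not specific products):

```python
def max_decode_tok_s(mem_bw_gb_s, params=70e9, bytes_per_param=2):
    """Upper bound on decode tokens/sec: every token re-reads all model weights."""
    return mem_bw_gb_s * 1e9 / (params * bytes_per_param)

# Illustrative bandwidth figures for a hypothetical 70B fp16 model:
hbm = max_decode_tok_s(3000)   # ~3 TB/s specialized HBM stack
dram = max_decode_tok_s(200)   # ~200 GB/s commodity DRAM channel group

print(f"HBM ceiling:  {hbm:.1f} tok/s")
print(f"DRAM ceiling: {dram:.1f} tok/s")
```

Under these assumptions the commodity-DRAM ceiling is roughly 15x lower, so the extra capacity per wafer cannot be turned into throughput: the compute cores spend most cycles waiting on weight reads.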

The next wave of AI silicon may pivot from today's compute-heavy architectures to memory-centric ones optimized for inference. This fundamental shift would allow high-performance chips to be produced on older, more accessible 7-14nm manufacturing nodes, disrupting the current dependency on cutting-edge fabs.

While competitors chased cutting-edge process nodes, AI chip company Groq used a more conservative process technology but loaded its chip with on-die memory (SRAM). This seemingly less advanced architectural choice proved perfectly suited for the "decode" phase of AI inference, a critical bottleneck, and led to its licensing deal with NVIDIA.

NVIDIA's approach requires connecting thousands of Groq chips, creating latency bottlenecks. Cerebras's CEO argues its single, integrated wafer-scale system avoids this "interconnect tax," offering superior memory bandwidth and performance for massive models by eliminating the wiring between thousands of tiny chips.

While NVIDIA's GPUs have been the primary AI constraint, the bottleneck is now moving to other essential subsystems. Memory, networking interconnects, and power management are emerging as the next critical choke points, signaling a new wave of investment opportunities in the hardware stack beyond core compute.

Nvidia bought Groq not just for its chips, but for its specialized SRAM architecture. This technology excels at low-latency inference, a segment where users are now willing to pay a premium for speed. This strategic purchase diversifies Nvidia's portfolio to capture the emerging, high-value market of agentic reasoning workloads.

While training has been the focus, user experience and revenue happen at inference. OpenAI's massive deal with chip startup Cerebras is for faster inference, showing that response time is a critical competitive vector that determines whether AI becomes utility infrastructure or remains a novelty.

VC Josh Wolfe argues the AI narrative will shift from data center dominance to on-device inference. Citing Apple research on running LLMs on flash memory, he predicts a coming glut in data center capacity and a scarcity of on-device memory, favoring players like Micron and Samsung.