Beyond the simple training-inference binary, Arm's CEO sees a third category of AI silicon: chips for smaller, specialized models trained with reinforcement learning. This hardware will handle both training and inference, serving models that act like 'student teachers' taught by giant foundational models.
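That 'student teacher' relationship is essentially knowledge distillation: a small model trained to match the soft predictions of a larger one. A minimal sketch, assuming a PyTorch-style soft-label distillation loss; the temperature and tensor shapes are illustrative, not anything Arm or the labs have disclosed:

```python
# Minimal sketch of the "student teacher" idea: a small model distilled from a
# larger foundational model's soft predictions. Shapes and temperature are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Toy usage: batch of 4 samples, vocabulary of 10 classes.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```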
The next wave of AI silicon may pivot from today's compute-heavy architectures to memory-centric ones optimized for inference. This fundamental shift would allow high-performance chips to be produced on older, more accessible 7-14nm manufacturing nodes, disrupting the current dependency on cutting-edge fabs.
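A rough back-of-the-envelope calculation shows why inference rewards memory bandwidth over peak compute: autoregressive decoding streams every weight once per token, so throughput hits the bandwidth ceiling long before the multiply units saturate. All numbers below are assumptions for illustration, not figures from the piece:

```python
# Back-of-the-envelope: why single-stream LLM decoding is memory-bound.
# All numbers below are illustrative assumptions, not measured figures.

params = 7e9                 # 7B-parameter model (assumed)
bytes_per_param = 2          # fp16/bf16 weights
model_bytes = params * bytes_per_param

mem_bandwidth = 1.0e12       # 1 TB/s of memory bandwidth (assumed)
peak_flops = 300e12          # 300 TFLOP/s of peak compute (assumed)

# Decoding one token touches every weight once: ~2 FLOPs and ~2 bytes per parameter.
flops_per_token = 2 * params
bytes_per_token = model_bytes

tokens_per_s_compute = peak_flops / flops_per_token    # compute-limited ceiling
tokens_per_s_memory = mem_bandwidth / bytes_per_token  # bandwidth-limited ceiling

print(f"compute-bound ceiling: {tokens_per_s_compute:,.0f} tokens/s")
print(f"memory-bound ceiling:  {tokens_per_s_memory:,.0f} tokens/s")
# The memory ceiling (~71 tokens/s here) sits orders of magnitude below the compute
# ceiling (~21,000 tokens/s), which is why inference-first silicon can trade
# peak FLOPs for bandwidth and cheaper, older process nodes.
```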
AI labs like Anthropic are finding that, within just a few months, mid-tier models can be trained with reinforcement learning to outperform their largest, most expensive predecessors, accelerating the pace of capability improvements.
The massive investment in data centers isn't just a bet on today's models. As AI becomes more efficient, smaller yet powerful models will be deployed on older hardware. This extends the serviceable life and economic return of current infrastructure, ensuring today's data centers will still generate value years from now.
AI progress was expected to stall in 2024-2025 as hardware limits caught up with pre-training scaling laws. However, breakthroughs in post-training techniques like reasoning and test-time compute provided a new vector for improvement, bridging the gap until next-generation chips like NVIDIA's Blackwell arrived.
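One concrete (and simplified) reading of 'test-time compute': instead of training a bigger model, spend more inference on each question, for example by sampling several candidate answers and keeping the best-scored one. The sketch below uses hypothetical `generate` and `score` callables as stand-ins for a sampler and a verifier:

```python
# Minimal sketch of test-time compute via best-of-n sampling: spend extra inference
# FLOPs per question instead of extra training FLOPs. `generate` and `score` are
# hypothetical placeholders for a model's sampler and a verifier/reward model.
import random
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 16) -> str:
    """Sample n candidate answers and return the one the verifier scores highest."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

# Toy usage with placeholder functions (no real model involved).
toy_generate = lambda prompt: f"answer-{random.randint(0, 99)}"
toy_score = lambda prompt, answer: -abs(int(answer.split("-")[1]) - 42)
print(best_of_n("What is 6 x 7?", toy_generate, toy_score))
```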
Tesla's decision to stop developing its Dojo training supercomputer is not a failure. It's a strategic shift to focus on designing hyper-efficient inference chips for its vehicles and robots. This vertical integration at the edge, where real-world decisions are made, is seen as more critical than competing with NVIDIA on training hardware.
Anthropic's choice to purchase Google's TPUs via Broadcom, rather than directly or by designing its own chips, indicates a new phase in the AI hardware market. It highlights the rise of specialized manufacturers as key suppliers, creating a more complex and diversified hardware ecosystem beyond just NVIDIA and the major AI labs.
OpenAI is designing its custom chip for flexibility, not just raw performance on current models. The team learned that the biggest efficiency gains, on the order of 100x, come from evolving algorithms (e.g., the move from dense to sparse transformers), so the hardware must stay adaptable to future architectural changes.
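To see why a dense-to-sparse shift upends hardware assumptions, here is a toy comparison of a dense feed-forward block against a mixture-of-experts layout. The dimensions and expert counts are assumptions chosen for illustration, not any lab's actual configuration:

```python
# Sketch of why a dense-to-sparse (mixture-of-experts) shift changes hardware needs.
# Shapes and expert counts are illustrative assumptions only.

d_model = 4096
d_ff = 4 * d_model

# Dense transformer FFN: every token passes through the full feed-forward block.
dense_params = 2 * d_model * d_ff
dense_flops_per_token = 2 * dense_params           # ~2 FLOPs per parameter touched

# Sparse (MoE) FFN: 64 experts, but each token is routed to only 2 of them.
num_experts, active_experts = 64, 2
moe_total_params = num_experts * dense_params
moe_flops_per_token = active_experts * 2 * dense_params

print(f"dense: {dense_params/1e6:.0f}M params, {dense_flops_per_token/1e9:.2f} GFLOPs/token")
print(f"MoE:   {moe_total_params/1e6:.0f}M params, {moe_flops_per_token/1e9:.2f} GFLOPs/token")
# Total parameters grow 64x while per-token compute grows only 2x, so the bottleneck
# shifts from raw arithmetic toward memory capacity and routing, exactly the kind of
# architectural swing a fixed-function chip designed years earlier cannot anticipate.
```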
AI's computational needs don't come from initial training alone. They compound across post-training (reinforcement learning) and inference (multi-step reasoning), creating a much larger demand profile than previously understood and driving a billion-fold increase in compute.
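A toy multiplication shows how a few stacked stages reach numbers of that size; the individual multipliers below are assumptions chosen for illustration, not estimates from the source:

```python
# Toy illustration of compounding compute demand. Every multiplier here is an
# assumption for illustration only, not a figure from the source.

growth_factors = {
    "pre-training scale-up": 100,    # assumed growth in pre-training compute
    "RL post-training":      100,    # assumed compute added by reinforcement-learning stages
    "multi-step reasoning":  100,    # assumed extra inference per query from reasoning
    "query volume":         1000,    # assumed growth in served query volume
}

total = 1
for stage, factor in growth_factors.items():
    total *= factor
    print(f"{stage:>22}: x{factor:>5} -> cumulative x{total:,}")
# The cumulative total ends at x1,000,000,000: a billion-fold increase emerges from
# a few multiplicative stages stacked together, not from any single one.
```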
The current 2-3 year chip design cycle is a major bottleneck for AI progress, as hardware is always designed against software needs that are outdated by the time it ships. By using AI to slash this timeline, companies can enable a massive expansion of custom chips, optimizing performance for many at-scale software workloads.
The true commercial impact of AI will likely come from small, specialized "micro models" solving boring, high-volume business tasks. While highly valuable, these models are cheap to run and cannot economically justify the current massive capital expenditure on AGI-focused data centers.