Designing custom AI hardware is a long-term bet. Google's TPU team co-designs chips with ML researchers to anticipate future needs. They aim to build hardware for the models that will be prominent 2-6 years from now, sometimes embedding speculative features that could provide massive speedups if research trends evolve as predicted.
While purpose-built chips (ASICs) like Google's TPU are efficient, the AI industry is still in an early, experimental phase. GPUs offer the programmability and flexibility needed to develop new algorithms, whereas ASICs risk being hard-wired for model families that quickly become obsolete.
Google is abandoning its single-line TPU strategy, now working with both Broadcom and MediaTek on different, specialized TPU designs. This reflects an industry-wide realization that no single chip can be optimal for the diverse and rapidly evolving landscape of AI tasks.
Google successfully trained its top model, Gemini 3 Pro, on its own TPUs, proving that a viable alternative to NVIDIA's chips exists. However, because Google doesn't sell these TPUs, NVIDIA retains its monopoly pricing power over every other company in the market.
Model architecture decisions directly impact inference performance. AI company Zyphra pre-selects target hardware and then chooses model parameters, such as a hidden dimension divisible by large powers of two, so that they align with how GPUs split work into tiles, maximizing efficiency from day one.
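As a rough illustration (a hypothetical sketch, not Zyphra's actual tooling), the snippet below checks how cleanly a candidate hidden dimension divides into the fixed-size tiles that GPU matrix-multiply kernels typically use; dimensions rich in factors of two fill every tile and waste no padded work.

```python
# Hypothetical sketch, not Zyphra's actual tooling: GPU matmul kernels split
# work into fixed-size tiles (commonly powers of two, e.g. 128 wide), so a
# hidden dimension with large power-of-two factors fills every tile exactly.

def largest_power_of_two_factor(n: int) -> int:
    """Largest power of two dividing n (n & -n isolates the lowest set bit)."""
    return n & -n

def padding_waste(hidden_dim: int, tile: int = 128) -> float:
    """Fraction of a tiled matmul spent on padding along this dimension."""
    tiles = -(-hidden_dim // tile)  # ceiling division
    return 1.0 - hidden_dim / (tiles * tile)

# Candidate hidden sizes (illustrative values only).
for dim in (4096, 5120, 5000, 3000):
    print(f"hidden_dim={dim:5d}  pow2_factor={largest_power_of_two_factor(dim):5d}  "
          f"waste_at_128_tile={padding_waste(dim):5.1%}")
```

With these illustrative values, 4096 and 5120 split into 128-wide tiles with zero waste, while 5000 and 3000 pad out roughly 2% of the work before any kernel tuning even begins.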
True co-design between AI models and chips is currently impossible because of an "asymmetric design cycle": AI models evolve much faster than chips can be designed. Using AI to drastically speed up chip design would close that gap and create a virtuous cycle of co-evolution.
NVIDIA's commitment to programmable GPUs over fixed-function ASICs (like a "transformer chip") is a strategic bet on rapid AI innovation. Since models are evolving so quickly (e.g., hybrid SSM-transformers), a flexible architecture is necessary to capture future algorithmic breakthroughs.
OpenAI is designing its custom chip for flexibility, not just raw performance on current models. The team learned that the largest, 100x-scale efficiency gains come from algorithmic evolution (e.g., moving from dense to sparse transformers), so the hardware must remain adaptable to future architectural changes.
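As a back-of-envelope illustration (assumed numbers, not OpenAI's internal analysis), comparing dense attention against a local sparse variant shows where order-of-magnitude algorithmic gains come from:

```python
# Assumed, illustrative numbers: dense attention scores every token pair,
# while a local sparse pattern limits each token to a fixed window, so the
# gain grows with sequence length rather than with faster hardware.
seq_len = 32_768   # context length (assumption for illustration)
window  = 512      # local attention window (assumption for illustration)

dense_pairs  = seq_len * seq_len   # O(n^2) token pairs scored
sparse_pairs = seq_len * window    # O(n * w) token pairs scored

print(f"attention work reduced by ~{dense_pairs / sparse_pairs:.0f}x")  # ~64x here
```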
The current 2-3 year chip design cycle is a major bottleneck for AI progress: hardware is always chasing software needs that have already moved on. Using AI to slash this timeline would enable a massive expansion of custom chips, each optimized for a different at-scale software workload.
Google created its custom TPU chip not as a long-term strategy, but from an internal crisis. Engineer Jeff Dean calculated that scaling a new speech recognition feature to all Android phones would require doubling Google's entire data center footprint, forcing the company to design a more efficient, custom chip to avoid existential costs.
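The flavor of that calculation can be reproduced as a back-of-envelope estimate; every figure below is a placeholder assumption, not the number Dean actually used.

```python
# Placeholder assumptions throughout; the point is the shape of the estimate,
# not the real figures behind the TPU decision.
users                  = 1_000_000_000  # assumed Android devices using the feature
speech_min_per_day     = 3              # assumed minutes of speech per user per day
cpu_sec_per_speech_min = 30             # assumed CPU-seconds to recognize one minute
seconds_per_day        = 24 * 60 * 60

extra_cpu_seconds = users * speech_min_per_day * cpu_sec_per_speech_min
extra_servers = extra_cpu_seconds / seconds_per_day  # fully-utilized CPU-equivalents
print(f"~{extra_servers:,.0f} additional always-busy CPU-equivalents needed")
```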
While competitors like OpenAI must buy GPUs from NVIDIA, Google trains its frontier AI models (like Gemini) on its own custom Tensor Processing Units (TPUs). This vertical integration gives Google a significant, often overlooked, strategic advantage in cost, efficiency, and long-term innovation in the AI race.