For its next-generation V7 TPU AI chip, Google is diversifying its supply chain: it is retaining incumbent Broadcom for the complex 'training' variant while bringing in lower-cost entrant MediaTek for the 'inference' variant. This strategy mitigates supply risk while keeping the most critical IP with a trusted partner.
Google's strategy isn't just to sell AI chips; it's a platform play. By offering its powerful and potentially cheaper TPUs to outside companies, Google creates a strong incentive for those customers to run their entire AI workloads on Google Cloud, building a sticky, integrated ecosystem that challenges AWS and Azure.
Google is offering its TPUs externally for the first time as a strategic move to gain market share while it has a temporary hardware advantage over Nvidia. This classic tactic aims to build a crucial install base that can be upgraded later, even after its competitive performance edge inevitably narrows.
The competitive landscape for AI chips is not a crowded field but a battle between two primary forces: NVIDIA's integrated system (hardware, software, networking) and Google's TPU. Other players such as AMD and Broadcom effectively form a secondary challenger bloc offering a more open alternative.
Google is abandoning its single-line TPU strategy, now working with both Broadcom and MediaTek on different, specialized TPU designs. This reflects an industry-wide realization that no single chip can be optimal for the diverse and rapidly evolving landscape of AI tasks.
Google training its top model, Gemini 3 Pro, on its own TPUs demonstrates a viable alternative to NVIDIA's chips. However, because Google does not sell its TPUs, NVIDIA remains the only supplier available to every other company, effectively maintaining monopoly pricing power over the rest of the market.
Anthropic's choice to purchase Google's TPUs via Broadcom, rather than directly or by designing its own chips, indicates a new phase in the AI hardware market. It highlights the rise of specialized manufacturers as key suppliers, creating a more complex and diversified hardware ecosystem beyond just Nvidia and the major AI labs.
OpenAI is actively diversifying its partners across the supply chain—multiple cloud providers (Microsoft, Oracle), GPU designers (Nvidia, AMD), and foundries. This classic "commoditize your complements" strategy prevents any single supplier from gaining excessive leverage or capturing all the profit margin.
To mitigate dependency on NVIDIA, Meta is actively diversifying its AI hardware supply chain. It signed a major deal to use Google's Tensor Processing Units (TPUs), which are pitched as a viable and potentially more cost-effective alternative for training large-scale AI models.
Google created its custom TPU chip not as a long-term strategy, but from an internal crisis. Engineer Jeff Dean calculated that scaling a new speech recognition feature to all Android phones would require doubling Google's entire data center footprint, forcing the company to design a more efficient, custom chip to avoid existential costs.
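The scaling logic behind that anecdote can be captured in a quick back-of-envelope calculation. The sketch below is purely illustrative: the device count, usage, per-minute compute cost, and fleet capacity are assumptions chosen to show the shape of the math, not figures from the source.

```python
# Back-of-envelope sketch of the capacity math described above.
# Every number here is an illustrative assumption, not a figure from the source.

android_phones = 1e9                  # assumed number of Android devices
minutes_per_day = 3                   # assumed daily voice-recognition usage per device
flops_per_minute_of_speech = 1e12     # assumed compute cost per minute of audio

daily_flops_needed = android_phones * minutes_per_day * flops_per_minute_of_speech

existing_fleet_flops_per_day = 2e21   # assumed aggregate daily compute of the existing fleet

extra_capacity_fraction = daily_flops_needed / existing_fleet_flops_per_day
print(f"Extra capacity needed: {extra_capacity_fraction:.0%} of the current fleet")
# With these assumptions the single feature would demand ~150% more capacity,
# i.e. on the order of doubling the data center footprint on general-purpose
# hardware, which is the kind of result that motivates a custom, more efficient chip.
```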
While competitors like OpenAI must buy GPUs from NVIDIA, Google trains its frontier AI models (like Gemini) on its own custom Tensor Processing Units (TPUs). This vertical integration gives Google a significant, often overlooked, strategic advantage in cost, efficiency, and long-term innovation in the AI race.