Tech giants often initiate custom chip projects not with the primary goal of mass deployment, but to create negotiating power against incumbents like NVIDIA. The threat of a viable alternative is enough to secure better pricing and allocation, making the R&D cost a strategic investment.

Related Insights

By funding and backstopping CoreWeave, which exclusively uses its GPUs, NVIDIA establishes its hardware as the default for the AI cloud. This gives NVIDIA leverage over major customers like Microsoft and Amazon, who are developing their own chips. It makes switching to proprietary silicon more difficult, creating a competitive moat based on market structure, not just technology.

NVIDIA's staggering revenue growth and 56% net profit margins are a direct cost borne by its largest customers (AWS, Google, OpenAI). This incentivizes them to form a de facto alliance to develop and adopt alternative chips, commoditizing the accelerator market and reclaiming those profits.
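
A back-of-envelope sketch shows the size of the prize. The 56% margin comes from the figure above; the spend level and the assumed "commodity" margin are purely illustrative assumptions:

    # Rough sketch of the profit pool buyers could reclaim.
    # Only the 56% margin is from the text; the spend figure and
    # the 'commodity' margin are illustrative assumptions.
    gpu_spend = 10e9          # hypothetical annual GPU spend by one buyer
    nvidia_margin = 0.56      # net margin cited above
    commodity_margin = 0.15   # assumed margin in a commoditized market

    profit_now = gpu_spend * nvidia_margin
    profit_commodity = gpu_spend * commodity_margin
    print(f"supplier profit today:       ${profit_now:,.0f}")
    print(f"in a commoditized market:    ${profit_commodity:,.0f}")
    print(f"reclaimable per $10B spend:  ${profit_now - profit_commodity:,.0f}")
    # ~$4.1B of supplier profit per $10B of spend under these assumptions,
    # which is the scale of the incentive behind the alliance.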

Unlike competitors focused on vertical integration, Microsoft pursues a "hyperscaler" strategy that prioritizes supporting a long tail of diverse customers and models, which makes a hyper-optimized in-house chip less urgent. Furthermore, its IP rights to OpenAI's hardware efforts give it access to cutting-edge designs without bearing all the development risk.

For a hyperscaler, the main benefit of designing a custom AI chip isn't necessarily superior performance, but control. It allows the company to escape the supply allocations dictated by NVIDIA and chart its own course, even if its chip is slightly less performant or more expensive to deploy.

Google training its top model, Gemini 3 Pro, on its own TPUs demonstrates a viable alternative to NVIDIA's chips. However, because Google does not sell its TPUs externally, NVIDIA remains the only merchant supplier available to every other company, effectively preserving its monopoly pricing power over the rest of the market.

Hyperscalers face a strategic challenge: building massive data centers with current chips (e.g., H100) risks rapid depreciation as far more efficient chips (e.g., GB200) are imminent. This creates a 'pause' as they balance fulfilling current demand against future-proofing their costly infrastructure.
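
A rough sketch of that depreciation math, using purely illustrative numbers (a hypothetical next-gen part with ~3x the effective throughput at ~2x the price, fleets written off over four years):

    # Back-of-envelope view of the buy-now-vs-wait tradeoff.
    # All figures are illustrative assumptions, not vendor specs.
    def cost_per_unit_compute(capex: float, relative_throughput: float,
                              useful_years: float) -> float:
        """Annualized capex per unit of effective training throughput."""
        return capex / useful_years / relative_throughput

    current_gen = cost_per_unit_compute(capex=30_000,
                                        relative_throughput=1.0,
                                        useful_years=4)
    next_gen = cost_per_unit_compute(capex=60_000,
                                     relative_throughput=3.0,
                                     useful_years=4)
    print(f"current gen: ${current_gen:,.0f} per throughput-unit-year")
    print(f"next gen:    ${next_gen:,.0f} per throughput-unit-year")
    # Under these assumptions the next-gen part is ~33% cheaper per unit
    # of compute, so a fleet bought today loses much of its economic value
    # the moment the successor ships -- hence the purchasing 'pause'.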

Even if Google's TPU doesn't win significant market share, its existence as a viable alternative gives large customers like OpenAI critical leverage. The mere threat of switching to TPUs forces NVIDIA to offer more favorable terms, such as discounts or strategic equity investments, effectively capping its pricing power.

Major AI labs aren't just evaluating Google's TPUs on technical merit; they are using the mere threat of adopting a viable alternative to extract significant concessions from NVIDIA. This strategic leverage forces NVIDIA to offer better pricing, priority access, or other favorable terms to maintain its market dominance.

The competitive threat from custom ASICs is being neutralized as NVIDIA evolves from a GPU company to an "AI factory" provider. It is now building its own specialized chips (e.g., CPX) for niche workloads, turning the ASIC concept into a feature of its own disaggregated platform rather than an external threat.

While competitors like OpenAI must buy GPUs from NVIDIA, Google trains its frontier AI models (like Gemini) on its own custom Tensor Processing Units (TPUs). This vertical integration gives Google a significant, often overlooked, strategic advantage in cost, efficiency, and long-term innovation in the AI race.