
By launching its own CPU and competing directly with its licensing customers like NVIDIA and Qualcomm, Arm is creating a conflict of interest. This bold move could push its own partners to adopt open-source alternatives like RISC-V to de-risk their supply chains and avoid dependency on a direct competitor.

Related Insights

NVIDIA is moving "up the stack" from chips to an AI agent software platform to diversify its business and create a new moat beyond its CUDA system. By courting enterprise partners, NVIDIA aims to maintain infrastructure dominance even if AI labs succeed with their own custom silicon, reducing reliance on NVIDIA GPUs.

Tech giants often initiate custom chip projects not with the primary goal of mass deployment, but to create negotiating power against incumbents like NVIDIA. The threat of a viable alternative is enough to secure better pricing and allocation, making the R&D cost a strategic investment.

NVIDIA's commitment to CUDA's backward compatibility prevents it from making fundamental changes to its chip architecture. This creates an opportunity for new players like MatX to build chips from a blank slate, optimized purely for modern LLM workloads without being tied to a decade-old programming model.

Large tech companies are actively diversifying their AI chip supply to avoid lock-in with NVIDIA. However, the true challenge isn't just hardware performance. NVIDIA's powerful moat is its extensive software and developer ecosystem, which competitors must also build to truly break free from its market dominance.

For a hyperscaler, the main benefit of designing a custom AI chip isn't necessarily superior performance, but gaining control. It allows them to escape the supply allocations dictated by NVIDIA and chart their own course, even if their chip is slightly less performant or more expensive to deploy.

TSMC's "pure-play foundry" model, where it only manufactures chips and doesn't design its own, builds deep trust. Customers like Apple and NVIDIA can share sensitive designs without fear of competition, unlike with rivals Intel and Samsung who have their own chip products.

For its next-generation v7 TPU AI chip, Google is diversifying its supply chain. It's retaining incumbent Broadcom for the complex "training" version while bringing in low-cost entrant MediaTek for the "inference" version. This sophisticated strategy mitigates supply risk while keeping critical IP with a trusted partner.

NVIDIA is heavily investing in its own open-source models like Nemotron. This strategy ensures that as the open-source ecosystem grows, demand for its hardware also grows, positioning NVIDIA's chips as the default platform and reducing reliance on closed-source model providers who act as intermediaries.

Limiting chip exports to certain nations will force them to develop their own parallel hardware and software. This bifurcation creates a new global competitor and risks making the West's technology stack obsolete if the rival ecosystem becomes dominant.

OpenAI's compute deal with Cerebras, alongside deals with AMD and NVIDIA, shows that major AI buyers are aggressively diversifying their chip supply. This creates a massive opportunity for smaller, specialized silicon teams, heralding a new competitive era reminiscent of the PC wars.