We scan new podcasts and send you the top 5 insights daily.
CEO Chuck Robbins credits the 2016 acquisition of Israeli silicon company Leaba as the critical move that lets Cisco compete for hyperscaler and AI business. The in-house capability to design high-performance networking silicon differentiates Cisco from competitors that rely on generic merchant silicon, giving it a key strategic advantage.
Cisco differentiates its networking business from NVIDIA's by focusing on connecting clusters across a data center ('scale-out') and connecting separate data centers ('scale-across'). NVIDIA primarily dominates 'scale-up' networking within a single rack. This complementary approach allows Cisco to partner with NVIDIA while still carving out its own massive market.
While AI training is concentrated in large data centers, Cisco's CEO sees the shift to AI inference as a massive growth opportunity. Inference will run at distributed edge locations close to users, requiring robust, high-performance networks to connect everything, which plays directly into the company's core strengths.
Cisco's SVP Vijoy Pandey reframes the company's core identity as enabling horizontal 'scale-out' through distributed systems. This directly contrasts with the dominant AI trend of 'scaling up' by creating ever-larger, monolithic models, positioning Cisco to power a future of collaborative, distributed AI.
Unlike in the past, when Cisco could build general-purpose silicon for all customers, the immense and specific demands of hyperscalers' AI workloads require custom chip designs. Each major cloud provider effectively becomes a unique market demanding bespoke technology, fundamentally changing the hardware design process.
With AI infrastructure spend topping $100B annually, hyperscalers like Amazon and Google are vertically integrating. They now manage everything from data center construction and micro-nuclear power to designing their own custom chips. For them, custom silicon has become a 'rounding error' in their budget and a key strategy to optimize costs.
When developing new technologies like networking for space data centers, Cisco's CEO aims for a strategic balance. He wants to be a leader in the new market but avoids the high-risk, high-cost position of being the absolute first mover, letting others prove out the most fundamental concepts first.
Tech giants often initiate custom chip projects not primarily to deploy them at scale, but to create negotiating leverage against incumbents like NVIDIA. The credible threat of a viable alternative is enough to secure better pricing and allocation, making the R&D cost a strategic investment.
Cisco's OutShift incubator focuses on enabling distributed systems rather than building monolithic ones. Their strategy for both AI and quantum computing is not to create the most powerful single agent or computer, but to build the network fabric that connects them all.
Cisco is benefiting from the AI build-out on the networking side. Despite a market overreaction to a small margin dip, the company posted strong earnings and guidance. Its successful integration of Splunk and foundational role in networking make it an attractive, undervalued AI investment.
For a hyperscaler, the main benefit of designing a custom AI chip isn't necessarily superior performance, but gaining control. It allows them to escape the supply allocations dictated by NVIDIA and chart their own course, even if their chip is slightly less performant or more expensive to deploy.