Cisco's SVP Vijoy Pandey reframes the company's core identity as enabling horizontal 'scale-out' through distributed systems. This directly contrasts with the dominant AI trend of 'scaling up' by creating ever-larger, monolithic models, positioning Cisco to power a future of collaborative, distributed AI.

Related Insights

To create an integrated product suite, Cisco dismantled divisional silos and restructured into a platform-based organization. An org chart directly dictates product architecture, so leaders must design their organization to produce the desired integrated outcome, not just individual products.

Cisco differentiates its networking business from NVIDIA's by focusing on connecting clusters across a data center ('scale-out') and connecting separate data centers ('scale-across'). NVIDIA primarily dominates 'scale-up' networking within a single rack. This complementary approach allows Cisco to partner with NVIDIA while still carving out its own massive market.

Contrary to fears of a monopoly, the AI market is heading toward a diverse ecosystem. The proliferation of open-weight models and specialized tooling allows companies to build and control their own differentiated AI systems rather than simply renting intelligence token-by-token from a handful of large labs.

Cisco's Outshift incubator focuses on enabling distributed systems rather than building monolithic ones. Its strategy for both AI and quantum computing is not to create the most powerful single agent or computer, but to build the network fabric that connects them all.

The current focus on building massive, centralized AI training clusters represents the 'mainframe' era of AI. The next three years will see a shift toward a distributed model, similar to computing's move from mainframes to PCs. This involves pushing smaller, efficient inference models out to a wide array of devices.

Legacy companies are siloed, creating IT "spaghetti" that blocks AI progress. In contrast, AI-native organizations structure themselves around a central "AI factory" or unified data platform. Business units function like apps on an iPhone, accessing shared, controlled data to rapidly innovate and deploy new services.

Human intelligence leaped forward when language enabled horizontal scaling (collaboration). Current AI development is focused on vertical scaling (creating bigger 'individual genius' models). The next frontier is distributed AI that can share intent, knowledge, and innovation, mimicking humanity's cognitive evolution.

The AI industry has focused on 'vertical scaling'—building bigger models with more parameters. Vijoy Pandey argues the untapped opportunity is in 'horizontal scaling.' This involves enabling teams of specialized agents to collaborate, creating a collective intelligence greater than any single model.

Unlike rivals building massive, centralized campuses, Google leverages its advanced proprietary fiber networks to train single AI models across multiple, smaller data centers. This provides greater flexibility in site selection and resource allocation, creating a durable competitive edge in AI infrastructure.