Truly massive database companies emerge only every ~15 years, when three conditions are met: a new ubiquitous workload (like AI), a new underlying storage architecture that predecessors can't adopt (like NVMe SSDs and S3), and a long-term roadmap for handling all possible data queries.

Related Insights

History shows that major technological shifts like the internet and AI require a fundamental re-architecting of everything from silicon and networking up to software. The industry repeatedly forgets this lesson, mistakenly declaring parts of the stack, such as hardware, commoditized right before the next wave hits.

Contrary to conventional wisdom, MongoDB's CEO reveals that enterprise leaders have a surprising appetite for full system replacement. An AI-native company that can replace an entire legacy system of record, making it cheaper, faster, and better, will get a leader's attention far more effectively than one offering an incremental feature layer on top of an existing platform.

AI agents make it dramatically easier to extract and migrate data from platforms, reducing vendor lock-in. In response, platforms like Snowflake are embracing open file formats (e.g., Iceberg), shifting the competitive basis from data gravity to superior performance, cost, and features.

The long-sought goal of "information at your fingertips," envisioned by Bill Gates, wasn't achieved through structured databases as expected. Instead, large neural networks unexpectedly became the key, capable of finding patterns in messy, unstructured enterprise data where rigid schemas failed.

To build a multi-billion dollar database company, you need two things: a new, widespread workload (like AI needing data) and a fundamentally new storage architecture that incumbents can't easily adopt. This framework helps identify truly disruptive infrastructure opportunities.

Databricks is raising massive rounds to build an AI offering that rivals cloud giants like AWS. This shifts the primary competitive landscape from a focused battle with Snowflake to a broader war for the enterprise AI agent market, explaining their aggressive fundraising and strategy.

The common belief is that AI decisions are driven by compute hardware. However, NetApp's Keith Norbie argues that the critical success factor is the underlying data platform. Since most enterprise data already resides on platforms like NetApp, preparing that data for training and deployment matters more than the choice of server.

The current moment is ripe for building new horizontal software giants due to three converging paradigm shifts: a move to outcome-based pricing, AI completing end-to-end tasks as the new unit of value, and a shift from structured schemas to dynamic, unstructured data models.

Dell's CTO identifies a new architectural component: the "knowledge layer" (vector DBs, knowledge graphs). Unlike traditional data architectures, this layer should be placed near the dynamic AI compute (e.g., on an edge device) rather than the static primary data, as it's perpetually hot and used in real-time.

Since 2022, AI has created a pivotal moment where the long-term value of existing software is being questioned by both investors and customers. MongoDB's CEO asserts that in this new stack, only two layers feel certain to endure: the foundational data layer where information is stored and the LLM layer that provides intelligence. Everything in between must now re-prove its value.