Despite a decade of industry focus on technologies like Kubernetes, the vast majority of software still runs on older platforms such as virtual machines. Production technology has incredible inertia, staying in use for decades longer than people expect. This means infrastructure products must address the 'old' world, not just the new and hyped one.
The evolution from physical servers to virtualization and containers adds layers of abstraction. These layers don't make the lower levels obsolete; they create a richer stack with more places to innovate and add value. Whether it's developer tools at the top or kernel optimization at the bottom, each layer presents a distinct business opportunity.
To build a multi-billion dollar database company, you need two things: a new, widespread workload (like AI needing data) and a fundamentally new storage architecture that incumbents can't easily adopt. This framework helps identify truly disruptive infrastructure opportunities.
While network effects drive consolidation in tech, a powerful counter-force prevents monopolies. Large enterprise customers intentionally support multiple major players (e.g., AWS, GCP, Azure) to avoid vendor lock-in and maintain negotiating power, naturally creating a market with two to three leaders.
The common belief is that success with AI comes down to compute hardware. However, NetApp's Keith Norbie argues the critical success factor is the underlying data platform. Since most enterprise data already resides on platforms like NetApp, preparing that data for training and deployment is more crucial than the choice of server.
To operate thousands of GPUs across multiple clouds and data centers, Fal found Kubernetes insufficient. They had to build their own proprietary stack, including a custom orchestration layer, distributed file system, and container runtimes to achieve the necessary performance and scale.
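The details of Fal's stack aren't spelled out here, but the core scheduling problem their orchestration layer has to solve is easy to illustrate. The sketch below is a minimal, hypothetical placement loop in Python: given GPU capacity across several clouds and data centers, it assigns each job to a region with enough free GPUs and the lowest utilization. All names (`Region`, `Job`, `place_job`) and numbers are illustrative assumptions, not Fal's actual implementation.

```python
# Hypothetical sketch of the placement problem a cross-cloud GPU
# orchestrator must solve; NOT Fal's actual orchestration layer.
from dataclasses import dataclass

@dataclass
class Region:
    name: str            # e.g. "cloud-a-us-east", "own-dc-2"
    total_gpus: int
    used_gpus: int = 0

    @property
    def free_gpus(self) -> int:
        return self.total_gpus - self.used_gpus

@dataclass
class Job:
    job_id: str
    gpus_needed: int

def place_job(job: Job, regions: list[Region]) -> Region | None:
    """Pick a region with enough free GPUs and the lowest utilization."""
    candidates = [r for r in regions if r.free_gpus >= job.gpus_needed]
    if not candidates:
        return None  # a real scheduler would queue, preempt, or burst elsewhere
    best = min(candidates, key=lambda r: r.used_gpus / r.total_gpus)
    best.used_gpus += job.gpus_needed
    return best

if __name__ == "__main__":
    fleet = [Region("cloud-a", 512),
             Region("cloud-b", 256, used_gpus=200),
             Region("own-dc", 1024, used_gpus=900)]
    for job in (Job("train-llm", 128), Job("batch-inference", 64)):
        chosen = place_job(job, fleet)
        print(job.job_id, "->", chosen.name if chosen else "queued")
```

A production system layers bin-packing, topology awareness, preemption, and data locality on top of a loop like this, which is the kind of scale at which a stock Kubernetes scheduler reportedly fell short for them.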
Don't try to compete with hyperscalers like AWS or GCP on their home turf. Instead, differentiate by focusing on areas they inherently neglect, such as multi-cloud management and hybrid on-premise integration. The winning strategy is to fit into and augment a customer's existing cloud strategy, not attempt to replace it.
While modern UIs are essential, the backend IBM i (AS/400) platform remains entrenched in many businesses. The reason is its extreme reliability and stability, qualities that would take massive, difficult, and expensive custom software development to replicate on open systems like Linux.
The primary reason multi-million dollar AI initiatives stall or fail is not the sophistication of the models, but the underlying data layer. Traditional data infrastructure creates delays in moving and duplicating information, preventing the real-time, comprehensive data access required for AI to deliver business value. The focus on algorithms misses this foundational roadblock.
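To make the delay from moving and duplicating information concrete, here is a purely illustrative Python sketch. It models a copy-based pipeline (batch window, extract and transfer, transform and load, feature refresh) and adds up how stale the data is by the time a model reads it; the stage names and durations are hypothetical, not measurements from any real system.

```python
# Illustrative only: how copy-based data movement accumulates staleness
# before an AI system ever sees the data. Stages and durations are hypothetical.
from datetime import timedelta

pipeline = {
    "wait for nightly batch window": timedelta(hours=12),  # worst case
    "extract + transfer to warehouse": timedelta(hours=2),
    "transform / dedupe / load": timedelta(hours=3),
    "feature table refresh": timedelta(hours=6),
}

total = sum(pipeline.values(), timedelta())
for stage, delay in pipeline.items():
    print(f"{stage:<35} +{delay}")
print(f"{'data age when the model reads it':<35} {total}")  # 23 hours with these numbers
```

Architectures that query or stream data in place collapse most of these hops, which is the real-time, comprehensive access the paragraph above refers to.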
The rise of public cloud was driven by a business model innovation as much as a technological one. The core battle was between owning infrastructure (capex) and renting it (opex) with fractional consumption. This shift in how customers consume and pay for services was the key disruption.
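The capex-versus-opex trade is ultimately arithmetic, so a small worked example helps. The numbers below (server price, lifetime, hourly rental rate, utilization levels) are invented purely for illustration; the point is that fractional consumption wins whenever utilization is low.

```python
# Hypothetical numbers, for illustration only: owning (capex) vs renting (opex).
server_price = 20_000          # USD, paid up front
lifetime_hours = 3 * 365 * 24  # amortized over 3 years
rental_rate = 1.50             # USD per hour, on-demand

owned_cost_per_hour = server_price / lifetime_hours  # paid whether used or not

for utilization in (0.05, 0.25, 1.00):
    # Renting: you pay only for the fraction of hours you actually use.
    rented_effective = rental_rate * utilization
    print(f"utilization {utilization:4.0%}: "
          f"own ≈ ${owned_cost_per_hour:.2f}/h vs rent ≈ ${rented_effective:.2f}/h")
```

With these assumed numbers, renting is far cheaper at 5% or 25% utilization and only loses to ownership near constant utilization, which is exactly the consumption shift the paragraph describes.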
The excitement around AI capabilities often masks the real hurdle to enterprise adoption: infrastructure. Success is not determined by the model's sophistication, but by first solving foundational problems of security, cost control, and data integration. This requires a shift from an application-centric to an infrastructure-first mindset.