While today's focus is on text-based LLMs, the true, defensible AI battleground will be in complex modalities like video. Generating video requires multiple interacting models and unique architectures, creating far greater potential for differentiation and a wider competitive moat than text-based interfaces, which will become commoditized.
To build a durable business on top of foundation models, go beyond a simple API call. Gamma creates a moat by deeply owning an entire workflow (visual communication) and orchestrating over 20 different specialized AI models, each chosen for a specific sub-task in the user journey.
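Gamma's internal architecture isn't public, but the general pattern of routing each sub-task to a specialized model can be sketched. A minimal illustration with hypothetical task names and stand-in model callables:

```python
# Sketch of per-task model orchestration: each step in a workflow is routed
# to a model chosen for that sub-task, rather than one generic model for all.
# Task names and model IDs below are hypothetical, not Gamma's actual stack.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelSpec:
    model_id: str                # provider-specific identifier (illustrative)
    call: Callable[[str], str]   # wraps whatever client that provider exposes

# Registry: best-of-breed model per sub-task in the user journey.
REGISTRY: dict[str, ModelSpec] = {
    "outline_text":   ModelSpec("text-model-a",  lambda p: f"[outline: {p}]"),
    "generate_image": ModelSpec("image-model-b", lambda p: f"[image: {p}]"),
    "layout_slides":  ModelSpec("layout-model-c", lambda p: f"[layout: {p}]"),
}

def run_workflow(prompt: str) -> dict[str, str]:
    """Orchestrate the sub-tasks, dispatching each to its specialized model."""
    return {task: spec.call(prompt) for task, spec in REGISTRY.items()}
```

The moat in this pattern is the registry and routing logic itself: which model handles which step, and how their outputs are stitched together, is product knowledge no single model provider owns.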
Don't view generative AI video as just a way to make traditional films more efficiently. Ben Horowitz sees it as a fundamentally new creative medium, much like movies were to theater. It enables entirely new forms of storytelling by making visuals that once required massive budgets accessible to anyone.
Fal strategically chose not to compete in LLM inference against giants like OpenAI and Google. Instead, they focused on the "net new market" of generative media (images, video), allowing them to become a leader in a fast-growing, less contested space.
The "AI wrapper" concern is mitigated by a multi-model strategy. A startup can integrate the best models from various providers for different tasks, creating a superior product. A platform like OpenAI is incentivized to use only its own models, creating a durable advantage for the startup.

Simply offering the latest model is no longer a competitive advantage. True value is created in the system built around the model—the system prompts, tools, and overall scaffolding. This 'harness' is what optimizes a model's performance for specific tasks and delivers a superior user experience.
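The "harness" idea can be made concrete with a small sketch. The model call is one line; the system prompt, tool definitions, and result handling around it are the differentiating layer. `call_model` below is a stand-in for any provider's chat API, and the simulated tool call is illustrative, not a real API response:

```python
# Sketch of a harness: system prompt + tools + post-processing around a model.
# The model itself is interchangeable; the scaffolding is the product.

SYSTEM_PROMPT = "You are a spreadsheet assistant. Use tools for any arithmetic."

# Deterministic tools the harness exposes to the model.
TOOLS = {
    "sum_values": lambda values: sum(values),
}

def call_model(system: str, user: str) -> dict:
    # Stand-in for a real provider API call. Here we simulate the model
    # choosing to invoke a tool rather than answering directly.
    return {"tool": "sum_values", "args": {"values": [1, 2, 3]}}

def run_harness(user_msg: str):
    reply = call_model(SYSTEM_PROMPT, user_msg)
    if reply.get("tool") in TOOLS:
        # The harness, not the model, executes the tool and owns the result,
        # so correctness doesn't depend on the model doing math in-context.
        return TOOLS[reply["tool"]](**reply["args"])
    return reply
```

Swapping in a newer model changes one function; the prompt engineering, tool surface, and execution logic carry over, which is where the durable value sits.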
The notion of building a business as a 'thin wrapper' around a foundational model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.
The enduring moat in the AI stack lies in what is hardest to replicate. Since building foundation models is significantly more difficult than building applications on top of them, the model layer is inherently more defensible and will naturally capture more value over time.
Current multimodal models shoehorn visual data into a 1D text-based sequence. True spatial intelligence is different. It requires a native 3D/4D representation to understand a world governed by physics, not just human-generated language. This is a foundational architectural shift, not an extension of LLMs.
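The "shoehorning" described above can be shown in a few lines. ViT-style multimodal models cut an image into patches and serialize them row-major into a 1D token sequence, so native 2D adjacency must be re-learned from data. A pure-Python sketch with a hypothetical 4x4 grid of patch IDs:

```python
# Flattening a 2D patch grid into a 1D sequence, as current multimodal
# models do before feeding visual data to a transformer.
def flatten_patches(grid):
    """Row-major flattening: vertical neighbors in 2D land far apart in 1D."""
    return [patch for row in grid for patch in row]

grid = [[0,  1,  2,  3],
        [4,  5,  6,  7],
        [8,  9, 10, 11],
        [12, 13, 14, 15]]

seq = flatten_patches(grid)
# Patches 0 and 4 touch vertically in the grid, but sit 4 positions apart
# in the sequence; 3 and 4 are sequence neighbors yet spatially unrelated.
```

A native 3D/4D representation would keep that spatial structure explicit instead of asking the model to recover it, which is the architectural break with LLMs the passage argues for.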
If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.
Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence—the ability to understand, reason about, and interact with a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus for the next wave of AI models.