Fal strategically chose not to compete in LLM inference against giants like OpenAI and Google. Instead, they focused on the "net new market" of generative media (images, video), allowing them to become a leader in a fast-growing, less contested space.

Related Insights

When evaluating AI startups, don't just consider the current product landscape. Instead, visualize the future state of giants like OpenAI as multi-trillion-dollar companies. Their "sphere of influence" will be vast. The best opportunities are "second-order" companies operating in niches these giants are unlikely to touch.

Startups like Cognition Labs find their edge not by competing on pre-training large models, but by mastering post-training. They build specialized reinforcement learning environments that teach models specific, real-world workflows (e.g., using Datadog for debugging), creating a defensible niche that larger players overlook.
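A minimal sketch of what such an environment could look like, using the Gymnasium API: the model being fine-tuned must pick the log query that surfaces the failing service, a toy stand-in for a Datadog-style debugging step. The class name and query strings are hypothetical assumptions, not Cognition's actual setup.

```python
import random
import gymnasium as gym
from gymnasium import spaces

# Hypothetical post-training environment: reward the agent for choosing the
# log query that would surface the failing service. Illustrative only.
class LogTriageEnv(gym.Env):
    QUERIES = ["status:error service:api", "status:error service:web", "status:error service:db"]

    def __init__(self):
        self.observation_space = spaces.Discrete(len(self.QUERIES))  # index of failing service
        self.action_space = spaces.Discrete(len(self.QUERIES))       # which query to run
        self._failing = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._failing = random.randrange(len(self.QUERIES))
        return self._failing, {}

    def step(self, action):
        # Positive reward only if the chosen query targets the failing service.
        reward = 1.0 if action == self._failing else -0.1
        return self._failing, reward, True, False, {}

env = LogTriageEnv()
obs, _ = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
print(reward)
```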

While today's focus is on text-based LLMs, the true, defensible AI battleground will be in complex modalities like video. Generating video requires multiple interacting models and unique architectures, creating far greater potential for differentiation and a wider competitive moat than text-based interfaces, which will become commoditized.
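A toy sketch of that multi-model structure, with stage names, shapes, and stubbed outputs that are assumptions rather than any real product's architecture: each stage below (text encoding, latent video generation, frame interpolation, spatial upscaling) would be a separate model in practice, and each is a separate axis of differentiation.

```python
import numpy as np

# Purely illustrative pipeline: every method stands in for a distinct model.
class VideoPipeline:
    def encode_prompt(self, prompt: str) -> np.ndarray:
        return np.random.randn(77, 768)             # text-encoder embedding

    def generate_latents(self, text_emb: np.ndarray, frames: int) -> np.ndarray:
        return np.random.randn(frames, 4, 32, 32)   # low-res latent video model

    def interpolate_frames(self, latents: np.ndarray) -> np.ndarray:
        return np.repeat(latents, 2, axis=0)         # temporal upsampling model

    def upscale(self, latents: np.ndarray) -> np.ndarray:
        return np.kron(latents, np.ones((1, 1, 4, 4)))  # spatial super-resolution model

    def __call__(self, prompt: str) -> np.ndarray:
        text = self.encode_prompt(prompt)
        latents = self.generate_latents(text, frames=8)
        latents = self.interpolate_frames(latents)
        return self.upscale(latents)

video = VideoPipeline()("a drone shot over a coastline")
print(video.shape)  # (16, 4, 128, 128)
```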

The AI industry is not a winner-take-all market. Instead, it's a dynamic "leapfrogging" race where competitors like OpenAI, Google, and Anthropic constantly surpass each other with new models. This prevents a single monopoly and encourages specialization, with different models excelling in areas like coding or current events.

The history of AI tools shows that products launching with fewer restrictions to empower individual developers (e.g., Stable Diffusion) tend to capture mindshare and adoption faster than cautious, locked-down competitors (e.g., DALL-E). Early-stage velocity trumps enterprise-grade caution.

Fal maintains a performance edge by building a specialized just-in-time (JIT) compiler for diffusion models. This verticalized approach, inspired by PyTorch 2.0 but more focused, generates more efficient kernels than generalized tools, creating a defensible technical moat.
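Fal's compiler is proprietary, so as a point of reference here is the generalized PyTorch 2.0 route (torch.compile) that this verticalized approach is contrasted with; the toy denoiser is an assumed stand-in for a real diffusion model, not Fal's implementation.

```python
import torch
import torch.nn as nn

# Toy denoiser used only to demonstrate the generic JIT path.
class TinyDenoiser(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, 4, 3, padding=1),
        )

    def forward(self, latents: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Use the timestep as a simple scalar conditioning signal.
        return self.net(latents) * t.view(-1, 1, 1, 1)

model = TinyDenoiser()
# torch.compile traces the model and emits fused kernels via a general-purpose
# backend. A specialized compiler can go further, e.g. fixing resolutions and
# batch shapes or fusing the entire sampling loop.
compiled = torch.compile(model)

latents = torch.randn(1, 4, 64, 64)
timestep = torch.tensor([0.5])
with torch.no_grad():
    print(compiled(latents, timestep).shape)  # torch.Size([1, 4, 64, 64])
```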

Large platforms focus on massive opportunities right in front of them ("gold bricks at their feet"). They consciously ignore even valuable markets that require more effort ("gold bricks 100 feet away"). This strategic neglect creates defensible spaces for startups in those niche areas.

If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.

Conventional venture capital wisdom of "winner-take-all" may not apply to AI applications. The market is expanding so rapidly that it can sustain multiple, fast-growing, highly valuable companies, each capturing a significant niche. For VCs, this means huge returns don't necessarily require backing a monopoly.

Investing in startups directly adjacent to OpenAI is risky, as OpenAI will inevitably build those features itself. A smarter strategy is backing "second-order effect" companies applying AI to niche, unsexy industries that sit outside the core focus of top AI researchers.