Fal's revenue growth wasn't gradual but occurred in massive leaps tied to specific model releases. SDXL drove their first million in revenue, while Black Forest Labs' Flux models catapulted them from $2M to $10M in a single month.
Fal strategically chose not to compete in LLM inference against giants like OpenAI and Google. Instead, they focused on the "net new market" of generative media (images, video), allowing them to become a leader in a fast-growing, less contested space.
Vercel's CTO Malte Ubl outlines a third way for open source monetization beyond support (Red Hat) or open-core models. Vercel creates truly open libraries to grow the entire ecosystem. They find that as the overall "pie" grows, their relative slice remains constant, leading to absolute revenue growth.
The key to successful open-source AI isn't uniting everyone into a massive project. Instead, EleutherAI's model proves more effective: creating small, siloed teams with guaranteed compute and end-to-end funding for a single, specific research problem. This avoids organizational overhead and ensures completion.
The history of AI tools shows that products launching with fewer restrictions to empower individual developers (e.g., Stable Diffusion) tend to capture mindshare and adoption faster than cautious, locked-down competitors (e.g., DALL-E). Early-stage velocity trumps enterprise-grade caution.
For consumer products like ChatGPT, models are already good enough for common queries. However, for complex enterprise tasks like coding, performance is far from solved. This gives model providers a durable path to sustained revenue growth through continued quality improvements aimed at professionals.
OpenPipe's initial value was clear: GPT-4 was powerful but prohibitively expensive for production. They offered a managed flow to distill expensive workflows into cheaper, smaller models, a pitch that resonated with early customers facing massive OpenAI bills and carried OpenPipe to $1M ARR in eight months.
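A minimal sketch of what that kind of distillation flow can look like, assuming the OpenAI Python SDK and a handful of logged production prompts; the prompts, file name, and student model choice are illustrative, and this is not OpenPipe's actual pipeline:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical logged production prompts to distill from.
logged_prompts = [
    "Summarize this support ticket: ...",
    "Extract the shipping address from: ...",
]

# 1. Capture the expensive "teacher" model's outputs as training targets.
examples = []
for prompt in logged_prompts:
    completion = client.chat.completions.create(
        model="gpt-4",  # the costly teacher
        messages=[{"role": "user", "content": prompt}],
    )
    examples.append({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion.choices[0].message.content},
        ]
    })

# 2. Write the prompt/response pairs in JSONL, the format fine-tuning expects.
with open("distill.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 3. Fine-tune a smaller, cheaper "student" model on the captured behavior.
training_file = client.files.create(file=open("distill.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative student model
)
print("Fine-tune job started:", job.id)
```

The teacher's outputs become supervised targets, so the fine-tuned student reproduces one specific workflow at a fraction of the per-token cost.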
Fal maintains a performance edge by building a specialized just-in-time (JIT) compiler for diffusion models. This verticalized approach, inspired by PyTorch 2.0 but more focused, generates more efficient kernels than generalized tools, creating a defensible technical moat.
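For context, here is a hedged sketch of the generalized path the text contrasts Fal's compiler against: JIT-compiling a diffusion UNet with PyTorch 2.0's torch.compile via the diffusers library. The model checkpoint and compile settings are illustrative; Fal's proprietary compiler itself is not shown.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# torch.compile traces the UNet's graph and emits fused kernels at first run.
# A verticalized compiler can specialize further because it only has to
# handle diffusion workloads, but the interface idea is similar.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Compile the denoising UNet, the hot loop of diffusion inference.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

# The first call pays the compilation cost; later calls reuse the kernels.
image = pipe("a lighthouse at dusk, oil painting", num_inference_steps=30).images[0]
image.save("out.png")
```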
OpenAI has seen no cannibalization from its open source model releases. The use cases, customer profiles, and immense difficulty of operating inference at scale create a natural separation. Open source serves different needs and helps grow the entire AI ecosystem, which benefits the platform leader.
An emerging AI growth strategy involves using expensive frontier models to acquire users and distribution at an explosive rate, accepting poor initial margins. Once critical mass is reached, the company introduces its own fine-tuned, cheaper model, drastically improving unit economics overnight and capitalizing on the established user base.
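A minimal sketch of the architectural pattern that makes that swap painless: keep all inference behind one thin interface so only a configuration value changes when the in-house model arrives. The environment variable, model identifiers, and helper function below are hypothetical.

```python
import os
from openai import OpenAI

client = OpenAI()

# Start on the expensive frontier model to win users, then flip one setting
# to the in-house fine-tuned model once traffic justifies it.
MODEL = os.environ.get("APP_MODEL", "gpt-4")  # later: a fine-tuned model ID

def generate(prompt: str) -> str:
    """Single choke point for inference, so the backing model is swappable."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because the application only ever calls generate(), unit economics improve the moment the cheaper model is slotted in, with no product changes visible to users.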
Rather than offering a model selector, a company can ship a proprietary, branded model that chains different specialized models for sub-tasks (e.g., search, generation) behind a single interface. This not only improves overall performance but also provides business independence from the pricing and launch cycles of any single frontier model lab.
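A hedged sketch of such a chain, assuming an OpenAI-compatible serving layer; the model names, the search_fn hook, and the orchestration are purely illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical sub-task models behind one branded entry point. Each stage can
# be swapped or re-priced independently of any single frontier lab.
SEARCH_MODEL = "query-rewriter-v2"      # illustrative in-house model name
GENERATION_MODEL = "acme-writer-large"  # illustrative in-house model name

def branded_answer(question: str, search_fn) -> str:
    # Stage 1: a small specialized model rewrites the question into a search query.
    query = client.chat.completions.create(
        model=SEARCH_MODEL,
        messages=[{"role": "user", "content": f"Rewrite as a search query: {question}"}],
    ).choices[0].message.content

    # Stage 2: retrieve supporting snippets with a caller-supplied search function
    # (assumed to return a list of strings).
    docs = "\n".join(search_fn(query))

    # Stage 3: a larger generation model drafts the final answer from the evidence.
    return client.chat.completions.create(
        model=GENERATION_MODEL,
        messages=[{
            "role": "user",
            "content": f"Answer '{question}' using only these sources:\n{docs}",
        }],
    ).choices[0].message.content
```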