The inconsistency and 'laziness' of base LLMs are a major hurdle. The best application-layer companies differentiate themselves not just by wrapping a model, but by building a complex harness that reliably applies the right amount of intelligence to a specific user task, creating a defensible product.

Related Insights

Simply offering the latest model is no longer a competitive advantage. True value is created in the system built around the model—the system prompts, tools, and overall scaffolding. This 'harness' is what optimizes a model's performance for specific tasks and delivers a superior user experience.

The notion of building a business as a 'thin wrapper' around a foundation model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.

The key for enterprises isn't integrating general AI like ChatGPT but creating "proprietary intelligence." This involves fine-tuning smaller, custom models on their unique internal data and workflows, creating a competitive moat that off-the-shelf solutions cannot replicate.

While many new AI tools excel at generating prototypes, a significant gap remains to make them production-ready. The key business opportunity and competitive moat lie in closing this gap—turning a generated concept into a full-stack, on-brand, deployable application. This is the 'last mile' problem.

The best application-focused AI companies are born from a need to solve a hard research problem to deliver a superior user experience. This "application-pull" approach, seen in companies like Harvey (RAG) and Runway (models), creates a stronger moat than pursuing research for its own sake.

In a world where AI implementation is becoming cheaper, the real competitive advantage isn't speed or features. It's the accumulated knowledge gained through the difficult, iterative process of building and learning. This "pain" of figuring out what truly works for a specific problem becomes a durable moat.

Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.
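The interchangeability point above can be made concrete: many LLM vendors expose OpenAI-compatible chat endpoints, so switching providers is often just a configuration change rather than a rewrite. A minimal sketch (the provider names, URLs, and model IDs here are illustrative assumptions, not an endorsement of any vendor):

```python
# Sketch: swapping LLM providers behind an OpenAI-style API is a config change.
# Provider entries below are illustrative examples, not a complete or current list.
PROVIDERS = {
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "model": "gpt-4o",
    },
    "together": {
        "base_url": "https://api.together.xyz/v1",
        "model": "meta-llama/Llama-3-70b-chat-hf",
    },
}

def client_config(provider: str) -> dict:
    """Return the settings needed to point an OpenAI-style client at a provider.

    Because the request/response shapes are compatible, the application code
    calling the model does not change when `provider` changes.
    """
    cfg = PROVIDERS[provider]
    return {"base_url": cfg["base_url"], "model": cfg["model"]}
```

This low switching cost is exactly what produces the customer "promiscuity" described above: nothing in the application layer binds the buyer to one model vendor, unlike migrating workloads off AWS or GCP.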

The enduring moat in the AI stack lies in what is hardest to replicate. Since building foundation models is significantly more difficult than building applications on top of them, the model layer is inherently more defensible and will naturally capture more value over time.

The common critique of AI application companies as "GPT wrappers" with no moat is proving false. The best startups are evolving beyond using a single third-party model. They are using dozens of models and, crucially, are backward-integrating to build their own custom AI models optimized for their specific domain.

In enterprise AI, competitive advantage comes less from the underlying model and more from the surrounding software. Features like versioning, analytics, integrations, and orchestration systems are critical for enterprise adoption and create stickiness that models alone cannot.