Vertical AI's Moat Is the "Context Layer" That Unifies Disparate Business Data

Legal AI startup Sandstone's approach shows that the model itself is a commodity. Real defensibility comes from creating a "context layer" that integrates data from CRM, CLM, and communications tools, giving the AI the business context it needs to be truly useful for in-house teams.
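
A minimal sketch of what such a context layer might look like, assuming hypothetical connectors and record shapes (fetch_crm_records, fetch_clm_records, fetch_comms_records, and assemble_context are illustrative names, not Sandstone's actual implementation):

from dataclasses import dataclass

@dataclass
class ContextRecord:
    source: str   # e.g. "crm", "clm", "email"
    entity: str   # the counterparty or matter the record relates to
    summary: str  # text the model can consume directly

def fetch_crm_records(counterparty: str) -> list[ContextRecord]:
    # Placeholder: in practice this would call the CRM API (deal stage, owner, history).
    return [ContextRecord("crm", counterparty, "Active renewal; deal owner: J. Smith")]

def fetch_clm_records(counterparty: str) -> list[ContextRecord]:
    # Placeholder: pull governing contracts and key clauses from the CLM system.
    return [ContextRecord("clm", counterparty, "MSA signed 2023; liability cap 2x fees")]

def fetch_comms_records(counterparty: str) -> list[ContextRecord]:
    # Placeholder: recent email or chat threads relevant to the matter.
    return [ContextRecord("email", counterparty, "Counterparty requested indemnity changes")]

def assemble_context(counterparty: str) -> str:
    # Unify records from every system into one prompt-ready context block
    # that is prepended to the user's question before it reaches the model.
    records = (
        fetch_crm_records(counterparty)
        + fetch_clm_records(counterparty)
        + fetch_comms_records(counterparty)
    )
    lines = [f"[{r.source}] {r.summary}" for r in records]
    return f"Business context for {counterparty}:\n" + "\n".join(lines)

print(assemble_context("Acme Corp"))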

Related Insights

The notion of building a business as a 'thin wrapper' around a foundational model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.

As AI commoditizes user interfaces, enduring value will reside in the backend systems that are the authoritative source of data (e.g., payroll, financial records). These 'systems of record' are sticky due to regulation, business process integration, and high switching costs.

A key competitive advantage for AI companies lies in capturing proprietary outcomes data by owning a customer's end-to-end workflow. This data, such as which legal cases are won or lost, is not publicly available. It creates a powerful feedback loop where the AI gets smarter at predicting valuable outcomes, a moat that general models cannot replicate.

Since LLMs are commodities, sustainable competitive advantage in AI comes from leveraging proprietary data and unique business processes that competitors cannot replicate. Companies must focus on building AI that understands their specific "secret sauce."

As AI makes building software features trivial, the sustainable competitive advantage shifts to data. A true data moat uses proprietary customer interaction data to train AI models, creating a feedback loop that continuously improves the product faster than competitors.

Successful vertical AI applications serve as a critical intermediary between powerful foundation models and specific industries like healthcare or legal. Their core value lies in being a "translation and transformation layer," adapting generic AI capabilities to solve nuanced, industry-specific problems for large enterprises.

Creating a basic AI coding tool is easy. The defensible moat comes from building a vertically integrated platform with its own backend infrastructure, such as databases, user management, and integrations. This is extremely difficult for competitors to replicate, especially if they rely on third-party services like Supabase.

While many legal AI tools use the same foundational models, they differentiate by offering features crucial for law firms: strict permissions, compliance controls, and integrations with proprietary legal databases like Westlaw. This 'packaging' of trust is the real product, for which discerning law firms willingly pay a premium.

Contrary to early narratives, a proprietary dataset is not the primary moat for AI applications. True, lasting defensibility is built by deeply integrating into an industry's ecosystem—connecting different stakeholders, leveraging strategic partnerships, and using funding velocity to build the broadest product suite.

An AI app that is merely a wrapper around a foundation model is at high risk of being absorbed by the model provider. True defensibility comes from integrating AI with proprietary data and workflows to become an indispensable enterprise system of record, like an HR or CRM system.
