For specialized, high-stakes tasks like insurance underwriting, enterprises will favor smaller, on-prem models fine-tuned on proprietary data. These models can be faster, more accurate, and more secure than general-purpose frontier models, creating a lasting market for custom AI solutions.

Related Insights

Recognizing there is no single "best" LLM, AlphaSense built a system to test and deploy various models for different tasks. This allows them to optimize for performance and even stylistic preferences, using different models for their buy-side finance clients versus their corporate users.
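
A minimal sketch of what this kind of routing can look like, assuming a simple lookup keyed on task and user segment; the model names, the `Request` type, and the routing table are invented for illustration and are not AlphaSense's actual system:

```python
# Hypothetical sketch: route each request to a model based on task type
# and user segment. All identifiers here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Request:
    task: str     # e.g. "summarize", "extract", "qa"
    segment: str  # e.g. "buy_side", "corporate"

# Routing table: (task, segment) -> model identifier.
ROUTES = {
    ("summarize", "buy_side"):  "model-a-terse",      # terse, numbers-first style
    ("summarize", "corporate"): "model-b-narrative",  # fuller narrative style
    ("extract",   "buy_side"):  "model-c-extract",
}
DEFAULT_MODEL = "model-general"

def pick_model(req: Request) -> str:
    """Return the model to use, falling back to a general-purpose model."""
    return ROUTES.get((req.task, req.segment), DEFAULT_MODEL)

print(pick_model(Request("summarize", "buy_side")))  # model-a-terse
print(pick_model(Request("qa", "corporate")))        # model-general
```

The design choice worth noting is that the routing table, not the models, becomes the unit of iteration: swapping a model for one (task, segment) pair leaves everything else untouched.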

The notion of building a business as a 'thin wrapper' around a foundation model like GPT is flawed. Truly defensible AI products, like Cursor, build many task-specific fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.

WCM avoids generic AI use cases. Instead, they've built a "research partner" AI model specifically tuned to codify and diagnose their core concepts of "moat trajectory" and "culture." This allows them to amplify their unique edge by systematically flagging changes across a vast universe of data, rather than just automating simple tasks.

Instead of a generalist AI, LinkedIn built a suite of specialized internal agents for tasks like trust reviews, growth analysis, and user research. These agents are trained on LinkedIn's unique historical data and playbooks, providing critiques and insights that external tools cannot match.

The primary driver for fine-tuning isn't cost but necessity. When applications like real-time voice demand low latency, developers are forced to use smaller models. Those smaller models often lack out-of-the-box quality on specific tasks, making fine-tuning a necessary step to reach production-level performance.
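
As one concrete example of what that step can look like, here is a minimal sketch using the OpenAI fine-tuning API to adapt a small base model to a latency-sensitive task; the training file name and the base-model identifier are placeholders, and other providers expose similar workflows:

```python
# Minimal sketch, assuming the OpenAI Python SDK and an API key in the
# environment: supervised fine-tuning of a small, low-latency base model.
from openai import OpenAI

client = OpenAI()

# Training data: a JSONL file of chat-formatted examples from the target
# task (placeholder file name).
train_file = client.files.create(
    file=open("voice_task_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job on a small base model, trading raw capability
# for the latency budget of a real-time voice pipeline.
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder: choose a small model
)
print(job.id, job.status)
```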

Building a single, all-purpose AI is like hiring one person for every company role. To maximize accuracy and creativity, build multiple custom GPTs, each configured for a specific function like copywriting or operations, and have them collaborate.

Basic supervised fine-tuning (SFT) only adjusts a model's style. The real unlock for enterprises is reinforcement fine-tuning (RFT), which leverages proprietary datasets to create state-of-the-art models for specific, high-value tasks, moving beyond mere 'tone improvements.'
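
To make the SFT/RFT distinction concrete: RFT optimizes a model against a grader that scores task outcomes, rather than teaching it to imitate reference text. Below is an illustrative Python sketch of such a grader for a hypothetical underwriting-style triage task; the JSON fields and score weights are invented, not any vendor's actual grader API:

```python
# Illustrative sketch of the kind of reward signal RFT optimizes: grade a
# model's structured output against a proprietary label. Field names and
# weights are hypothetical.
import json

def grade(model_output: str, reference: dict) -> float:
    """Return a reward in [0, 1] comparing output to a labeled example."""
    try:
        parsed = json.loads(model_output)
    except json.JSONDecodeError:
        return 0.0  # unparseable output earns no reward

    score = 0.0
    if parsed.get("risk_tier") == reference["risk_tier"]:
        score += 0.7  # the core decision dominates the reward
    if parsed.get("escalate") == reference["escalate"]:
        score += 0.3  # the secondary decision contributes the rest
    return score

ref = {"risk_tier": "high", "escalate": True}
print(grade('{"risk_tier": "high", "escalate": false}', ref))  # 0.7
```

Note what this buys over SFT: the model is rewarded for getting the decision right by any phrasing, which is why RFT can push past 'tone improvements' on high-value tasks.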

If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.

To balance security with agility, enterprises should run two AI tracks. Let the CIO's office develop secure, custom models for sensitive data while simultaneously empowering business units like marketing to use approved, low-risk SaaS AI tools to maintain momentum and drive immediate value.

Instead of offering users a model selector, a company can build a proprietary, branded model that chains different specialized models for various sub-tasks (e.g., search, generation). This not only improves overall performance but also frees the business from the pricing and launch cycles of any single frontier-model lab.
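
A minimal sketch of such a chain, assuming an internal serving layer; `call_model`, the model names, and the retrieval step are stand-ins for illustration:

```python
# Hypothetical sketch: chain two specialized models behind one branded
# interface, with a search-tuned model feeding a generation-tuned model.
def call_model(name: str, prompt: str) -> str:
    # Stand-in for an inference call to an internally hosted model.
    return f"[{name} output for: {prompt!r}]"

def answer(query: str) -> str:
    # Step 1: a small model tuned for retrieval rewrites the query.
    search_query = call_model("search-tuned-v1", query)
    # Step 2: retrieval over an internal corpus would happen here.
    context = f"docs matching {search_query}"
    # Step 3: a generation-tuned model composes the final answer.
    return call_model("gen-tuned-v2", f"Answer {query!r} using {context}")

print(answer("What changed in Q3 margins?"))
```

Because the chain sits behind one branded endpoint, either stage can be re-tuned or swapped for a cheaper model without users noticing, which is where the independence from any single lab's roadmap comes from.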