Mission-critical industries like finance and drug discovery are hesitant to use major LLMs because they don't want to share proprietary data with a "big brain for all." This creates a significant B2B market gap for custom, private AI models that can be tailored to specific tasks and datasets without compromising privacy or security.

Related Insights

For specialized, high-stakes tasks like insurance underwriting, enterprises will favor smaller, on-prem models fine-tuned on proprietary data. These models can be faster, more accurate, and more secure than general-purpose frontier models, creating a lasting market for custom AI solutions.

The key for enterprises isn't integrating general AI like ChatGPT but creating "proprietary intelligence." This involves fine-tuning smaller, custom models on their unique internal data and workflows, creating a competitive moat that off-the-shelf solutions cannot replicate.

To avoid compliance and security risks, companies in regulated sectors like healthcare and fintech avoid public LLMs. Instead, they use tools like Dashworks to build AI chatbots over their internal documentation, and give developers secure, IDE-integrated tools like Cursor.

Microsoft's case management AI avoids training directly on private customer data. Instead, it operates on a "bring your own knowledge" model, using only the knowledge articles and resources explicitly provided by the customer. This approach sidesteps major privacy and data governance concerns common in enterprise AI adoption.
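The "bring your own knowledge" pattern can be sketched in miniature: the system retrieves answers only from documents the customer explicitly supplies, and nothing is ever folded into model training. The function names and the word-overlap scoring below are illustrative assumptions, not Microsoft's implementation; a production system would use embeddings and a vector store.

```python
# Minimal "bring your own knowledge" sketch: answer only from documents the
# customer explicitly provides; customer data is never used for training.
# Names and the overlap-based scoring are illustrative assumptions.

def tokenize(text: str) -> set[str]:
    """Lowercase word set, ignoring trailing punctuation."""
    return {w.strip(".,!?").lower() for w in text.split()}

def retrieve(query: str, knowledge_articles: list[str], top_k: int = 1) -> list[str]:
    """Rank customer-provided articles by word overlap with the query."""
    q = tokenize(query)
    scored = sorted(knowledge_articles,
                    key=lambda doc: len(q & tokenize(doc)),
                    reverse=True)
    return scored[:top_k]

# Customer-supplied knowledge base (stays inside their environment):
articles = [
    "Refund requests must be filed within 30 days of purchase.",
    "Enterprise support tickets are triaged within 4 business hours.",
]

context = retrieve("How fast are support tickets handled?", articles)
# The retrieved context is then passed to the model as its ONLY grounding
# material, sidestepping training on private customer data entirely.
```

The key design property is that the model is stateless with respect to customer data: swap out the articles and the assistant's knowledge changes with them.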

Despite public hype around powerful consumer AI, many product managers in large companies are forbidden from using them. Strict IT constraints against uploading internal documents to external tools create a significant barrier, slowing adoption until secure, sandboxed enterprise solutions are implemented.

The "agentic revolution" will be powered by small, specialized models. Businesses and public sector agencies don't need a cloud-based AI that can do 1,000 tasks; they need an on-premise model fine-tuned for 10-20 specific use cases, driven by cost, privacy, and control requirements.

The vast majority of valuable data resides within private enterprises, unseen by foundation models. Companies can leverage this private data through continuous fine-tuning to create specialized, high-performing models, establishing a competitive advantage that API-based competitors cannot replicate.
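The continuous fine-tuning loop described above can be illustrated with a toy stand-in: a one-weight logistic model that is incrementally updated as each new batch of private data arrives, rather than trained once and frozen. Everything here (the class, the data, the hyperparameters) is a hypothetical sketch, not a real LLM training pipeline.

```python
import math

# Toy sketch of "continuous fine-tuning": a tiny model is repeatedly updated
# as new batches of private data arrive. A one-weight logistic model stands
# in for an LLM; all names and numbers are illustrative.

class TinyModel:
    def __init__(self) -> None:
        self.w = 0.0
        self.b = 0.0

    def predict(self, x: float) -> float:
        """Probability of the positive class for input x."""
        return 1.0 / (1.0 + math.exp(-(self.w * x + self.b)))

    def fine_tune(self, batch, lr: float = 0.5, epochs: int = 200) -> None:
        """One incremental gradient-descent pass over a new private batch."""
        for _ in range(epochs):
            for x, y in batch:
                grad = self.predict(x) - y   # gradient of the log loss
                self.w -= lr * grad * x
                self.b -= lr * grad

model = TinyModel()
# Each period, a fresh batch of proprietary examples arrives and the model
# is fine-tuned on it, compounding an edge API-only competitors can't copy.
for private_batch in [[(1.0, 1), (-1.0, 0)], [(2.0, 1), (-2.0, 0)]]:
    model.fine_tune(private_batch)

print(f"p(positive | x=1.5) = {model.predict(1.5):.2f}")
```

The point of the sketch is the loop structure, not the model: the asset is the stream of private batches, which only the data owner can keep feeding in.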

Ali Ghodsi argues that while public LLMs are a commodity, the true value for enterprises is applying AI to their private data. This is impossible without first building a modern data foundation that allows the AI to securely and effectively access and reason on that information.

If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.

Companies are becoming wary of feeding their unique data and customer queries into third-party LLMs like ChatGPT. The fear is that this trains a potential future competitor. The trend will shift towards running private, open-source models on their own cloud instances to maintain a competitive moat and ensure data privacy.
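One common shape for the "private open-source model on your own cloud" approach is an OpenAI-compatible server run inside the company's VPC. The sketch below assumes vLLM's serving CLI; the model name, port, and request body are illustrative, and any open-weights model the company is licensed to use could stand in.

```shell
# Sketch: serve an open-weights model inside your own infrastructure with
# vLLM, so prompts and customer queries never reach a third-party API.
# Model name and port are assumptions for illustration.
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

# Applications then point at the local OpenAI-compatible endpoint instead
# of an external provider:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct",
       "messages": [{"role": "user", "content": "Summarize Q3 churn drivers."}]}'
```

Because the endpoint speaks the same API shape as hosted providers, switching an application from a third-party LLM to the private deployment is largely a base-URL change.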