
To avoid compliance and security risks, companies in sectors like healthcare and fintech don't use public LLMs. Instead, they use tools like Dashworks to build AI chatbots on their internal documentation and give developers secure, IDE-integrated tools like Cursor.

Related Insights

To overcome security and data privacy hurdles in finance and healthcare, Genesis deploys its platform directly within the client's environment, not as a SaaS. This ensures accumulated institutional knowledge becomes a secure, company-owned asset, which is critical for adoption in regulated industries.

The key for enterprises isn't integrating general AI like ChatGPT but creating "proprietary intelligence." This involves fine-tuning smaller, custom models on their unique internal data and workflows, creating a competitive moat that off-the-shelf solutions cannot replicate.

Microsoft's case management AI avoids training directly on private customer data. Instead, it operates on a "bring your own knowledge" model, using only the knowledge articles and resources explicitly provided by the customer. This approach sidesteps major privacy and data governance concerns common in enterprise AI adoption.

Strict regulations prohibit sending sensitive data to external APIs, creating a compliance nightmare for cloud-based AI. Small, on-premise models solve this by keeping data within the enterprise boundary, eliminating third-party processor risks and simplifying audits for regulated industries like healthcare and finance.
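The "keep data within the enterprise boundary" pattern often reduces to a simple, auditable guard in front of every model call. A minimal sketch, assuming hypothetical hostnames (`llm.internal.example.com` and the allow-list are illustrative, not from the source):

```python
# Hypothetical sketch: route all LLM traffic to self-hosted endpoints
# and reject anything outside the approved internal hosts.
from urllib.parse import urlparse

# Assumed internal inference endpoint and allow-list (illustrative names).
INTERNAL_LLM_URL = "https://llm.internal.example.com/v1"
ALLOWED_HOSTS = {"llm.internal.example.com"}

def is_within_boundary(url: str) -> bool:
    """Return True only if the endpoint's host is on the internal allow-list."""
    return urlparse(url).hostname in ALLOWED_HOSTS

# A guard like this makes the compliance claim checkable in code review:
assert is_within_boundary(INTERNAL_LLM_URL)
assert not is_within_boundary("https://api.openai.com/v1")
```

Centralizing the check in one function also simplifies audits: regulators can verify a single chokepoint rather than every call site.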

In high-stakes, regulated sectors like insurance, the risk of GenAI hallucination is too great for customer-facing tools. The guest's company, SelectQuote, successfully shifted its AI focus from generative IVRs to internal applications like agent training for sales objections, minimizing compliance risks.

Despite public hype around powerful consumer AI tools, many product managers in large companies are forbidden from using them. Strict IT constraints against uploading internal documents to external tools create a significant barrier, slowing adoption until secure, sandboxed enterprise solutions are implemented.

For companies given a broad "AI mandate," the most tactical and immediate starting point is to create a private, internalized version of a large language model like ChatGPT. This provides a quick win by enabling employees to leverage generative AI for productivity without exposing sensitive intellectual property or code to public models.
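One complementary guard for such an internal deployment is a thin gateway that strips secret-shaped strings from prompts before inference. A minimal sketch, assuming hypothetical redaction patterns (the regexes below are illustrative, not a complete policy):

```python
# Hypothetical sketch: redact obvious secrets from a prompt before it
# reaches any model, so leaked credentials never enter logs or context.
import re

# Assumed patterns: API-key assignments and US-SSN-shaped strings.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
]

def redact(prompt: str) -> str:
    """Replace secret-shaped spans with a placeholder before inference."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

redact("api_key=sk-123 and SSN 123-45-6789")
# → "[REDACTED] and SSN [REDACTED]"
```

Pattern-based redaction is a blunt instrument; it complements, rather than replaces, keeping the model itself inside the company's perimeter.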

In sectors like finance or healthcare, bypass initial regulatory hurdles by implementing AI on non-sensitive, public information, such as analyzing a company podcast. This builds momentum and demonstrates value while more complex, high-risk applications are vetted by legal and IT teams.

Ali Ghodsi argues that while public LLMs are a commodity, the true value for enterprises is applying AI to their private data. This is impossible without first building a modern data foundation that allows the AI to securely and effectively access and reason on that information.

Companies are becoming wary of feeding their unique data and customer queries into third-party LLMs like ChatGPT, fearing they are training a potential future competitor. The trend is shifting toward running private, open-source models on their own cloud instances to maintain a competitive moat and ensure data privacy.