
In high-stakes, regulated sectors like insurance, the risk of GenAI hallucination is too great for customer-facing tools. The guest's company, SelectQuote, successfully shifted its AI focus from generative IVRs to internal applications, such as training agents to handle sales objections, minimizing compliance risks.

Related Insights

Customers are hesitant to trust a black-box AI with critical operations. The winning business model is to sell a complete outcome or service, using AI internally for a massive efficiency advantage while keeping humans in the loop for quality and trust.

To introduce AI into a high-risk environment like legal tech, begin with tasks that don't involve sensitive data, such as automating marketing copy. This approach proves AI's value and builds internal trust, paving the way for future, higher-stakes applications like reviewing client documents.

To mitigate risks like AI hallucinations and high operational costs, enterprises should first deploy new AI tools internally to support human agents. This "agent-assist" model allows for monitoring, testing, and refinement in a controlled environment before exposing the technology directly to customers.

Instead of replacing humans, Aviva uses AI to anticipate *why* a customer is calling about a claim. The agent receives this prediction and relevant data upfront, skipping lengthy verification and improving the customer experience.

For companies given a broad "AI mandate," the most tactical and immediate starting point is to create a private, internalized version of a large language model like ChatGPT. This provides a quick win by enabling employees to leverage generative AI for productivity without exposing sensitive intellectual property or code to public models.

In sectors like finance or healthcare, bypass initial regulatory hurdles by implementing AI on non-sensitive, public information, such as analyzing a company podcast. This builds momentum and demonstrates value while more complex, high-risk applications are vetted by legal and IT teams.

To prevent AI agents from over-promising or inventing features, you must explicitly define negative constraints. Just as you train them on your capabilities, provide clear boundaries on what your product or service does not do, so they don't invent answers in an effort to be helpful.
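The idea above can be sketched as a system prompt that pairs capabilities with explicit negative constraints. This is a minimal, hypothetical example: the product names, capability lists, and boundary wording are illustrative assumptions, not anything from the episode.

```python
# Hypothetical sketch: a system prompt that states both what the agent
# CAN do and, just as explicitly, what it must never claim to do.
# All capabilities and constraints here are invented for illustration.

CAPABILITIES = [
    "Quote term life insurance policies",
    "Explain policy riders in plain language",
]

NEGATIVE_CONSTRAINTS = [
    "Do NOT discuss auto or home insurance; we do not offer them.",
    "Do NOT guarantee approval or state final premiums.",
    "If asked about an unsupported product, say so and offer a human handoff.",
]

def build_system_prompt(capabilities, constraints):
    """Compose a system prompt with positive capabilities and
    hard negative boundaries, so the model is told up front
    what it must not improvise."""
    lines = ["You are a customer-support agent.", "", "You CAN:"]
    lines += [f"- {c}" for c in capabilities]
    lines += ["", "Hard boundaries. You MUST NOT invent features:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_system_prompt(CAPABILITIES, NEGATIVE_CONSTRAINTS)
print(prompt)
```

The prompt would then be passed as the system message to whatever model the agent runs on; the key design choice is that the "MUST NOT" list is maintained alongside the capability list, not left implicit.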

Prioritize using AI to support human agents internally. A co-pilot model equips agents with instant, accurate information, enabling them to resolve complex issues faster and provide a more natural, less-scripted customer experience.

For high-stakes operations like changing a flight, any AI hallucination is a catastrophic failure. This need for 100% accuracy in a complex vertical like travel forced Navan to build its own proprietary, agentic AI platform rather than rely on external models, whose errors could result in lost customers and lawsuits.

Contrary to popular belief, regulated sectors like finance and healthcare are early adopters of voice AI. This is because AI can be programmed for perfect compliance and offers a verifiable audit trail, outperforming human agents, who are prone to error and harder to track.

Regulated Industries Should Use GenAI for Internal Training, Not Customer-Facing IVRs | RiffOn