Insurance for AI doesn't target general models like ChatGPT. Instead, it insures customized AI systems—fine-tuned models with guardrails—deployed for a specific business purpose, such as a predictive maintenance tool or an HR application. The insured asset is the final, deployed AI-powered product, not the underlying model.

Unlike traditional business insurance, AI risk isn't tied to a company's revenue. A small startup deploying a hiring tool at a single Fortune 500 company can have a much larger liability exposure than a bigger company with a low-risk internal AI. Pricing must reflect this deployment-specific risk profile.

For specialized, high-stakes tasks like insurance underwriting, enterprises will favor smaller, on-prem models fine-tuned on proprietary data. These models can be faster, more accurate, and more secure than general-purpose frontier models, creating a lasting market for custom AI solutions.

The notion of building a business as a 'thin wrapper' around a foundational model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.

The key for enterprises isn't integrating general AI like ChatGPT but creating "proprietary intelligence." This involves fine-tuning smaller, custom models on their unique internal data and workflows, creating a competitive moat that off-the-shelf solutions cannot replicate.

Universal safety filters for "bad content" are insufficient. True AI safety requires defining permissible and non-permissible behaviors specific to the application's unique context, such as a banking use case versus a customer service setting. This moves beyond generic harm categories to business-specific rules.

While a general-purpose model like Llama can serve many businesses, each business's safety policy is unique. A company might want to block mentions of competitors or enforce industry-specific compliance rules, use cases that model creators cannot pre-program. This highlights the need for a customizable safety layer separate from the base model.
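The idea of a customizable safety layer sitting between a shared base model and each business can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real product API: the `SafetyPolicy` class, the rule names, and the competitor example are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of a per-business safety layer applied on top of a
# shared general-purpose model. Class and field names are illustrative
# assumptions, not any vendor's actual API.
from dataclasses import dataclass, field


@dataclass
class SafetyPolicy:
    """Business-specific rules layered on top of a shared base model."""
    blocked_terms: set = field(default_factory=set)  # e.g. competitor names
    required_disclaimer: str = ""                    # e.g. compliance text

    def apply(self, model_output: str) -> str:
        """Filter or annotate a raw model response per this business's rules."""
        lowered = model_output.lower()
        for term in self.blocked_terms:
            if term.lower() in lowered:
                return "[Response withheld: violates company policy]"
        if self.required_disclaimer:
            return f"{model_output}\n\n{self.required_disclaimer}"
        return model_output


# Two businesses share one base model but enforce different policies.
bank_policy = SafetyPolicy(
    blocked_terms={"AcmeBank"},  # hypothetical competitor to suppress
    required_disclaimer="This is not financial advice.",
)
support_policy = SafetyPolicy()  # a customer-service team with no extra rules

raw_output = "You could also consider AcmeBank's savings product."
print(bank_policy.apply(raw_output))     # blocked under the bank's policy
print(support_policy.apply(raw_output))  # passes through unchanged
```

The design point is that the policy lives outside the model: the same base model serves every tenant, while each business supplies its own rules, which is exactly what a model creator could not pre-program.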

While foundation models carry systemic risk, AI applications make "thicker promises" to enterprises, like guaranteeing specific outcomes in customer support. This specificity creates more immediate and tangible business risks (e.g., brand disasters, financial errors), making the application layer the primary area where trust and insurance are needed now.

The "agentic revolution" will be powered by small, specialized models. Businesses and public sector agencies don't need a cloud-based AI that can do 1,000 tasks; they need an on-premise model fine-tuned for 10-20 specific use cases, driven by cost, privacy, and control requirements.

A new insurance category, separate from cyber insurance, is launching to cover enterprise risks specific to generative AI. Backed by Lloyd's of London, this product uses US lawsuit data to underwrite liabilities such as copyright infringement and personal injury caused by AI systems, addressing a critical gap for companies deploying the technology.

Insurers like AIG are seeking to exclude liabilities from AI use, such as deepfake scams or chatbot errors, from standard corporate policies. This forces businesses to either purchase expensive, capped add-ons or assume a significant new category of uninsurable risk.