AI insurance faces a cold-start problem: there is no historical claims data to feed actuarial models. Insurers instead underwrite from proxy signals, assessing a company's AI governance maturity and running technical evaluations of the AI system's performance, robustness, and safety to quantify risk.
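
One way to picture proxy-signal underwriting is a weighted blend of governance and evaluation scores. The sketch below is illustrative only; the signal names, weights, and 0-1 scoring scale are assumptions, not any insurer's actual model.

```python
# A minimal sketch of proxy-signal underwriting: with no claims history,
# blend a governance-maturity score with technical evaluation results into
# a single 0-1 risk score. Weights and field names are illustrative.

def proxy_risk_score(governance_maturity: float,
                     eval_failure_rate: float,
                     robustness_score: float,
                     w_gov: float = 0.4,
                     w_fail: float = 0.4,
                     w_robust: float = 0.2) -> float:
    """Blend proxy signals into a 0-1 risk score (higher = riskier).

    governance_maturity: 0-1, from a governance questionnaire.
    eval_failure_rate:   0-1, share of test cases the system fails.
    robustness_score:    0-1, from adversarial/robustness testing.
    """
    return (w_gov * (1.0 - governance_maturity)
            + w_fail * eval_failure_rate
            + w_robust * (1.0 - robustness_score))

# Example: mature governance, 10% eval failure rate, decent robustness.
print(round(proxy_risk_score(0.8, 0.10, 0.7), 3))  # 0.18
```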

Related Insights

Unlike traditional business insurance, AI risk isn't tied to a company's revenue. A small startup deploying a hiring tool at a single Fortune 500 company can have a much larger liability exposure than a bigger company with a low-risk internal AI. Pricing must reflect this deployment-specific risk profile.
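
A toy pricing function makes the point concrete: the premium scales with exposure at the deployment, not with the vendor's size or revenue. All rates and field names below are hypothetical.

```python
# A toy pricing sketch: premium is driven by aggregate liability exposure
# across deployments, not by the vendor's revenue. Rates and field names
# are illustrative assumptions.

def premium(deployments: list[dict], rate_per_dollar: float = 0.002) -> float:
    """Price on total liability exposure across all deployments."""
    exposure = sum(d["affected_decisions"] * d["liability_per_decision"]
                   for d in deployments)
    return exposure * rate_per_dollar

startup = [  # small vendor, one high-stakes Fortune 500 deployment
    {"affected_decisions": 50_000, "liability_per_decision": 300.0},
]
big_co = [  # larger company, low-risk internal tool
    {"affected_decisions": 5_000, "liability_per_decision": 20.0},
]
print(premium(startup))  # 30000.0 -- the small vendor carries more exposure
print(premium(big_co))   # 200.0
```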

Unlike static assets, AI systems are highly dynamic. To manage this risk, AI insurers are introducing "continuing duties" for policyholders, such as mandatory monitoring and reporting on any material changes to the AI system. This shifts the industry away from a static annual review toward continuous underwriting.
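
A minimal sketch of what such a continuing duty could look like in practice, assuming a hypothetical change-reporting check: any change to the model or its guardrails, or eval-score drift past a threshold, counts as a material change the policyholder must report.

```python
# A sketch of a continuing-duty check, with hypothetical fields: a change to
# the model or guardrails, or eval-score drift past a threshold, triggers a
# report to the insurer and re-underwriting.

import hashlib
import json

def fingerprint(system: dict) -> str:
    """Stable hash of the fields that define the insured AI system."""
    core = {k: system[k] for k in ("model", "guardrails")}
    return hashlib.sha256(json.dumps(core, sort_keys=True).encode()).hexdigest()

def material_change(prev: dict, curr: dict, drift_threshold: float = 0.05) -> bool:
    if fingerprint(prev) != fingerprint(curr):
        return True  # new model version or changed guardrails always triggers review
    return abs(curr["eval_score"] - prev["eval_score"]) > drift_threshold

at_bind   = {"model": "acme-hr-v1", "guardrails": ["pii-filter"], "eval_score": 0.93}
this_week = {"model": "acme-hr-v2", "guardrails": ["pii-filter"], "eval_score": 0.91}
if material_change(at_bind, this_week):
    print("material change detected: report to insurer and re-underwrite")
```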

AI audits are not a one-time certification that a system is "risk-free"; they are an iterative process with quarterly re-audits. They quantify risk by finding vulnerabilities (initial failure rates can run as high as 25%) and then measuring the improvement, often a 90% drop, after safeguards are implemented, giving enterprises a data-driven basis for trust.
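
The arithmetic behind those figures is simple: a 90% reduction from a 25% failure rate leaves 2.5%. A sketch of the before/after computation, with the test harness itself assumed rather than specified in the source:

```python
# Worked arithmetic for the audit numbers above: a 25% initial failure rate,
# then a 90% reduction after safeguards, leaves a 2.5% residual rate. The
# raw test outcomes below are synthetic placeholders.

def failure_rate(results: list[bool]) -> float:
    """Fraction of red-team test cases that failed (True = failure)."""
    return sum(results) / len(results)

before = [True] * 25 + [False] * 75   # first audit: 25% failure rate
after  = [True] * 25 + [False] * 975  # post-safeguard re-audit: 2.5%

r0, r1 = failure_rate(before), failure_rate(after)
print(f"before: {r0:.1%}, after: {r1:.1%}, reduction: {1 - r1 / r0:.0%}")
# before: 25.0%, after: 2.5%, reduction: 90%
```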

AI system auditing will evolve from today's manual, interview-based process to one where auditors use APIs to verify controls in a machine-readable way. This shift from 90% manual to 90% automated will enable more accurate, data-driven risk assessment for AI insurance products.
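
A hedged sketch of what API-based control verification might look like; the endpoint URL, JSON schema, and control names are hypothetical, not a real audit API.

```python
# A hypothetical machine-readable audit check: poll a vendor's controls
# endpoint and verify required controls programmatically instead of by
# interview. URL, schema, and control names are assumptions.

import json
from urllib.request import urlopen

REQUIRED_CONTROLS = {"access_logging", "model_version_pinning", "pii_filtering"}

def verify_controls(endpoint: str) -> dict[str, bool]:
    """Return {control_id: attested-and-enabled} for each required control."""
    with urlopen(endpoint) as resp:
        attested = json.load(resp)  # expects {"controls": [{"id": ..., "enabled": ...}]}
    enabled = {c["id"] for c in attested["controls"] if c["enabled"]}
    return {c: c in enabled for c in REQUIRED_CONTROLS}

# Hypothetical usage against a vendor-hosted endpoint:
# results = verify_controls("https://vendor.example/audit/v1/controls")
# failing = [c for c, ok in results.items() if not ok]
```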

Insurers lack the historical loss data required to price novel AI risks. The solution is to use red teaming and systematic evaluations to create a large pool of "synthetic data" on how an AI product behaves and fails. This data on failure frequency and severity can be directly plugged into traditional actuarial models.
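
This is the standard frequency/severity (collective risk) structure actuaries already use. A minimal Monte Carlo sketch, with every parameter assumed for illustration rather than taken from real red-team data:

```python
# A minimal actuarial sketch: red-team evaluations supply an estimated
# failure frequency and a severity distribution, which plug into a standard
# frequency/severity model. All parameters are illustrative assumptions.

import math
import random

random.seed(0)

def poisson(lam: float) -> int:
    """Knuth's Poisson sampler (adequate for small lambda)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(freq: float, sev_mu: float, sev_sigma: float) -> float:
    """One simulated policy year: incident count x lognormal severities."""
    return sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(poisson(freq)))

years = [simulate_annual_loss(freq=4.0, sev_mu=10.0, sev_sigma=1.0)
         for _ in range(10_000)]
print(f"expected annual loss: ${sum(years) / len(years):,.0f}")  # pure premium basis
```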

The proposed model combines insurance (financial protection), standards (best practices), and audits (verification). Insurers fund robust standards, and enterprises comply to earn cheaper insurance. This market mechanism aligns incentives for both rapid AI adoption and robust security, treating them as mutually reinforcing rather than a trade-off.
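
A toy model of that incentive loop, with an assumed base rate and compliance discount: meeting the insurer-funded standard directly lowers the premium.

```python
# A toy version of the incentive loop: compliance with the insurer-funded
# standard earns a premium discount. Base rate and discount are assumptions.

def annual_premium(exposure: float, base_rate: float = 0.01,
                   compliant: bool = False, discount: float = 0.30) -> float:
    price = exposure * base_rate
    return price * (1 - discount) if compliant else price

print(annual_premium(10_000_000))                  # 100000.0
print(annual_premium(10_000_000, compliant=True))  # 70000.0
```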

Insurance for AI doesn't target general models like ChatGPT. Instead, it insures customized AI systems—fine-tuned models with guardrails—deployed for a specific business purpose, such as a predictive maintenance tool or an HR application. The insured asset is the final, deployed AI-powered product, not the underlying model.
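
One way to make the distinction concrete is as a record describing the insured asset; the field names below are assumptions for illustration, not a policy schema.

```python
# A sketch of the "insured asset" as a record: the deployed, customized
# system (base model + fine-tune + guardrails + business purpose), not the
# underlying foundation model. Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class InsuredAISystem:
    base_model: str                 # underlying model; not itself the insured asset
    fine_tune_id: str               # customization applied for this deployment
    guardrails: list[str] = field(default_factory=list)
    business_purpose: str = ""      # e.g. "predictive maintenance", "HR screening"
    deployment: str = ""            # where and for whom the system runs

hr_tool = InsuredAISystem(
    base_model="foundation-llm-v3",
    fine_tune_id="acme-hr-screening-ft-017",
    guardrails=["pii-filter", "bias-eval-gate"],
    business_purpose="HR resume screening",
    deployment="single Fortune 500 client",
)
```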

A new insurance category, separate from cyber insurance, is launching to cover enterprise risks specific to generative AI. Backed by Lloyd's of London, this product uses US lawsuit data to underwrite liabilities such as copyright infringement and personal injury caused by AI systems, addressing a critical gap for companies deploying the technology.

Existing policies like cyber insurance fail to cover AI not just because of ambiguous wording, but because their underwriting processes never examined AI-specific risk: underwriters never asked about AI systems, governance, or testing, so the exposure was never assessed, priced, or intentionally covered.

The approach to AI safety isn't new; it mirrors historical solutions for managing technological risk. Just as Benjamin Franklin's 18th-century fire insurance company created building codes and inspections to reduce fires, a modern AI insurance market can drive the creation and adoption of safety standards and audits for AI agents.