Unlike traditional business insurance, AI risk isn't tied to a company's revenue. A small startup deploying a hiring tool at a single Fortune 500 company can carry far greater liability exposure than a larger company running a low-risk internal AI tool. Pricing must reflect this deployment-specific risk profile.
The insurance industry acts as a powerful de facto regulator. As major insurers seek to exclude AI-related liabilities from policies, they could dramatically slow AI deployment because businesses will be unwilling to shoulder the unmitigated financial risk themselves.
Existing policies like cyber insurance don't explicitly mention AI, making coverage for AI-related harms unclear. This ambiguity means insurers carry unpriced risk, while companies lack certainty. This situation will likely force the creation of dedicated AI insurance products, much as cyber insurance emerged in the 2000s.
Insurers lack the historical loss data required to price novel AI risks. The solution is to use red teaming and systematic evaluations to create a large pool of "synthetic data" on how an AI product behaves and fails. This data on failure frequency and severity can be directly plugged into traditional actuarial models.
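A minimal sketch of how that works: red-team results supply a failure rate (frequency) and estimated per-incident costs (severity), which combine into the expected annual loss, the core quantity actuaries call the pure premium. All figures and the `pure_premium` helper below are illustrative assumptions, not numbers from the source.

```python
# Hypothetical sketch: turning red-team results into a frequency-severity
# pure premium. All inputs are illustrative, not real loss data.

def pure_premium(failures: int, trials: int,
                 severities: list[float],
                 exposures_per_year: int) -> float:
    """Expected annual loss = annual failure frequency x average severity."""
    failure_rate = failures / trials                   # per-interaction failure probability
    avg_severity = sum(severities) / len(severities)   # mean cost of one failure
    annual_frequency = failure_rate * exposures_per_year
    return annual_frequency * avg_severity

# Suppose red teaming ran 10,000 adversarial prompts, found 25 harmful
# outputs, and a severity rubric priced sampled incidents at these costs:
premium = pure_premium(
    failures=25,
    trials=10_000,
    severities=[500.0, 2_000.0, 10_000.0],  # sampled per-incident losses ($)
    exposures_per_year=1_000_000,           # expected production interactions
)
print(round(premium, 2))  # expected annual loss in dollars
```

The point of the decomposition is that frequency and severity are estimated separately, so better red-team coverage sharpens one input without disturbing the other.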
AI insurance faces a cold-start problem: no historical claims data for actuarial modeling. To solve this, insurers underwrite using proxy signals. They assess a company's AI governance maturity and conduct technical evaluations of the AI system’s performance, robustness, and safety to quantify risk.
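One way to picture proxy-signal underwriting is as a scoring function that folds governance maturity and technical evaluation results into a premium multiplier. The control names, weights, and the 0.5x–2.0x range below are all invented for illustration; nothing in the source specifies a concrete formula.

```python
# Hypothetical proxy-signal underwriting sketch: combine a governance
# maturity score with technical eval results into one risk multiplier.
# Signal names and weights are illustrative assumptions.

GOVERNANCE_WEIGHTS = {
    "incident_response_plan": 0.3,
    "model_change_review": 0.3,
    "human_oversight": 0.4,
}

def risk_multiplier(governance: dict[str, bool], eval_pass_rate: float) -> float:
    """Return a premium multiplier in [0.5, 2.0]; lower means lower risk.

    governance: which controls the applicant has in place.
    eval_pass_rate: fraction of technical evaluations (performance,
    robustness, safety) the AI system passed, in [0, 1].
    """
    gov_score = sum(w for k, w in GOVERNANCE_WEIGHTS.items() if governance.get(k))
    combined = 0.5 * gov_score + 0.5 * eval_pass_rate  # equal weighting (assumption)
    return 2.0 - 1.5 * combined  # perfect signals -> 0.5x, worst -> 2.0x

m = risk_multiplier(
    {"incident_response_plan": True, "model_change_review": True, "human_oversight": False},
    eval_pass_rate=0.9,
)
print(m)
```

Because there are no historical claims, the multiplier is anchored to observable proxies rather than loss experience; as real claims accumulate, the weights can be recalibrated against them.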
While foundation models carry systemic risk, AI applications make "thicker promises" to enterprises, like guaranteeing specific outcomes in customer support. This specificity creates more immediate and tangible business risks (e.g., brand disasters, financial errors), making the application layer the primary area where trust and insurance are needed now.
Insurance for AI doesn't target general models like ChatGPT. Instead, it insures customized AI systems—fine-tuned models with guardrails—deployed for a specific business purpose, such as a predictive maintenance tool or an HR application. The insured asset is the final, deployed AI-powered product, not the underlying model.
A new insurance category, separate from cyber insurance, is launching to cover enterprise risks specific to generative AI. Backed by Lloyd's of London, this product uses US lawsuit data to underwrite liabilities such as copyright infringement and personal injury caused by AI systems, addressing a critical gap for companies deploying the technology.
Existing policies like cyber insurance fail to cover AI not just because of ambiguous wording, but because their underwriting processes never accounted for AI-specific risks: underwriters never asked about AI systems, governance, or testing, so the risk was never assessed, priced, or intentionally covered.
Insurers like AIG are seeking to exclude AI-related liabilities, such as deepfake scams or chatbot errors, from standard corporate policies. This forces businesses to either purchase expensive, capped add-ons or assume a significant new category of uninsurable risk.
Insurers can price a single large loss. What they cannot price is a single AI model, deployed by thousands of customers, having a flaw that leads to thousands of simultaneous claims. This "systemic, correlated" risk could bankrupt an insurer.
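A toy Monte Carlo comparison (with assumed numbers, not real figures) makes the distinction concrete: independent claims and a shared-model flaw can have identical expected losses, yet the correlated case occasionally pays out the entire book at once.

```python
# Illustrative sketch (all parameters assumed): worst-case annual loss
# under independent claims vs. a shared model flaw hitting every insured.

import random

POLICYHOLDERS = 1_000
CLAIM_PROB = 0.01       # each insured has a 1% annual chance of a claim
CLAIM_COST = 100_000    # loss per claim ($)
FLAW_PROB = 0.01        # chance the shared model's flaw triggers in a year

def independent_loss(rng: random.Random) -> int:
    """Each policyholder fails independently of the others."""
    claims = sum(rng.random() < CLAIM_PROB for _ in range(POLICYHOLDERS))
    return claims * CLAIM_COST

def correlated_loss(rng: random.Random) -> int:
    """One shared model: a single flaw triggers claims from everyone."""
    return POLICYHOLDERS * CLAIM_COST if rng.random() < FLAW_PROB else 0

rng = random.Random(0)  # fixed seed for reproducibility
years = 10_000
worst_independent = max(independent_loss(rng) for _ in range(years))
worst_correlated = max(correlated_loss(rng) for _ in range(years))

# Both scenarios have the same expected annual loss ($1M), but the
# correlated worst case is the whole book paying out simultaneously.
print(worst_independent, worst_correlated)
```

The independent worst year lands a few multiples above the mean, which capital reserves can absorb; the correlated worst year is 100x the mean, which is the bankruptcy scenario the text describes.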