Existing policies like cyber insurance fail to cover AI not just because of ambiguous wording, but because their underwriting processes never assessed AI-specific risks: underwriters never asked about AI systems, governance, or testing, so the exposure was never properly priced or intentionally covered.
Unlike traditional business insurance, AI risk isn't tied to a company's revenue. A small startup deploying a hiring tool at a single Fortune 500 company can have a much larger liability exposure than a bigger company with a low-risk internal AI. Pricing must reflect this deployment-specific risk profile.
Unlike static assets, AI systems are highly dynamic. To manage this risk, AI insurers are introducing "continuing duties" for policyholders, such as mandatory monitoring and reporting on any material changes to the AI system. This shifts the industry away from a static annual review toward continuous underwriting.
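A "continuing duty" of this kind could, in principle, be automated as a drift check against the metrics disclosed at underwriting. This is a minimal sketch under assumed names and thresholds; no real policy defines materiality this way.

```python
# Hypothetical sketch of a "continuing duty" check: compare the deployed
# model's current eval metrics against those disclosed at underwriting and
# flag any material change the policyholder must report.
# Baseline values and the materiality threshold are illustrative assumptions.

UNDERWRITING_BASELINE = {"accuracy": 0.94, "jailbreak_rate": 0.02}
MATERIAL_CHANGE = 0.05  # absolute drift that triggers a reporting duty

def material_changes(current):
    """Return the metric names whose drift from baseline exceeds the threshold."""
    return [name for name, baseline in UNDERWRITING_BASELINE.items()
            if abs(current[name] - baseline) > MATERIAL_CHANGE]

# A model update degrades accuracy (drift 0.08) but not jailbreak rate (0.02):
print(material_changes({"accuracy": 0.86, "jailbreak_rate": 0.04}))  # → ['accuracy']
```

Run on every model update rather than at annual renewal, a check like this is what "continuous underwriting" amounts to in practice.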
The insurance industry acts as a powerful de facto regulator. As major insurers seek to exclude AI-related liabilities from policies, they could dramatically slow AI deployment because businesses will be unwilling to shoulder the unmitigated financial risk themselves.
Existing policies like cyber insurance don't explicitly mention AI, making coverage for AI-related harms unclear. This ambiguity means insurers carry unpriced risk, while companies lack certainty. This situation will likely force the creation of dedicated AI insurance products, much as cyber insurance emerged in the 2000s.
Insurers lack the historical loss data required to price novel AI risks. The solution is to use red teaming and systematic evaluations to create a large pool of "synthetic data" on how an AI product behaves and fails. This data on failure frequency and severity can be directly plugged into traditional actuarial models.
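The frequency-and-severity translation described above can be sketched in a few lines. This is a toy model with assumed numbers: it treats each red-team probe as a pass/fail trial with an estimated loss, which real actuarial work would refine considerably.

```python
# Hypothetical sketch: turning red-team evaluation results into the
# frequency and severity inputs of a standard actuarial pricing model.
# All names and figures are illustrative, not from any real insurer.

def expected_annual_loss(eval_results, queries_per_year):
    """eval_results: list of (failed: bool, estimated_loss: float) per probe."""
    failures = [loss for failed, loss in eval_results if failed]
    frequency = len(failures) / len(eval_results)   # failure rate per query
    severity = sum(failures) / len(failures) if failures else 0.0
    return frequency * queries_per_year * severity

# Example: 2 failures in 1,000 red-team probes, average loss $50,000,
# 100,000 production queries per year.
results = [(False, 0.0)] * 998 + [(True, 40_000.0), (True, 60_000.0)]
print(expected_annual_loss(results, 100_000))  # ≈ $10,000,000 expected annual loss
```

The point is structural: "synthetic" evaluation data slots into the same frequency × exposure × severity arithmetic that actuaries already use for conventional lines.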
AI insurance faces a cold-start problem: no historical claims data for actuarial modeling. To solve this, insurers underwrite using proxy signals. They assess a company's AI governance maturity and conduct technical evaluations of the AI system’s performance, robustness, and safety to quantify risk.
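One way proxy signals like these could feed pricing is a weighted scorecard that converts governance and evaluation scores into a premium multiplier. The signal names, weights, and base rate below are all assumptions for illustration.

```python
# Hypothetical scorecard combining proxy signals (governance maturity,
# technical eval scores) into a premium, in lieu of historical claims data.
# Weights, signals, and the base rate are illustrative assumptions.

BASE_RATE = 0.02  # base premium as a fraction of the policy limit (assumed)

def premium(policy_limit, signals, weights):
    """signals: dict of name -> score in [0, 1], where higher means safer."""
    # Residual risk is the weighted average of each signal's shortfall from 1.0.
    risk = sum(w * (1.0 - signals[name]) for name, w in weights.items())
    risk /= sum(weights.values())
    return policy_limit * BASE_RATE * (1.0 + risk)  # riskier profile, higher premium

weights = {"governance": 0.4, "robustness": 0.3, "safety_evals": 0.3}
strong = {"governance": 0.9, "robustness": 0.8, "safety_evals": 0.85}
weak   = {"governance": 0.3, "robustness": 0.4, "safety_evals": 0.20}
print(premium(1_000_000, strong, weights))  # mature program pays less
print(premium(1_000_000, weak, weights))    # immature program pays more
```

The design choice worth noting: the underwriting question shifts from "what have your losses been?" to "how well-run and well-tested is your system?", which is exactly the cold-start workaround the insight describes.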
A new insurance category, separate from cyber insurance, is launching to cover enterprise risks specific to generative AI. Backed by Lloyd's of London, this product uses US lawsuit data to underwrite liabilities such as copyright infringement and personal injury caused by AI systems, addressing a critical gap for companies deploying the technology.
Insurers like AIG are seeking to exclude liabilities from AI use, such as deepfake scams or chatbot errors, from standard corporate policies. This forces businesses to either purchase expensive, capped add-ons or assume a significant new category of uninsurable risk.
Insurers can price a single large loss. What they cannot price is a single AI model, deployed by thousands of customers, having a flaw that leads to thousands of simultaneous claims. This "systemic, correlated" risk could bankrupt an insurer.
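A toy simulation makes the correlation problem concrete. Both portfolios below have the same expected annual loss, but in one the claims are independent and in the other they all stem from a single shared model flaw. All parameters are assumed for illustration.

```python
import random

# Illustrative simulation of why correlated AI failures break pricing.
# Independent book: each of 1,000 insureds has a 1% annual claim chance.
# Correlated book: same 1% expected claim rate, but failures come from one
# shared model flaw, so claims arrive all at once. Numbers are assumed.

random.seed(0)
N, CLAIM = 1_000, 100_000  # insureds, loss per claim ($)

def independent_year():
    return sum(CLAIM for _ in range(N) if random.random() < 0.01)

def correlated_year():
    # One shared flaw with a 1% chance; if it fires, every insured claims.
    return N * CLAIM if random.random() < 0.01 else 0

years = 10_000
ind = sorted(independent_year() for _ in range(years))
cor = sorted(correlated_year() for _ in range(years))
print("99.5th pct loss, independent:", ind[int(0.995 * years)])
print("99.5th pct loss, correlated:", cor[int(0.995 * years)])
```

The independent book's bad years cluster near the mean, so capital requirements stay manageable; the correlated book's bad year is the entire portfolio at once, which is the "bankrupt the insurer" scenario.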
AI and big data give insurers increasingly precise information on individual risk. As they approach perfect prediction, the concept of insurance as risk-pooling breaks down. If an insurer knows your house will burn down and charges an equivalent premium, you're no longer insured; you're just pre-paying for a disaster.
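The breakdown can be shown with back-of-the-envelope arithmetic: an actuarially fair premium is roughly p × L (plus a loading), so as the predicted probability p approaches 1, the premium converges on the loss itself. The loss value and loading factor below are assumptions.

```python
# Toy arithmetic: as the predicted probability p of a loss L approaches 1,
# the fair premium p * L approaches L, and risk-pooling degenerates into
# pre-payment. The loss amount and loading factor are illustrative.

LOSS = 500_000  # insured value, illustrative

def fair_premium(p, loading=0.1):
    # Expected loss plus a proportional expense/profit loading.
    return p * LOSS * (1 + loading)

for p in (0.001, 0.10, 0.50, 0.99):
    print(f"p={p:.3f}  premium=${fair_premium(p):,.0f}")
```

At p = 0.001 the premium is a small fraction of the loss and pooling works; near p = 1 the premium (with loading) can even exceed the loss, at which point "insurance" is just an expensive installment plan for a known disaster.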