Insurers can price a single large loss. What they cannot price is a single AI model, deployed by thousands of customers, having a flaw that leads to thousands of simultaneous claims. This "systemic, correlated" risk could bankrupt an insurer.
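A minimal Monte Carlo sketch of this point, with assumed numbers (10,000 policyholders, a 1% annual failure rate, $100k per claim): both books of business have the same expected loss, but when every claim is driven by the same model flaw, the tail of the loss distribution becomes catastrophic.

```python
# Illustrative sketch (numbers assumed, not from the source): why correlated AI
# failures break pricing. Independent risks pool; a shared model flaw does not.
import random

random.seed(0)

POLICYHOLDERS = 10_000
CLAIM_PROB = 0.01        # per-policyholder annual claim probability (assumed)
CLAIM_SIZE = 100_000     # dollars per claim (assumed)
TRIALS = 1_000

def annual_loss(correlated: bool) -> float:
    """Simulate one year of total claims across the whole book."""
    if correlated:
        # One shared model flaw: with probability CLAIM_PROB, everyone claims at once.
        return POLICYHOLDERS * CLAIM_SIZE if random.random() < CLAIM_PROB else 0.0
    # Independent risks: each policyholder claims on their own.
    return sum(CLAIM_SIZE for _ in range(POLICYHOLDERS) if random.random() < CLAIM_PROB)

for label, corr in [("independent", False), ("correlated", True)]:
    losses = sorted(annual_loss(corr) for _ in range(TRIALS))
    mean = sum(losses) / TRIALS
    worst_1pct = losses[int(0.99 * TRIALS)]   # 99th-percentile annual loss
    print(f"{label:>11}: expected loss ${mean:>14,.0f}   99th pct ${worst_1pct:>14,.0f}")
```

The independent book's worst years barely deviate from the mean; the correlated book occasionally loses the entire insured value in one year, which is the scenario an insurer cannot survive by premium alone.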
The insurance industry acts as a powerful de facto regulator. As major insurers seek to exclude AI-related liabilities from policies, they could dramatically slow AI deployment because businesses will be unwilling to shoulder the unmitigated financial risk themselves.
The rapid accumulation of hundreds of billions in debt to finance AI data centers poses a systemic threat, not just a risk to individual companies. A drop in GPU rental prices could trigger mass defaults as the revenue those assets generate falls short of their debt service, risking a contagion effect similar to the 2008 financial crisis.
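A rough illustration of that mechanism, with entirely assumed figures: a debt-financed facility stays solvent only while rental income covers its annual debt payments, so a price decline can flip it from comfortable coverage to default.

```python
# Illustrative sketch (all figures assumed, not from the source): how falling GPU
# rental prices push a debt-financed data center below its debt service.
annual_debt_service = 65_000_000   # principal + interest due each year (assumed)
gpu_hours_per_year = 50_000_000    # rentable GPU-hours (assumed)
operating_cost_per_hour = 0.60     # power, cooling, staff per GPU-hour (assumed)

for price_per_hour in (2.00, 1.50, 1.00, 0.75):
    net_operating_income = gpu_hours_per_year * (price_per_hour - operating_cost_per_hour)
    dscr = net_operating_income / annual_debt_service  # debt service coverage ratio
    status = "covers debt" if dscr >= 1.0 else "default risk"
    print(f"${price_per_hour:.2f}/GPU-hr -> NOI ${net_operating_income / 1e6:6.1f}M, DSCR {dscr:4.2f}, {status}")
```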
Existing policies like cyber insurance don't explicitly mention AI, making coverage for AI-related harms unclear. This ambiguity means insurers carry unpriced risk, while companies lack certainty. This situation will likely force the creation of dedicated AI insurance products, much as cyber insurance emerged in the 2000s.
Insurers lack the historical loss data required to price novel AI risks. The solution is to use red teaming and systematic evaluations to create a large pool of "synthetic data" on how an AI product behaves and fails. This data on failure frequency and severity can be directly plugged into traditional actuarial models.
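A sketch of how that synthetic data could feed a classic frequency/severity (collective risk) calculation. The variable names and numbers below are hypothetical, and real actuarial pricing would add uncertainty loadings, limits, and reinsurance on top.

```python
# Sketch of the frequency/severity idea, assuming red-team evaluations yield a
# failure rate per interaction and a sample of loss severities. All inputs assumed.
import statistics

# Hypothetical outputs of systematic evaluation / red teaming:
failures_per_10k_interactions = 0.5                        # observed failure frequency (assumed)
observed_loss_severities = [80, 250, 600, 1_200, 9_500]    # dollars per failure (assumed)

annual_interactions = 2_000_000   # insured customer's expected usage (assumed)

# Collective risk model: expected annual loss = expected frequency x mean severity.
expected_failures = annual_interactions * failures_per_10k_interactions / 10_000
mean_severity = statistics.mean(observed_loss_severities)
pure_premium = expected_failures * mean_severity

# Add a simple loading for expenses and parameter uncertainty (assumed 40%).
gross_premium = pure_premium * 1.40

print(f"expected failures/year: {expected_failures:,.0f}")
print(f"mean severity:          ${mean_severity:,.0f}")
print(f"pure premium:           ${pure_premium:,.0f}")
print(f"indicated premium:      ${gross_premium:,.0f}")
```

The point is that evaluation results slot into the same two quantities, frequency and severity, that actuaries already price on, even when no historical claims exist.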
While foundation models carry systemic risk, AI applications make "thicker promises" to enterprises, like guaranteeing specific outcomes in customer support. This specificity creates more immediate and tangible business risks (e.g., brand disasters, financial errors), making the application layer the primary area where trust and insurance are needed now.
The nuclear energy insurance model suggests that the private market cannot, on its own, insure against massive AI tail risks. A better approach has the government cap liability (e.g., above $15B), creating a backstop that allows a private insurance market to flourish and provide crucial governance for more common risks.
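A minimal sketch of such a layered structure, in the spirit of nuclear liability schemes. The $15B cap is the figure from the text; the split of the private layers beneath it is assumed purely for illustration.

```python
# Layered liability sketch (layer sizes assumed; the $15B cap is from the source):
# private insurers cover losses up to the cap, the government covers the excess.
def allocate_loss(total_loss: float,
                  primary_layer: float = 0.5e9,   # single insurer's first layer (assumed)
                  pooled_layer: float = 14.5e9,   # industry pool above it (assumed)
                  liability_cap: float = 15e9):   # government backstop above the cap
    primary = min(total_loss, primary_layer)
    pooled = min(max(total_loss - primary_layer, 0.0), pooled_layer)
    government = max(total_loss - liability_cap, 0.0)
    return primary, pooled, government

for loss in (200e6, 3e9, 40e9):
    p, q, g = allocate_loss(loss)
    print(f"loss ${loss / 1e9:5.1f}B -> insurer ${p / 1e9:5.2f}B, pool ${q / 1e9:5.2f}B, government ${g / 1e9:5.2f}B")
```

Because private exposure is bounded at the cap, insurers can write coverage for the common, smaller losses without pricing in an unbounded tail.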
Insurers like AIG are seeking to exclude liabilities from AI use, such as deepfake scams or chatbot errors, from standard corporate policies. This forces businesses to either purchase expensive, capped add-ons or assume a significant new category of uninsurable risk.
Without clear government standards for AI safety, there is no "safe harbor" from lawsuits. That makes it likely courts will apply strict liability, under which a company is liable even if it was not negligent. This legal uncertainty makes the risk unquantifiable for insurers, pushing them to exit the market.
AI and big data give insurers increasingly precise information about individual risk. As prediction approaches perfection, insurance as risk-pooling breaks down: if an insurer knows your house will burn down and charges a premium equal to the loss, you're no longer insured; you're just pre-paying for a disaster.
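A toy calculation of that convergence, with assumed figures: an expected-value premium is the predicted probability of loss times the loss (plus a loading), so as the prediction approaches certainty the premium approaches, or exceeds, the loss itself.

```python
# Toy illustration (numbers assumed, not from the source): as the insurer's
# prediction of an individual loss approaches certainty, the premium converges
# on the loss and the risk transfer disappears.
house_value = 400_000   # loss if the house burns down (assumed)
loading = 1.10          # insurer expense/profit loading (assumed)

for predicted_prob in (0.002, 0.05, 0.50, 0.99):
    premium = predicted_prob * house_value * loading
    print(f"P(fire) = {predicted_prob:5.3f} -> premium ${premium:>11,.0f}"
          f"  ({premium / house_value:.0%} of the loss)")
```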
The approach to AI safety isn't new; it mirrors historical solutions for managing technological risk. Just as Benjamin Franklin's 18th-century fire insurance company created building codes and inspections to reduce fires, a modern AI insurance market can drive the creation and adoption of safety standards and audits for AI agents.