A new insurance category, separate from cyber insurance, is launching to cover enterprise risks specific to generative AI. Backed by Lloyd's of London, this product uses US lawsuit data to underwrite liabilities such as copyright infringement and personal injury caused by AI systems, addressing a critical gap for companies deploying the technology.
The insurance industry acts as a powerful de facto regulator. As major insurers seek to exclude AI-related liabilities from policies, they could dramatically slow AI deployment because businesses will be unwilling to shoulder the unmitigated financial risk themselves.
Existing policies like cyber insurance don't explicitly mention AI, making coverage for AI-related harms unclear. This ambiguity means insurers carry unpriced risk while companies cannot be sure they are actually covered. This situation will likely force the creation of dedicated AI insurance products, much as cyber insurance emerged in the 2000s.
Insurers lack the historical loss data required to price novel AI risks. The solution is to use red teaming and systematic evaluations to create a large pool of "synthetic data" on how an AI product behaves and fails. This data on failure frequency and severity can be directly plugged into traditional actuarial models.
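To make that concrete, here is a minimal frequency-severity sketch of how evaluation data might feed a traditional actuarial model. Every number in it (failure rate, annual exposure, lognormal severity parameters, premium loading) is a hypothetical assumption for illustration, not a figure from the source or any insurer.

```python
import numpy as np

# Hypothetical inputs derived from red-teaming / systematic evaluations:
#   failure_rate_per_1k    - estimated harmful failures per 1,000 production interactions
#   interactions_per_year  - expected annual usage volume for the insured deployment
#   sev_mu, sev_sigma      - lognormal parameters fit to the estimated cost per failure
failure_rate_per_1k = 0.4          # assumed: 0.4 harmful failures per 1,000 interactions
interactions_per_year = 2_000_000  # assumed annual exposure
sev_mu, sev_sigma = 9.0, 1.5       # assumed severity distribution (median cost ~ $8,100)

rng = np.random.default_rng(seed=7)
n_sims = 10_000

# Frequency: expected number of claims per year, modeled as Poisson
expected_claims = failure_rate_per_1k / 1_000 * interactions_per_year
claim_counts = rng.poisson(expected_claims, size=n_sims)

# Severity: cost per claim drawn from the fitted lognormal, summed per simulated year
annual_losses = np.array([
    rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in claim_counts
])

expected_annual_loss = annual_losses.mean()
tail_loss_99 = np.quantile(annual_losses, 0.99)   # 99th-percentile annual loss
loaded_premium = expected_annual_loss * 1.35       # assumed 35% loading for expenses/uncertainty

print(f"Expected annual loss:        ${expected_annual_loss:,.0f}")
print(f"99th-percentile annual loss: ${tail_loss_99:,.0f}")
print(f"Indicative loaded premium:   ${loaded_premium:,.0f}")
```

This is the standard collective risk (frequency × severity) structure insurers already use; the only novelty is that the frequency and severity estimates come from red-team and evaluation results rather than historical claims.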
While foundation models carry systemic risk, AI applications make "thicker promises" to enterprises, like guaranteeing specific outcomes in customer support. This specificity creates more immediate and tangible business risks (e.g., brand disasters, financial errors), making the application layer the primary area where trust and insurance are needed now.
Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.
Insurers such as AIG are seeking to carve AI-related liabilities, like deepfake scams or chatbot errors, out of standard corporate policies. This forces businesses to either purchase expensive, capped add-ons or assume a significant new category of uninsurable risk.
To defend against copyright claims, AI companies argue that their models' outputs are original creations. This stance becomes a liability when the AI generates harmful material, because it positions the platform as a co-creator, undermining the Section 230 "neutral platform" defense that traditional social media companies rely on.
Without clear government standards for AI safety, there is no "safe harbor" from lawsuits. This makes it likely that courts will apply strict liability, under which a company is liable even if it was not negligent. That legal uncertainty makes the risk unquantifiable for insurers, forcing them to exit the market.
While an AI model itself may not infringe copyright, its output can. If you use AI-generated content in your business, you could face lawsuits from creators whose copyrighted material was used for training. The legal argument is that your output is a "derivative work" of their original, protected content.
The approach to AI safety isn't new; it mirrors historical solutions for managing technological risk. Just as Benjamin Franklin's 18th-century fire insurance company created building codes and inspections to reduce fires, a modern AI insurance market can drive the creation and adoption of safety standards and audits for AI agents.