AI insurance faces a cold-start problem: no historical claims data for actuarial modeling. To solve this, insurers underwrite using proxy signals. They assess a company's AI governance maturity and conduct technical evaluations of the AI system’s performance, robustness, and safety to quantify risk.
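The proxy-signal approach can be sketched as a simple scorecard. This is a minimal illustration, not an actual underwriting model: the signal names, weights, and the linear blend are all assumptions made up for this example.

```python
from dataclasses import dataclass

@dataclass
class ProxySignals:
    # All fields are normalized to 0.0-1.0; names are illustrative assumptions.
    governance_maturity: float  # from a governance questionnaire
    eval_accuracy: float        # from technical performance evaluations
    robustness: float           # e.g. stress / adversarial testing results
    safety: float               # e.g. red-team findings

def risk_score(s: ProxySignals) -> float:
    """Blend proxy signals into one 0-1 risk score (higher = riskier).
    Weights are arbitrary for illustration."""
    strength = (0.35 * s.governance_maturity
                + 0.25 * s.eval_accuracy
                + 0.20 * s.robustness
                + 0.20 * s.safety)
    return round(1.0 - strength, 3)

print(risk_score(ProxySignals(0.8, 0.9, 0.7, 0.6)))  # 0.235
```

Stronger proxy signals lower the score; in practice each signal would come from a structured assessment rather than a single number.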
Insurance for AI doesn't target general models like ChatGPT. Instead, it insures customized AI systems (fine-tuned models with guardrails) deployed for a specific business purpose, such as a predictive maintenance tool or an HR application. The insured asset is the final, deployed AI-powered product, not the underlying model.

Unlike static assets, AI systems are highly dynamic. To manage this risk, AI insurers are introducing "continuing duties" for policyholders, such as mandatory monitoring and reporting on any material changes to the AI system. This shifts the industry away from a static annual review toward continuous underwriting.
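A continuing duty of this kind reduces, in the simplest case, to comparing live metrics against the baseline captured at underwriting. The sketch below is an illustration only: the metric names and the 5% reporting threshold are assumptions, not terms from any real policy.

```python
# Flag "material changes": monitored metrics that have drifted from the
# underwriting baseline by more than a policy-defined threshold.
def material_changes(baseline: dict, current: dict, threshold: float = 0.05) -> list:
    """Return metrics whose absolute drift exceeds the threshold,
    i.e. the changes the policyholder would have to report."""
    return [m for m in baseline
            if abs(current.get(m, baseline[m]) - baseline[m]) > threshold]

baseline = {"accuracy": 0.92, "false_positive_rate": 0.03}
current = {"accuracy": 0.85, "false_positive_rate": 0.04}
print(material_changes(baseline, current))  # ['accuracy']
```

Here a 7-point accuracy drop crosses the assumed threshold and triggers the reporting duty, while the small false-positive drift does not; a real policy would define thresholds per metric.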
Existing policies like cyber insurance fail to cover AI not just because of ambiguous wording, but because their underwriting processes never asked about AI systems, governance, or testing. The risk was therefore never assessed, priced, or intentionally covered.
Unlike traditional business insurance, AI risk isn't tied to a company's revenue. A small startup deploying a hiring tool at a single Fortune 500 company can carry far more liability exposure than a larger company running a low-risk internal AI tool. Pricing must reflect this deployment-specific risk profile.
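The startup-versus-incumbent contrast can be made concrete with a toy pricing function in which premium scales with exposed limit and system risk rather than vendor revenue. Every number here (the base rate, limits, and risk scores) is a made-up assumption for illustration.

```python
# Hypothetical premium sketch: price follows deployment exposure
# (per-deployment limit x number of deployments) scaled by risk,
# not the vendor's size or revenue. All parameters are illustrative.
def annual_premium(per_deployment_limit: float,
                   n_deployments: int,
                   risk_score: float,
                   base_rate: float = 0.01) -> float:
    """Premium = total exposed limit x base rate, loaded by risk score."""
    exposure = per_deployment_limit * n_deployments
    return round(exposure * base_rate * (1 + risk_score), 2)

# A small startup with one high-stakes Fortune 500 deployment...
startup = annual_premium(10_000_000, 1, risk_score=0.6)
# ...versus a larger firm with many low-risk internal deployments.
big_co = annual_premium(100_000, 20, risk_score=0.1)
print(startup, big_co)  # 160000.0 22000.0
```

Despite being the smaller company, the startup's concentrated, high-stakes exposure yields the larger premium, which is the point the paragraph makes.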
