AI and big data give insurers increasingly precise information about individual risk. As prediction approaches perfection, the concept of insurance as risk-pooling breaks down: if an insurer knows your house will burn down and charges a premium equal to the full loss, you're no longer insured; you're just pre-paying for the disaster.
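The arithmetic behind this can be sketched with invented numbers (1,000 homes, 10 fires, a $300,000 loss each are illustrative assumptions, not figures from the text):

```python
# Sketch (hypothetical numbers): how perfect risk prediction
# collapses insurance into pre-payment.
homes = 1000
loss = 300_000   # value destroyed per fire
fires = 10       # exactly 10 homes will burn

# Pooled pricing: nobody knows whose house burns, so everyone
# shares the expected loss.
pooled_premium = fires * loss / homes   # $3,000 per home

# Perfect prediction: the insurer knows which 10 homes burn.
# Those owners are charged the full loss; everyone else pays ~0.
doomed_premium = loss   # $300,000 -- pre-paying for the disaster
safe_premium = 0

print(pooled_premium)   # 3000.0
print(doomed_premium)   # 300000
```

The pooled premium spreads uncertainty; once uncertainty disappears, each "premium" is just the owner's own loss handed back to them.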
The common analogy of AI to electricity is dangerously rosy. AI is more like fire: a transformative tool that, if mismanaged or weaponized, can spread uncontrollably with devastating consequences. This mental model better prepares us for AI's inherent risks and accelerating power.
Max Levchin argues that any single data point that seems to dramatically improve underwriting accuracy is a red herring: such 'magic bullets' are brittle and fail when market conditions shift. A robust risk model instead aggregates small lifts from many subtle factors.
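A toy illustration of that aggregation argument (the lift figures are invented for the sketch, not Levchin's actual numbers): many small, roughly independent odds lifts compound to match one big signal, and the aggregate degrades gracefully if some signals go stale.

```python
import math

# One strong "magic bullet" feature: a big lift that vanishes
# entirely when the market shifts and the signal goes stale.
magic_bullet_lift = 3.0
magic_bullet_after_shift = 1.0   # no edge left

# Twenty subtle features, each nudging the odds by ~6%.
small_lifts = [1.06] * 20
aggregate = math.prod(small_lifts)          # ~3.2x combined

# If a shift kills a quarter of the small signals,
# most of the edge survives.
aggregate_after_shift = math.prod([1.06] * 15)   # ~2.4x

print(round(aggregate, 2))              # 3.21
print(round(aggregate_after_shift, 2))  # 2.4
```

The robustness comes from diversification: no single factor carries enough weight for its failure to sink the model.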
AI's core strength is hyper-sophisticated pattern recognition. If your daily tasks—from filing insurance claims to diagnosing patients—can be broken down into a data set of repeatable patterns, AI can learn to perform them faster and more accurately than a human.
For specialized, high-stakes tasks like insurance underwriting, enterprises will favor smaller, on-prem models fine-tuned on proprietary data. These models can be faster, more accurate, and more secure than general-purpose frontier models, creating a lasting market for custom AI solutions.
As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
For current AI valuations to be realized, AI must deliver unprecedented efficiency, likely causing mass job displacement. This would disrupt the consumer economy that supports these companies, creating a fundamental contradiction where the condition for success undermines the system itself.
Instead of replacing humans, Aviva uses AI to anticipate *why* a customer is calling about a claim. The agent receives this prediction and relevant data upfront, skipping lengthy verification and improving the customer experience.
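The pattern described above can be sketched as a simple event-to-intent lookup that runs before the agent picks up. Everything here is hypothetical (the event names, rules, and function are invented; Aviva's actual system is not public):

```python
# Hypothetical sketch: map a customer's recent account events to a
# likely call intent so the agent sees relevant data upfront.
RECENT_EVENT_TO_INTENT = {
    "claim_submitted": "check claim status",
    "payment_failed": "resolve failed payment",
    "renewal_notice_sent": "discuss renewal price",
}

def predict_call_intent(recent_events: list[str]) -> str:
    """Return the most likely reason for the call, most recent event first."""
    for event in recent_events:
        if event in RECENT_EVENT_TO_INTENT:
            return RECENT_EVENT_TO_INTENT[event]
    return "unknown -- route to general queue"

# A customer who just filed a claim is probably calling about it.
print(predict_call_intent(["claim_submitted", "renewal_notice_sent"]))
# -> check claim status
```

A production system would presumably use a learned model rather than hand-written rules, but the payoff is the same: the prediction, plus the pre-fetched claim data, lets the agent skip lengthy verification.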
Insurers like AIG are seeking to exclude liabilities arising from AI use, such as deepfake scams or chatbot errors, from standard corporate policies. This forces businesses to either purchase expensive, capped add-ons or assume a significant new category of uninsurable risk.
Insurers like Aviva are finding it increasingly difficult to price risk for predictable climate-related catastrophes, such as houses repeatedly built on known floodplains. The near-inevitability of these events makes them uninsurable, prompting the creation of hybrid government-backed schemes where the private market can no longer operate.
Technological advancement, particularly in AI, moves faster than legal and social frameworks can adapt. This creates 'lawless spaces,' akin to the Wild West, where powerful new capabilities exist without clear rules or recourse for those negatively affected. This leaves individuals vulnerable to algorithmic decisions about jobs, loans, and more.