We scan new podcasts and send you the top 5 insights daily.
Lloyd Blankfein notes that technology's leverage creates unprecedented risk. A single software bug can cause billions in losses instantly. This is a new class of risk, analogous to the difference between a traditional industrial accident (Bhopal) and a nuclear meltdown (Fukushima).
Unlike prior tech revolutions funded mainly by equity, the AI infrastructure build-out is increasingly reliant on debt. This blurs the line between speculative growth capital (equity) and financing for predictable cash flows (debt), magnifying potential losses and increasing systemic failure risk if the AI boom falters.
The SVB crisis wasn't a traditional bank run caused by bad loans. It was the first instance in which the speed of digital fund transfers outpaced regulators' ability to react, turning a manageable asset-liability mismatch into a systemic crisis. This highlights a new type of technological 'tail risk' for modern banking.
Jones warns that the standard tech development model ('build, break, iterate') is catastrophic when applied to AI. Unlike other technologies, AI's tail risk could involve billions of lives, yet the field operates with virtually no formal risk management, a sharp contrast to the discipline required in financial markets.
According to Andrew Ross Sorkin, while bad actors and speculation are always present, the single element that transforms a market downturn into a systemic financial crisis is excessive leverage. Without it, the system can absorb shocks; with it, a domino effect is inevitable, making guardrails against leverage paramount.
Widespread credit is the common accelerant in major financial crashes, from 1929's margin loans to 2008's subprime mortgages. This same leverage that fuels rapid growth is also the "match that lights the fire" for catastrophic downturns, with today's AI ecosystem showing similar signs.
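The amplifying effect of leverage is simple arithmetic. A minimal sketch (the balance-sheet numbers are illustrative, not from the episode):

```python
def equity_after_shock(assets: float, leverage: float, asset_drop_pct: float) -> float:
    """Return remaining equity after assets fall by asset_drop_pct.

    leverage = assets / equity. Debt is fixed, so losses hit equity first.
    """
    equity = assets / leverage
    debt = assets - equity
    new_assets = assets * (1 - asset_drop_pct)
    return new_assets - debt

# Unlevered: a 10% drop in $100 of assets costs 10% of equity.
print(equity_after_shock(100, 1, 0.10))   # 90.0 of the original 100 remains
# At 10x leverage, the same 10% drop wipes out equity entirely.
print(equity_after_shock(100, 10, 0.10))  # 0.0
```

The same move in asset prices that an unlevered holder shrugs off leaves the 10x-levered holder insolvent, which is why forced selling cascades once leverage is widespread.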
Even if malicious actors are rare, technology exponentially increases the "amplitude" or scale of damage a single person can cause. Simultaneously, our ability to control individuals is decreasing. This creates a dangerous asymmetry where one person can cause catastrophic harm.
Blankfein believes the biggest technological threat isn't a sophisticated cyberattack but a simple human mistake amplified by technological leverage. He warns that adding more layers of checks can create complacency, paradoxically making such an error more likely to slip through.
The systemic risk from a major AI company failing isn't the loss of its technology. It's the potential for its debt default to cascade through an opaque network of private credit and other lenders, triggering a financial crisis.
During crises, Blankfein’s team ignored predictions about likely outcomes. Instead, they focused exclusively on identifying all possible (even low-probability) negative events and creating contingency plans. This readiness allowed them to react faster than competitors when a tail risk event actually occurred.
Insurers can price a single large loss. What they cannot price is a single AI model, deployed by thousands of customers, having a flaw that leads to thousands of simultaneous claims. This "systemic, correlated" risk could bankrupt an insurer.
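The distinction can be made concrete with a small Monte Carlo sketch. All parameters here are hypothetical: the two scenarios have the same expected annual loss, but when every policyholder shares one AI model's flaw, the tail loss is the entire book at once.

```python
import random

random.seed(0)

def tail_year_loss(n_customers=1000, p_event=0.05, loss_per_claim=1.0,
                   correlated=True, trials=2000):
    """Monte Carlo sketch of the 99th-percentile annual loss.

    correlated=True: one shared model flaw hits every customer at once.
    correlated=False: each customer fails independently at the same rate.
    Expected loss is identical in both regimes; only the tail differs.
    """
    losses = []
    for _ in range(trials):
        if correlated:
            # A single draw decides the whole book: all claims or none.
            hit = random.random() < p_event
            losses.append(n_customers * loss_per_claim if hit else 0.0)
        else:
            hits = sum(random.random() < p_event for _ in range(n_customers))
            losses.append(hits * loss_per_claim)
    losses.sort()
    return losses[int(0.99 * trials)]

print(tail_year_loss(correlated=False))  # near the 50-claim mean
print(tail_year_loss(correlated=True))   # the entire book at once: 1000.0
```

An insurer can reserve against the first number; reserving against the second means holding capital equal to its entire exposure, which is no longer insurance economics.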