Our brains evolved for a world of linear change, not exponential curves. This cognitive blind spot leads us to underestimate both threats, like a spreading virus, and opportunities, like compound interest, because in the short term exponential growth looks almost linear.
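A quick numerical sketch makes that gap concrete; the 5% daily growth rate and the linear extrapolation from the first week are illustrative assumptions, not figures from the source:

```python
# Illustrative only: how a "linear thinker" who extrapolates from early data
# drifts ever further from an exponential process.
growth_rate = 0.05   # assumed 5% compounding per day
start = 100.0

def exponential(day):
    return start * (1 + growth_rate) ** day

# Linear guess: extrapolate the average daily change seen in the first week.
daily_slope = (exponential(7) - start) / 7

for day in (7, 30, 90, 180):
    exp_val = exponential(day)
    lin_val = start + daily_slope * day
    print(f"day {day:>3}: exponential = {exp_val:>10.0f}   linear guess = {lin_val:>6.0f}")
```

At day 7 the two views agree by construction; by day 180 the exponential value is several hundred times the linear projection.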
The common analogy of AI to electricity is dangerously rosy. AI is more like fire: a transformative tool that, if mismanaged or weaponized, can spread uncontrollably with devastating consequences. This mental model better prepares us for AI's inherent risks and accelerating power.
Coined by statistician I. J. Good in 1965, the "intelligence explosion" describes a runaway feedback loop. An AI capable of conducting AI research could use its intelligence to improve itself; that enhanced intelligence would make it even better at AI research, leading to exponential, uncontrollable growth in capability. This "fast takeoff" could leave humanity far behind in a very short period.
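A toy simulation of the loop, assuming (purely for illustration) that each research cycle's gain is proportional to the system's current capability; the constants are arbitrary:

```python
# Toy model of recursive self-improvement: the better the system is at AI
# research, the larger its next self-improvement, so capability compounds.
capability = 1.0        # current research ability, arbitrary units
improvement_rate = 0.5  # assumed fraction of capability converted into new capability per cycle

for cycle in range(1, 11):
    capability += improvement_rate * capability
    print(f"cycle {cycle:2d}: capability = {capability:8.2f}")
```

Even with these modest assumptions, capability grows more than fifty-fold in ten cycles, which is the intuition behind a "fast takeoff."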
Leaders often conflate seeing a risk with understanding it. In 2020, officials saw COVID-19 but didn't understand its airborne spread. Conversely, society understands the risk of drunk driving but fails to see it most of the time. Truly managing risk requires addressing both visibility and comprehension.
The surprisingly smooth, exponential trend in AI capabilities is viewed as more than just a technical machine learning phenomenon. It reflects broader economic dynamics, such as competition between firms, resource allocation, and investment cycles. This economic underpinning suggests the trend may be more robust and systematic than if it were based on isolated technical breakthroughs alone.
Humans naturally project the future in a straight line, but disruptive innovations, Tesla's growth being one example, follow exponential curves. Progress seems slow, then explodes, catching linear thinkers by surprise after the biggest investment gains have already been made and creating a gap between perception and reality.
With past shifts like the internet or mobile, we understood the physical constraints (e.g., modem speeds, battery life). With generative AI, we lack a theoretical understanding of its scaling potential, making it impossible to forecast its ultimate capabilities beyond "vibes-based" guesses from experts.
The world has never been truly deterministic, but slower cycles of change made deterministic thinking a less costly error. Today, the rapid pace of technological and social change means that acting as if the world is predictable gets punished much more quickly and severely.
Seemingly sudden crashes in tech and markets are not abrupt events but the result of "interpretation debt," which builds when a system's output capability grows faster than the collective ability to understand, review, and trust it, quietly eroding confidence long before the visible break.
To grasp AI's potential impact, imagine compressing 100 years of progress (1925-2025)—from atomic bombs to the internet and major social movements—into ten years. Human institutions, which don't speed up, would face enormous challenges, making high-stakes decisions on compressed, crisis-level timelines.
AI struggles with tasks requiring long and wide context, like software engineering. Because the compute needed by standard attention grows quadratically rather than linearly with context length, it cannot effectively manage the complex interdependencies of large projects.
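A minimal sketch of why context is costly, assuming standard self-attention in which every token is compared against every other token; the token counts below are arbitrary:

```python
# In standard self-attention, each token attends to every other token, so the
# number of pairwise comparisons grows with the square of the context length.
def attention_pairs(context_len: int) -> int:
    return context_len * context_len

for tokens in (1_000, 10_000, 100_000):
    print(f"{tokens:>7,} tokens -> {attention_pairs(tokens):>18,} pairwise comparisons")
```

A 10x longer context means roughly 100x more comparisons per layer, which is why holding an entire large codebase in context remains expensive.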