The 'Precautionary Principle' in Regulation Kills Foundational Innovation

Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.

Related Insights

Corporate creativity follows a bell curve. Early-stage companies and those facing catastrophic failure (the tails) are forced to innovate. Most established companies sit in the middle, where the safety of repeating proven playbooks crowds out true risk-taking.

Instead of defaulting to skepticism and looking for reasons why something won't work, the most productive starting point is to imagine how big and impactful a new idea could become. After exploring the optimistic case, you can then systematically address and mitigate the risks.

New technologies perceived as job-destroying, like AI, face significant public and regulatory risk. A powerful defense is to make the general public owners of the technology. When people have a financial stake in a technology's success, they are far more likely to defend it than fight against it.

Many top AI CEOs openly acknowledge extinction-level risks from their work, with some putting the odds as high as 25%. Yet they feel powerless to stop the race: if a CEO paused for safety, investors would simply replace them with someone willing to push forward. The result is a systemic trap in which everyone sees the danger but no one can afford to hit the brakes.

A ban on superintelligence is self-defeating because enforcement would require a sanctioned global government body to build the very technology it prohibits in order to "prove it's safe." This paradoxically creates a state-controlled monopoly on the most powerful technology ever conceived, posing a greater risk than a competitive landscape.

Innovation doesn't happen without risk-taking. What we call speculation is the essential fuel that allows groundbreaking ideas, like those of Elon Musk, to get funded and developed. Speculative bubbles are dangerous, but eliminating them entirely would also stifle world-changing progress.

The most profound innovations in history, like vaccines, PCs, and air travel, distributed their value broadly across society rather than concentrating it in a few corporations. AI could follow the same pattern, benefiting the public more than a handful of tech giants, especially as geopolitical pressure pushes the technology toward commoditization.

AI will create negative consequences, just as the internet spawned the dark web. However, its potential to solve major problems like disease and energy scarcity makes its development a net positive for society, justifying the risks that must be managed along the way.

Afeyan distinguishes risk (known probabilities) from uncertainty (unknown probabilities). Since breakthrough innovation deals with the unknown, traditional risk/reward models fail. The correct strategy is not to mitigate risk but to pursue multiple, diverse options to navigate uncertainty.
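One way to make that distinction concrete is a toy simulation. The sketch below is illustrative only, not from the source; the uniform distribution, the numbers, and the function name are assumptions. It models each venture's success probability as itself unknown, and shows why spreading effort across several diverse options beats optimizing a single bet when probabilities can't be known in advance:

```python
# Toy Monte Carlo (illustrative assumptions throughout): under "risk" the
# success probability of a bet is known; under "uncertainty" it is not.
# Here each bet's true probability is drawn from an assumed uniform range.
import random

def p_at_least_one_success(num_bets: int, trials: int = 100_000) -> float:
    """Estimate P(>= 1 success) across `num_bets` parallel, diverse bets,
    where each bet's true success probability is unknown in advance
    (modeled here, arbitrarily, as uniform on [0, 0.2])."""
    hits = 0
    for _ in range(trials):
        # Each parallel bet faces its own, independently unknown probability.
        if any(random.random() < random.uniform(0.0, 0.2) for _ in range(num_bets)):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for k in (1, 5, 20):
        print(f"{k:>2} diverse bets -> P(at least one success) ≈ {p_at_least_one_success(k):.2f}")
```

With these assumed numbers, a single bet pays off roughly 10% of the time, five diverse bets about 40%, and twenty nearly 90%. No amount of per-bet risk modeling achieves this, because the individual probabilities are unknowable up front.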

An anonymous CEO of a leading AI company told Stuart Russell that a massive disaster is the *best* possible outcome. They believe it is the only event shocking enough to force governments to finally implement meaningful safety regulations, which they currently refuse to do despite private warnings.
