A regulator who approves a new technology that later fails faces immense public backlash and career ruin; a successful approval, by contrast, earns little credit. This asymmetric risk profile creates a powerful incentive to deny or delay new innovations, preserving the status quo regardless of potential benefits.
New technologies perceived as job-destroying, like AI, face significant public and regulatory risk. A powerful defense is to make the general public owners of the technology. When people have a financial stake in a technology's success, they are far more likely to defend it than fight against it.
Leading AI companies allegedly stoke fears of existential risk not for safety, but as a deliberate strategy to achieve regulatory capture. By promoting scary narratives, they advocate for complex pre-approval systems that would create insurmountable barriers for new startups, cementing their own market dominance.
Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.
Construction projects have limited upside (e.g., 10-15% under budget) but massive downside (100-300%+ over budget). This skewed risk profile rationally incentivizes builders to stick with predictable, traditional methods rather than adopt new technologies that could lead to catastrophic overruns.
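The asymmetry above can be made concrete with a back-of-the-envelope expected-cost calculation. All probabilities and multipliers below are illustrative assumptions, not figures from the source; the point is only that a fat downside tail can make a usually-cheaper new method more expensive in expectation.

```python
# Hypothetical expected-cost sketch of the asymmetric construction risk
# profile described above. Numbers are illustrative assumptions.

def expected_cost(budget, outcomes):
    """outcomes: list of (probability, cost_multiplier) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * m * budget for p, m in outcomes)

budget = 10_000_000  # $10M project

# Traditional method: predictable, modest variance around budget.
traditional = [(0.2, 0.90), (0.6, 1.00), (0.2, 1.15)]

# New technology: usually a bit cheaper, but a 10% chance of a 3x blowout.
new_tech = [(0.5, 0.85), (0.4, 1.00), (0.1, 3.00)]

print(expected_cost(budget, traditional))  # ~ $10.1M
print(expected_cost(budget, new_tech))     # ~ $11.25M
```

Even though the new method beats the traditional one in 90% of scenarios here, the rare catastrophic overrun dominates the average, so a risk-neutral builder rationally declines it.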
Silicon Valley's economic engine is "permissionless innovation"—the freedom to build without prior government approval. Proposed AI regulations requiring pre-approval for new models would dismantle this foundation, favoring large incumbents with lobbying power and stifling the startup ecosystem.
An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.
Product managers at large AI labs are incentivized to ship safe, incremental features rather than risky, opinionated products. This structural aversion to risk creates a permanent market opportunity for startups to build bold, niche applications that incumbents are organizationally unable to pursue.
The public holds new technologies to a far higher safety standard than human performance. Waymo could deploy cars that are statistically safer than human drivers, but society would not accept machines killing tens of thousands of people annually, even if that toll were an improvement over the status quo. This demonstrates the need for near-perfection in high-stakes tech launches.
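The Waymo point can be put in rough numbers. The ~40,000 figure is a commonly cited approximation of annual US road deaths, and the 2x safety factor is a hypothetical assumption, not a claim from the source:

```python
# Back-of-the-envelope arithmetic for the point above. Both inputs are
# illustrative assumptions.
human_deaths_per_year = 40_000   # approximate annual US road deaths
av_safety_multiplier = 2         # assume AVs are twice as safe per mile

av_deaths_if_universal = human_deaths_per_year / av_safety_multiplier
print(av_deaths_if_universal)    # 20000.0 -- a huge statistical win,
                                 # yet 20,000 machine-caused deaths a year
                                 # would likely be politically unacceptable
```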
The public sector's aversion to risk is driven by the constant external threat of audits and public hearings from bodies like the GAO and Congress. This compliance-focused environment stifles innovation and discourages the measured risk-taking necessary to attract modern tech talent, who thrive on cutting-edge work.
People resist new initiatives because the "switching costs" (effort, money, time) are felt upfront and are guaranteed. In contrast, the potential benefits are often far in the future and not guaranteed. This timing and certainty gap creates a powerful psychological bias for the status quo.
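The timing-and-certainty gap can be sketched as a present-value comparison: a guaranteed cost paid today versus an uncertain benefit discounted over several years. The discount rate, success probability, and amounts below are illustrative assumptions:

```python
# Sketch of the switching-cost bias described above: a certain upfront cost
# vs a larger but delayed and uncertain benefit. All inputs are hypothetical.

def present_value(amount, rate, years):
    """Discount a future amount back to today at a fixed annual rate."""
    return amount / (1 + rate) ** years

switching_cost = 100      # paid now, guaranteed
benefit = 150             # nominally larger than the cost...
p_success = 0.7           # ...but not guaranteed to materialize
years_until_payoff = 3
rate = 0.10               # annual discount rate

expected_pv_benefit = p_success * present_value(benefit, rate, years_until_payoff)
print(round(expected_pv_benefit, 2))  # 78.89 -- less than the upfront 100
```

A benefit 50% larger than the cost still loses once delay and uncertainty are priced in, which is the status-quo bias in miniature.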