Sports regulators tolerated Speedo's polyurethane swimsuit until it and its successors led to an "unbelievable rate" of broken records (147 in 2009 alone). It was the sheer velocity of improvement, which felt jarring and unnatural, that prompted the complete ban, not the initial innovation itself.

Related Insights

Society rarely bans powerful new technologies, no matter how dangerous. Instead, as with fire, we develop systems to manage the risk (e.g., fire departments, alarms). This provides a historical lens for current debates around transformative technologies like AI, suggesting adaptation over prohibition.

Initial public fear over new technologies like AI therapy, while seemingly negative, is actually productive. It creates the social and political pressure needed to establish essential safety guardrails and regulations, ultimately leading to safer long-term adoption.

The pace of AI-driven innovation has accelerated so dramatically that marginal improvements are quickly rendered obsolete. Founders must pursue ideas that offer an order-of-magnitude change to their industry, as anything less will be overtaken by the next wave of technology.

A regulator who approves a new technology that fails faces immense public backlash and career ruin. Conversely, they receive little glory for a success. This asymmetric risk profile creates a powerful incentive to deny or delay new innovations, preserving the status quo regardless of potential benefits.

Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.

Speedo exploited World Aquatics' lenient definition of "fabric" to incorporate polyurethane panels into its LZR Racer swimsuit. This seemingly small loophole allowed for a game-changing product that created less drag, giving swimmers a significant advantage, and forcing the entire industry and its regulators to react.

The public holds new technologies to a much higher safety standard than human performance. Waymo could deploy cars that are statistically safer than human drivers, but society would not accept them killing tens of thousands of people annually, even if it's an improvement. This demonstrates the need for near-perfection in high-stakes tech launches.

The history of nuclear power, where regulation transformed an exponential growth curve into a flat S-curve, serves as a powerful warning for AI. This suggests that AI's biggest long-term hurdle may not be technical limits but regulatory intervention that stifles its potential for a "fast takeoff," effectively regulating it out of rapid adoption.

The moment an industry organizes in protest against an AI technology, it signals that the technology has crossed a critical threshold of quality. The fear and backlash are a direct result of the technology no longer being a gimmick, but a viable threat to the status quo.

The popular Silicon Valley mantra "move fast and break things" often masks a willingness to create negative externalities for others—be it other businesses, users, or even legal frameworks. It serves as a permission slip to avoid the hard work of considering consequences.