Instead of relying on slow government action, society can self-regulate harmful technologies by developing cultural "antibodies." Just as social pressure made smoking and junk food socially unacceptable, a similar collective shift can impose reputational and social costs on entrepreneurs building socially negative products like sex bots.

Related Insights

Rather than government regulation, market forces will address AI bias. As studies reveal biases in models from OpenAI and Google, competitors such as Elon Musk's xAI can market Grok's neutrality as a key selling point, attracting users and forcing the entire market to improve.

Broad, high-level statements calling for an AI ban are not intended as draft legislation but as tools to build public consensus. This strategy mirrors past social movements, where achieving widespread moral agreement on a vague principle (e.g., against child pornography) was a necessary precursor to creating detailed, expert-crafted laws.

Society rarely bans powerful new technologies, no matter how dangerous. Instead, as with fire, we develop systems to manage risk (e.g., fire departments, alarms). This provides a historical lens for current debates around transformative technologies like AI, suggesting adaptation over prohibition.

While government regulation might seem simpler, Susan Wojcicki suggests it would be too slow to address rapidly evolving threats like new COVID-19 conspiracies. She argues that a private company can make more detailed, fine-grained policy decisions much faster than a legislative body could, framing self-regulation as a matter of speed and specificity.

Initial public fear over new technologies like AI therapy, while seemingly negative, is actually productive. It creates the social and political pressure needed to establish essential safety guardrails and regulations, ultimately leading to safer long-term adoption.

Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, who must demonstrate their systems cannot cause these predefined harms, and it sidesteps definitional debates entirely.

A regulator who approves a new technology that fails faces immense public backlash and career ruin. Conversely, they receive little glory for a success. This asymmetric risk profile creates a powerful incentive to deny or delay new innovations, preserving the status quo regardless of potential benefits.

Demis Hassabis argues that market forces will drive AI safety. As enterprises adopt AI agents, their demand for reliability and safety guardrails will commercially penalize 'cowboy operations' that cannot guarantee responsible behavior. This will naturally favor more thoughtful and rigorous AI labs.

The most significant barrier to creating a safer AI future is the pervasive narrative that its current trajectory is inevitable. The logic of "if I don't build it, someone else will" creates a self-fulfilling prophecy of recklessness, preventing the collective action needed to steer development.

The UK's strategy of criminalizing specific harmful AI outcomes, like non-consensual deepfakes, is more effective than the EU AI Act's approach of regulating model size and development processes. Focusing on harmful outcomes is a more direct way to mitigate societal damage.