Drawing on his Meta experience, Nick Clegg warns that AI leaders will become permanent fixtures in Washington D.C. hearings if they don't solve age-gating before launching adult-oriented AI features. The societal backlash is guaranteed and will be more intense than it was for social media.
The political landscape for AI is not a simple binary. Policy expert Dean Ball identifies three key factions: AI safety advocates, a pro-AI industry camp, and an emerging "truly anti-AI" group. The decisive factor will be which direction the moderate "consumer protection" and "kids safety" advocates lean.
Following Australia's recent law banning social media for users under 16, Europe is now considering similar legislation. This signals a potential worldwide regulatory shift towards stricter age-gating, which could fundamentally alter user acquisition and marketing strategies for platforms and teen-focused brands.
The discourse around AI risk has matured beyond sci-fi scenarios like Terminator. The focus is now on immediate, real-world problems such as AI-induced psychosis, the impact of AI romantic companions on birth rates, and the spread of misinformation, requiring a different approach from builders and policymakers.
An OpenAI investor call revealed that "time spent" on ChatGPT declined due to content restrictions. The subsequent decision to allow erotica is not just a policy shift but a direct strategic response aimed at stimulating user engagement and reversing the negative trend.
AI is experiencing a political backlash from day one, unlike social media's long "honeymoon" period. This is largely self-inflicted, as industry leaders like Sam Altman have used apocalyptic, "it might kill everyone" rhetoric as a marketing tool, creating widespread fear before the benefits are fully realized.
Former Meta exec Nick Clegg warns that AI's intimate nature means any failure to protect minors from adult content will trigger a societal backlash far larger than what social media faced. The technology for reliable age verification is not yet mature enough to mitigate this risk.
AI analyst Johan Falk argues that the emotional and social harms of AI companions are poorly understood and potentially severe, citing risks that extend beyond extreme cases like suicide. He advocates prohibiting use by those under 18 until the psychological impacts are better researched.
The rush to integrate generative AI into toys has created severe, unforeseen risks beyond simple malfunctions. AI-powered toys have given children dangerous advice (about knives and matches), raised privacy concerns, and in some cases, have even been found to be pitching Chinese state propaganda.
OpenAI is relaxing ChatGPT's restrictions, allowing verified adults to access mature content and customize its personality. This marks a significant policy shift from broad safety guardrails to user choice, acknowledging that adults want more freedom in how they interact with AI, even for sensitive topics like erotica.
As AI becomes more sophisticated, users will form deep emotional dependencies. This creates significant psychological and ethical dilemmas, especially for vulnerable users like teens, which AI companies must proactively and conservatively manage, even when facing commercial pressures.