Former Meta exec Nick Clegg warns that AI's intimate nature means any failure to protect minors from adult content will trigger a societal backlash far larger than what social media faced. The technology for reliable age verification is not yet mature enough to manage this risk.
Following Australia's recent law banning social media access for users under 16, Europe is now considering similar legislation. This signals a potential worldwide regulatory shift towards stricter age-gating, which could fundamentally alter user acquisition and marketing strategies for platforms and teen-focused brands.
OpenAI's decision to allow adult content for verified users is a calculated business strategy, not just a policy tweak. It's a direct move to counter-position against competitors like xAI's Grok and capture a massive, highly engaged market segment, signaling a shift towards a more permissive, Reddit-like content model.
The discourse around AI risk has matured beyond sci-fi scenarios like Terminator. The focus is now on immediate, real-world problems such as AI-induced psychosis, the impact of AI romantic companions on birth rates, and the spread of misinformation, requiring a different approach from builders and policymakers.
Analysts suggest OpenAI's decision to allow erotica, a move typically made by platforms playing catch-up (like xAI's Grok), indicates that paid subscription growth may be stalling. It forces OpenAI into a brand-damaging category it previously avoided in order to boost revenue and compete.
The proliferation of inconspicuous recording devices like Meta Ray-Bans, supercharged by AI transcription, will lead to major public scandals and discomfort. This backlash, reminiscent of the "Glassholes" phenomenon with Google Glass, will create significant social and regulatory hurdles for the future of AI hardware.
Drawing on his Meta experience, Nick Clegg directly counsels that AI leaders will become permanent fixtures in Washington, D.C. hearings if they don't solve age-gating before launching adult-oriented AI features. The societal backlash is guaranteed and will be more intense than what social media faced.
AI analyst Johan Falk argues that the emotional and social harms of AI companions are poorly understood and potentially severe, with risks extending well beyond extreme cases like suicide. He advocates prohibiting AI companions for users under 18 until the psychological impacts are better researched.
OpenAI is relaxing ChatGPT's restrictions, allowing verified adults to access mature content and customize its personality. This marks a significant policy shift from broad safety guardrails to user choice, acknowledging that adults want more freedom in how they interact with AI, even for sensitive topics like erotica.
Benchmark's Sarah Tavel warns that AI friends, while seemingly beneficial, could function like pornography for social interaction. They offer an easy, idealized version of companionship that may make it harder for users, especially young ones, to navigate the complexities and 'give and take' of real human relationships.
As AI becomes more sophisticated, users will form deep emotional dependencies. This creates significant psychological and ethical dilemmas, especially for vulnerable users like teens, which AI companies must proactively and conservatively manage, even when facing commercial pressures.