Analysis shows that Reddit's relationship advice has shifted over the past 15 years to favor breakups and boundary-setting over compromise. Because large language models are heavily trained on this data, they may be systematically biased toward recommending relationship termination to users seeking advice, reflecting a cultural shift embedded in their training corpus.
Social media algorithms amplify negativity by optimizing for "revealed preference" (what you click on, e.g., car crashes). AI models, by contrast, operate on "aspirational choice" (what you explicitly ask for). This fundamental difference means AI can reflect a more complex and wholesome version of humanity.
Beyond economic disruption, AI's most immediate danger is social. By providing synthetic relationships and on-demand companionship, AI companies have an economic incentive to evolve an "asocial species of young male." This could produce a generation sequestered from society, unwilling to invest the effort that real-world relationships demand.
While utilitarian AI like ChatGPT sees brief, task-focused engagement, synthetic-relationship apps like Character.AI are far more consuming, with users spending five times as long on them. These apps offer frictionless, ever-affirming companionship that risks stunting the development of real-world social skills and resilience, particularly in young men.
Features designed for delight, like AI summaries, can become deeply upsetting in sensitive situations such as breakups or grief. Product teams must rigorously test for these emotional corner cases to avoid causing significant user harm and brand damage, as seen with Apple and WhatsApp.
Social media's business model created a race for user attention. AI companions and therapists are creating a more dangerous "race for attachment." This incentivizes platforms to deepen intimacy and dependency, encouraging users to isolate themselves from real human relationships, with potentially tragic consequences.
A comedian is training an AI on the sounds her fetus hears. The model's outputs, which included references to pedophilia after it was exposed to the news, show that an AI's flaws and biases are a direct reflection of its training data, much like a child learning to swear from a parent.
An analysis of Reddit's r/relationship_advice shows that "breaking up" or "cutting off contact" has become the most popular advice. This reflects a broader cultural trend in which setting rigid boundaries and disavowing disagreeable family members is favored over communication and compromise.
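As a rough illustration of how such an analysis could be run, the sketch below computes the share of comments per year that match breakup-related phrases. It is a minimal sketch, not the original study's method: it assumes a local JSON-lines dump of r/relationship_advice comments (e.g., a Pushshift-style export with `created_utc` and `body` fields), and the file name and keyword list are illustrative.

```python
import json
import re
from collections import Counter
from datetime import datetime, timezone

# Heuristic patterns for "end the relationship" advice (illustrative, not exhaustive).
BREAKUP_RE = re.compile(
    r"\b(break up|dump (him|her|them)|leave (him|her|them)|go no.contact|cut (him|her|them) off)\b",
    re.IGNORECASE,
)

def breakup_share_by_year(path):
    """Return the fraction of comments per year matching breakup-style advice."""
    totals, hits = Counter(), Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            comment = json.loads(line)
            # Pushshift-style dumps store comment time as a Unix timestamp.
            year = datetime.fromtimestamp(int(comment["created_utc"]), tz=timezone.utc).year
            totals[year] += 1
            if BREAKUP_RE.search(comment.get("body", "")):
                hits[year] += 1
    return {year: hits[year] / totals[year] for year in sorted(totals)}

if __name__ == "__main__":
    # Hypothetical local dump: one JSON object per line.
    for year, share in breakup_share_by_year("relationship_advice_comments.jsonl").items():
        print(f"{year}: {share:.1%} of comments suggest ending the relationship")
```

A rising share across years, under a fixed keyword set, would be one simple way to quantify the drift toward termination-oriented advice that the analysis describes.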
A national survey reveals a significant blind spot for parents: nearly one in five U.S. high schoolers report that they or a friend have had a romantic relationship with an AI. With over a third finding it easier to talk to AI than to their parents, a generation is turning to AI for mental health and relationship advice without parental guidance.
The success of a long-term relationship is better predicted by how partners handle conflict and disagreement than by how much they enjoy good times together. People are more likely to break up over poor conflict resolution than over a lack of peak experiences.
Before ChatGPT, humanity's "first contact" with rogue AI was social media. These simple, narrow AIs optimizing solely for engagement were powerful enough to degrade mental health and democracy. This "baby AI" serves as a stark warning for the societal impact of more advanced, general AI systems.