Social media algorithms amplify negativity by optimizing for "revealed preference" (what you click on, e.g., car crashes). AI models, however, operate on aspirational choice (what you explicitly ask for). This fundamental difference means AI can reflect a more complex and wholesome version of humanity.
The Browser Company believes the biggest AI opportunity isn't just automating tasks but leveraging the "emotional intelligence" of models. Users already turn to AI for advice and subjective reasoning. Future value will come from products that help with qualitative, nuanced decisions, moving up Maslow's hierarchy of needs.
Customizing an AI to be overly complimentary and supportive can make interacting with it more enjoyable and motivating. This fosters a user-AI "alliance," leading to better outcomes and a more effective learning experience, much like having an encouraging teacher.
Contrary to fears of a forced, automated future, AI's greatest impact will be providing "unparalleled optionality." It allows individuals to automate tasks they dislike (like reordering groceries) while preserving the ability to manually perform tasks they enjoy (like strolling through a supermarket). It's a tool for personalization, not homogenization.
As models mature, their core differentiator will become their underlying personality and values, shaped by their creators' objective functions. One model might optimize for user productivity by being concise, while another optimizes for engagement by being verbose.
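The concise-vs-verbose contrast can be made concrete as two toy objective functions. Everything below is a hypothetical illustration (the signal names, weights, and penalties are invented for this sketch, not any lab's actual training objective):

```python
def productivity_reward(answer_quality: float, response_tokens: int) -> float:
    """Toy objective: reward correct answers, penalize length.
    A model trained against this tends toward concise replies."""
    return answer_quality - 0.001 * response_tokens


def engagement_reward(answer_quality: float, response_tokens: int,
                      session_minutes: float) -> float:
    """Toy objective: reward whatever keeps the user in the session.
    A model trained against this tends toward longer, chattier replies."""
    return 0.2 * answer_quality + 0.1 * session_minutes + 0.0005 * response_tokens


# The same correct answer scores very differently under the first objective
# depending only on how long-winded it is.
concise_score = productivity_reward(answer_quality=1.0, response_tokens=100)
verbose_score = productivity_reward(answer_quality=1.0, response_tokens=2000)
```

The point of the sketch is that neither objective is "the model being smart or dumb"; the divergence in personality falls directly out of which quantity the creator chose to maximize.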
Pinterest reframed its AI goal from maximizing view time based on instinctual reactions (System 1) to promoting content based on deliberate user actions like saves (System 2). This resulted in self-help and DIY content surfacing over enraging material, making users feel better after using the platform.
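Pinterest's reframing amounts to re-weighting the ranking objective away from instinctive signals and toward deliberate ones. A minimal sketch of that idea, with hypothetical signal names and weights (not Pinterest's actual system):

```python
def rank_score(item: dict, weights: dict) -> float:
    """Score a piece of content as a weighted sum of behavioral signals."""
    return sum(weights.get(signal, 0.0) * value for signal, value in item.items())


# System 1 objective: heavily weight instinctive signals (dwell time, clicks).
engagement_weights = {"view_seconds": 1.0, "clicks": 2.0, "saves": 0.5}

# System 2 objective: heavily weight deliberate actions (saving to a board).
intent_weights = {"view_seconds": 0.1, "clicks": 0.5, "saves": 5.0}

# Hypothetical content: enraging material grabs attention but is rarely saved;
# a DIY guide gets less dwell time but is deliberately kept for later.
outrage_post = {"view_seconds": 40, "clicks": 3, "saves": 0}
diy_guide = {"view_seconds": 15, "clicks": 1, "saves": 4}
```

Under `engagement_weights` the outrage post outranks the DIY guide; under `intent_weights` the ordering flips. Same content, same users; only the objective changed.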
The best AI models are trained on data that reflects deep, subjective qualities—not just simple criteria. This "taste" is a key differentiator, influencing everything from code generation to creative writing, and is shaped by the values of the frontier lab.
Instead of hard-coding brittle moral rules, a more robust alignment approach is to build AIs that can learn to "care." This "organic alignment" emerges from relationships and valuing others, similar to how a child is raised. The goal is to create a good teammate that acts well because it wants to, not because it is forced to.
Before ChatGPT, humanity's "first contact" with rogue AI was social media. These simple, narrow AIs optimizing solely for engagement were powerful enough to degrade mental health and democracy. This "baby AI" serves as a stark warning for the societal impact of more advanced, general AI systems.
A new app that ranks restaurants based on the attractiveness of their diners, not just their food, represents a new frontier for consumer AI. This moves AI's application from purely functional tasks (finding the best meal) to subjective social and status-driven curation (finding the "hottest" scene), opening a new category of AI-powered lifestyle apps.
Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.