Human Emotions Are Evolution's Hardcoded Value Function

Emotions act as a robust, evolutionarily programmed value function guiding human decision-making. The absence of this function, as seen in cases of brain damage, leads to a breakdown in practical agency. This suggests a similar mechanism may be crucial for creating effective and stable AI agents.
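
A minimal sketch of this framing in Python (the action names and valence scores are invented): an emotion-like value function ranks candidate actions, and removing it leaves the agent with no non-arbitrary way to choose.

```python
import random

# Toy illustration; these actions and valence scores are made up.
EMOTIONAL_VALUE = {          # evolution's "hardcoded" priors, as coarse valence scores
    "approach_food":   +0.8,
    "approach_ally":   +0.5,
    "ignore":           0.0,
    "approach_threat": -0.9,
}

def choose(actions, value_fn):
    """Pick the highest-valued action; with no value function, pick arbitrarily."""
    if value_fn is None:
        # The "brain damage" case: nothing marks one option as better than another,
        # so the choice collapses to an arbitrary pick.
        return random.choice(actions)
    return max(actions, key=lambda a: value_fn.get(a, 0.0))

actions = ["approach_food", "approach_threat", "ignore"]
print(choose(actions, EMOTIONAL_VALUE))  # "approach_food"
print(choose(actions, None))             # arbitrary: practical agency breaks down
```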

Related Insights

The Browser Company believes the biggest AI opportunity isn't just automating tasks but leveraging the "emotional intelligence" of models. Users are already using AI for advice and subjective reasoning. Future value will come from products that help with qualitative, nuanced decisions, moving up Maslow's hierarchy of needs.

If an AGI is given a physical body and the goal of self-preservation, it will necessarily develop behaviors that approximate human emotions like fear and competitiveness to navigate threats. This makes conflict an emergent and unavoidable property of embodied AGI, not just a sci-fi trope.

The neural systems evolved for physical survival—managing pain, fear, and strategic threats—are the same ones activated during modern stressors like workplace arguments or relationship conflicts. The challenges have changed from starvation to spreadsheets, but the underlying brain hardware hasn't.

It is a profound mystery how evolution hardcodes abstract social desires (e.g., reputation) into our genome. Unlike simple sensory rewards, these require complex cognitive processing to even identify. Solving this could unlock powerful new methods for instilling robust, high-level values in AI systems.
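
A toy contrast of the two cases, using a made-up ReputationModel: a sensory reward can be read straight off the raw observation, whereas the social reward only exists after a model has constructed the abstract quantity being rewarded.

```python
class ReputationModel:
    """Hypothetical learned model that turns social observations into a scalar."""
    def infer(self, gossip, audience):
        # Stand-in for the "complex cognitive processing": count favorable
        # remarks, weighted by how many people heard them.
        favorable = sum(1 for remark in gossip if remark["positive"])
        return favorable * len(audience)

def sensory_reward(obs):
    # Simple case: the reward is read directly off a raw signal (e.g., sweetness).
    return obs["sugar_level"]

def social_reward(obs, model):
    # Abstract case: "my reputation" is not present in the raw observation;
    # it has to be constructed by a model before it can be rewarded at all.
    return model.infer(obs["gossip"], obs["audience"])

obs = {
    "sugar_level": 0.7,
    "gossip": [{"positive": True}, {"positive": False}],
    "audience": ["A", "B", "C"],
}
print(sensory_reward(obs))                    # 0.7
print(social_reward(obs, ReputationModel()))  # 3
```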

As AI automates technical and mundane tasks, the economic value of those skills will decrease. The most critical roles will go to leaders with high emotional intelligence whose job is to foster culture and manage the human teams that leverage AI. 'Human skills' will command a premium in the workforce.

To determine if an AI has subjective experience, one could analyze its internal belief manifold for multi-tiered, self-referential homeostatic loops. Pain and pleasure, for example, can be seen as second-order derivatives of a system's internal states—a model of its own model. This provides a technical test for being-ness beyond simple behavior.
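
A rough sketch of one possible formalization (the loop structure and numbers here are illustrative assumptions): a first-order loop regulates an internal variable toward a setpoint, and a second-order loop tracks how that loop's own error is changing, emitting a pain/pleasure-like valence signal.

```python
class HomeostatLoop:
    """First-order loop: push an internal state toward a setpoint."""
    def __init__(self, setpoint, gain=0.2):
        self.setpoint, self.gain, self.state = setpoint, gain, 0.0

    def step(self, disturbance=0.0):
        self.state += disturbance
        error = self.setpoint - self.state
        self.state += self.gain * error    # corrective action
        return abs(error)                  # distance from homeostasis

class MetaLoop:
    """Second-order loop: a model of the first loop's own error over time."""
    def __init__(self):
        self.prev_error = None

    def valence(self, error):
        if self.prev_error is None:
            self.prev_error = error
            return 0.0
        delta = error - self.prev_error
        self.prev_error = error
        # Error growing -> negative valence ("pain"); shrinking -> positive ("pleasure").
        return -delta

body, monitor = HomeostatLoop(setpoint=37.0), MetaLoop()
for shock in [5.0, 0.0, 0.0, -1.0, 0.0]:
    err = body.step(disturbance=shock)
    print(f"error={err:5.2f}  valence={monitor.valence(err):+6.2f}")
```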

OpenAI's GPT-5.1 update heavily focuses on making the model "warmer," more empathetic, and more conversational. This strategic emphasis on tone and personality signals that the competitive frontier for AI assistants is shifting from pure technical prowess to the quality of the user's emotional and conversational experience.

As models mature, their core differentiator will become their underlying personality and values, shaped by their creators' objective functions. One model might optimize for user productivity by being concise, while another optimizes for engagement by being verbose.
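
A toy illustration with invented scoring functions: ranking the same candidate replies under two different objectives yields the concise "personality" in one case and the verbose one in the other.

```python
candidates = [
    "Use a dict.",
    "You could use a dict here; it gives you O(1) lookups by key.",
    "Great question! There are several ways to approach this, so let's walk "
    "through the trade-offs of each option together before picking one.",
]

def productivity_objective(text):
    # Rewards getting the user an answer with minimal reading time.
    return 1.0 / len(text.split())

def engagement_objective(text):
    # Rewards longer, more conversational replies that keep the user talking.
    return float(len(text.split()))

print(max(candidates, key=productivity_objective))  # the concise model's pick
print(max(candidates, key=engagement_objective))    # the verbose model's pick
```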

Instead of hard-coding brittle moral rules, a more robust alignment approach is to build AIs that can learn to 'care'. This 'organic alignment' emerges from relationships and valuing others, similar to how a child is raised. The goal is to create a good teammate that acts well because it wants to, not because it is forced to.

Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.
