Humans evolved to cooperate via reciprocity: sharing resources with the expectation of future return. To prevent exploitation, we also evolved a strong instinct to identify and punish "freeloaders." This creates a fundamental tension with social welfare systems, which can be perceived as enabling non-contribution.
Our brains evolved a highly sensitive system for detecting human-like minds, crucial for social cooperation and survival. This system often produces "false positives," causing us to humanize pets or robots. That isn't a bug but a feature: a false alarm is cheap, while missing a real mind could be fatal, a trade-off vital to our species' success.
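A quick error-management calculation makes the trade-off concrete; every number below (agent frequency, error rates, the 100:1 cost asymmetry, the function name) is an illustrative assumption, not a measured value:

```python
# Error-management arithmetic behind over-sensitive agency detection.
# All rates and costs here are illustrative assumptions.

def expected_cost(p_agent, detect_rate, false_alarm_rate,
                  miss_cost, false_alarm_cost):
    """Expected cost per encounter for a detector with these error rates."""
    misses = p_agent * (1 - detect_rate) * miss_cost
    false_alarms = (1 - p_agent) * false_alarm_rate * false_alarm_cost
    return misses + false_alarms

# Assume real agents are rare, misses are catastrophic, false alarms cheap.
P_AGENT, MISS_COST, FA_COST = 0.05, 100.0, 1.0

trigger_happy = expected_cost(P_AGENT, 0.99, 0.30, MISS_COST, FA_COST)
skeptical = expected_cost(P_AGENT, 0.70, 0.01, MISS_COST, FA_COST)

print(f"trigger-happy detector: {trigger_happy:.2f}")  # ~0.34 per encounter
print(f"skeptical detector:     {skeptical:.2f}")      # ~1.51 per encounter
```

Under any strongly asymmetric cost ratio, the detector that over-attributes minds outperforms the skeptical one, which is why the bias persists.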
It is a profound mystery how evolution hardcodes abstract social desires (e.g., for reputation) into our genome. Unlike simple sensory rewards, these require complex cognitive processing even to identify. Solving this could unlock powerful new methods for instilling robust, high-level values in AI systems.
When lending money to friends, Emma Hernan operates under the assumption she may not be repaid. By mentally reframing the loan as a potential gift, she avoids resentment and preserves the friendship, regardless of the financial outcome. This protects her own well-being and relationships from financial strain.
Our anger towards hypocrisy stems from a perceived "false signal." A hypocrite gains status (respect, trust) without paying the cost of their claimed principles. This triggers our deep sense of injustice about an unfair exchange, making the violation about social standing, not merely morality.
Trust isn't built on words. It's revealed through "honest signals"—non-verbal cues and, most importantly, the pattern of reciprocal interaction. Observing how people exchange help and information can predict trust and friendship with high accuracy, as it demonstrates a relationship of mutual give-and-take.
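As a toy illustration of reading trust from interaction patterns rather than words, the sketch below scores pairwise give-and-take from a log of who helped whom; the log format, names, and balance metric are invented for the example:

```python
# Toy sketch: scoring reciprocity from an interaction log. The log format
# and the min/max balance metric are illustrative assumptions.
from collections import Counter
from itertools import combinations

# Each (giver, receiver) entry is one instance of help or useful information.
log = [("ana", "ben"), ("ben", "ana"), ("ana", "ben"), ("ben", "ana"),
       ("ana", "cam"), ("ana", "cam"), ("ana", "cam")]

counts = Counter(log)

def reciprocity(a, b):
    """1.0 = perfectly balanced give-and-take; 0.0 = one-sided or absent."""
    ab, ba = counts[(a, b)], counts[(b, a)]
    return min(ab, ba) / max(ab, ba) if max(ab, ba) else 0.0

people = sorted({p for pair in log for p in pair})
for a, b in combinations(people, 2):
    score = reciprocity(a, b)
    verdict = "mutual give-and-take" if score > 0.5 else "one-sided"
    print(f"{a}-{b}: {score:.2f} ({verdict})")
```

Here ana and ben score 1.0 (balanced exchange, a trust signal), while ana's three unreciprocated favors to cam score 0.0.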
Human societies are not innately egalitarian; they are innately hierarchical. Egalitarianism emerged as a social technology in hunter-gatherer groups, using tools like gossip and ostracism to collectively suppress dominant 'alpha' individuals who threatened group cohesion.
People often mistake cynicism for intelligence. However, research shows it's a protective measure used by those with poorer reasoning skills to avoid being taken advantage of. This self-protection leads them to miss out on positive human interactions by assuming the worst in others.
Generosity towards employees and customers is more than just good ethics; it's a strategic move in the iterated game of business. It signals your intent to cooperate, which encourages reciprocal cooperation from others. This builds trust and leads to superior long-term outcomes versus a defect-first approach.
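The underlying logic is the textbook iterated prisoner's dilemma. Below is a minimal sketch using the standard payoff matrix; the strategies are generic stand-ins for "cooperate-first" and "defect-first" play, not a model of any particular business:

```python
# Minimal iterated prisoner's dilemma with standard payoffs.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(their_history):
    """Cooperate first, then mirror the partner's last move."""
    return their_history[-1] if their_history else "C"

def always_defect(their_history):
    """Defect-first: never signals cooperative intent."""
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): trust compounds
print(play(always_defect, tit_for_tat))    # (104, 99): one-time edge, then stuck
print(play(always_defect, always_defect))  # (100, 100): mutual distrust, low payoff
```

Defecting first wins a single round but forfeits the compounding surplus of sustained cooperation, which is the game-theoretic case for leading with generosity.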
The biggest unlock for a successful long-term partnership is to stop keeping score. Instead of tracking contributions and demanding reciprocity, define your own standard for being a good partner and live up to it. This approach avoids the natural bias of overvaluing your own contributions, preventing transactional resentment.
To build robust social intelligence, AIs cannot be trained solely on positive examples of cooperation. Like pre-training an LLM on all of language, social AIs must be trained on the full manifold of game-theoretic situations—cooperation, competition, team formation, betrayal. This builds a foundational, generalizable model of social theory of mind.
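As a sketch of what sampling that manifold could look like, here is a toy curriculum generator; the 2x2 game parameterization and the three labels are illustrative assumptions, not a published training recipe:

```python
# Sample random symmetric 2x2 games and bucket them by incentive structure,
# so a curriculum spans cooperation, dilemmas, and competition rather than
# cooperation alone. Parameterization and labels are illustrative.
import random
from collections import Counter

def sample_game():
    """Row-player payoffs: R (mutual cooperation), S (sucker's payoff),
    T (temptation to defect), P (mutual defection)."""
    return tuple(random.randint(0, 9) for _ in range(4))

def label(game):
    R, S, T, P = game
    if T > R > P > S:
        return "social dilemma"    # prisoner's-dilemma ordering: betrayal tempts
    if R >= T and R >= P:
        return "harmony"           # mutual cooperation is simply best
    return "mixed / competitive"   # everything else on the manifold

random.seed(0)
curriculum = [sample_game() for _ in range(10_000)]
print(Counter(label(g) for g in curriculum))
```

An agent trained only on the "harmony" bucket never learns when cooperation is exploitable; sampling the whole distribution is what forces a general theory of mind to emerge.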