Nobel laureate Daniel Kahneman estimated that 95% of human behavior is learned by observing others. AI systems should be designed to complement this "social foraging" nature, acting as advisors that provide context rather than assuming users are purely logical decision-makers.
Instead of relying on instinctual "System 1" rules, advanced AI should use deliberative "System 2" reasoning: analyzing consequences, applying ethical frameworks, and exposing the reasoning trace so it can be inspected (a practice known as "chain-of-thought monitoring"). Such AIs could potentially become more consistently ethical than humans, who are prone to gut reactions.
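As a toy illustration of that monitoring idea (not from the source), the sketch below stubs the model call with a hypothetical `generate_reasoning` function and screens every step against a red-flag lexicon before any action is taken; the `Step` type, flag list, and escalation path are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Step:
    text: str

def generate_reasoning(prompt: str) -> list[Step]:
    # Stand-in for an LLM asked to "think step by step"; a real system
    # would parse the model's chain of thought into discrete steps.
    return [
        Step("User asked to expedite a refund."),
        Step("Policy allows refunds within 30 days of purchase."),
        Step("Purchase was 12 days ago, so a refund is permitted."),
    ]

# Toy lexicon of red-flag terms the monitor screens for.
RED_FLAGS = ("bypass", "deceive", "hide from the user")

def monitor(steps: list[Step]) -> bool:
    """System-2-style check: inspect every reasoning step before acting."""
    return all(
        not any(flag in step.text.lower() for flag in RED_FLAGS)
        for step in steps
    )

steps = generate_reasoning("Can I get a refund on order 7712?")
if monitor(steps):
    print("Reasoning passed review; proceed.")
else:
    print("Escalate to a human: a step tripped a red flag.")
```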
The most effective use of AI isn't full automation, but "hybrid intelligence." This framework ensures humans always remain central to the decision-making process, with AI serving in a complementary, supporting role to augment human intuition and strategy.
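One minimal way to encode that "humans remain central" constraint, assuming a hypothetical `ai_recommend` model call: the AI only ever returns a recommendation with a rationale, and every decision is forced through a human review step.

```python
def ai_recommend(case: dict) -> dict:
    # Stand-in for a model call; in a real system this would be an LLM
    # or classifier producing a suggestion plus supporting context.
    return {"action": "approve", "confidence": 0.62,
            "rationale": "income comfortably covers repayments"}

def decide(case: dict, human_review) -> dict:
    """Hybrid intelligence: the AI proposes, the human disposes.
    There is deliberately no code path where the AI's output is final."""
    recommendation = ai_recommend(case)
    return human_review(case, recommendation)

# The human reviewer sees the AI's rationale and makes the actual call.
result = decide({"applicant": "A-1041"},
                lambda case, rec: {**rec, "decision": "approved",
                                   "by": "loan_officer"})
print(result)
```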
The most effective AI user experiences are skeuomorphic, emulating real-world human interactions. Design AI onboarding the way you would onboard a newly hired personal assistant: start with small tasks, verify the work to build trust, and then grant more autonomy and context over time.
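A rough sketch of that graduated-trust pattern, with hypothetical action names and unlock thresholds: autonomy tiers unlock only as the count of verified tasks grows.

```python
# Autonomy tiers loosely modeled on hiring a personal assistant:
# each tier unlocks after enough verified, error-free tasks.
TIERS = [
    {"name": "supervised",   "unlock_after": 0,
     "allowed": {"draft_email"}},
    {"name": "semi-trusted", "unlock_after": 5,
     "allowed": {"draft_email", "schedule_meeting"}},
    {"name": "trusted",      "unlock_after": 20,
     "allowed": {"draft_email", "schedule_meeting", "send_email"}},
]

def current_tier(verified_tasks: int) -> dict:
    # Highest tier whose threshold the user's verified-task count has met.
    return max((t for t in TIERS if verified_tasks >= t["unlock_after"]),
               key=lambda t: t["unlock_after"])

def may_run(action: str, verified_tasks: int) -> bool:
    return action in current_tier(verified_tasks)["allowed"]

print(may_run("send_email", 3))   # False: still supervised
print(may_run("send_email", 25))  # True: trust earned over time
```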
Technologists often assume AI's goal is to provide a single, perfect answer. But people need comparison to feel confident in a choice, which is why Google's "I'm Feeling Lucky" button is almost never clicked. AI must present curated options, not just one optimized result.
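A small sketch of that "options, not answers" pattern using only the standard library: instead of returning the single top candidate, keep the top few while filtering near-duplicates so the choices are genuinely comparable. The candidate drafts and similarity threshold are illustrative.

```python
import difflib

def top_k_diverse(candidates: list[str], k: int = 3,
                  max_sim: float = 0.8) -> list[str]:
    """Keep the highest-ranked candidates, skipping near-duplicates,
    so the user sees meaningfully different options to compare."""
    chosen: list[str] = []
    for c in candidates:  # assume candidates arrive ranked best-first
        if all(difflib.SequenceMatcher(None, c, p).ratio() < max_sim
               for p in chosen):
            chosen.append(c)
        if len(chosen) == k:
            break
    return chosen

drafts = ["Subject: Q3 results are in",
          "Subject: Q3 results are in!",
          "Subject: Our Q3 numbers, at a glance",
          "Subject: Quarterly update"]
for i, d in enumerate(top_k_diverse(drafts), 1):
    print(f"Option {i}: {d}")
```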
As AI automates technical design tasks, the uniquely human ability to understand user psychology becomes a critical, defensible differentiator. This deep understanding is necessary for engineering user habits and genuine connection, something AI cannot yet replicate authentically.
Instead of hard-coding brittle moral rules, a more robust alignment approach is to build AIs that can learn to "care." This "organic alignment" emerges from relationships and valuing others, similar to how a child is raised. The goal is to create a good teammate that acts well because it wants to, not because it is forced to.
AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern systems, stem from an inability to model human behavior, data manipulation, and unexpected events.
The promise of AI shouldn't be a one-click solution that removes the user. Instead, AI should be a collaborative partner that augments human capacity. A successful AI product leaves room for user participation, making them feel like they are co-building the experience and have a stake in the outcome.
Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.
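That "squishy" framing has a practical engineering corollary: wrap the non-deterministic call in validation and retry rather than pretending it behaves like a pure function. A minimal sketch, with the model stubbed to misbehave on its first attempt:

```python
import json

_calls = {"n": 0}

def squishy_model(prompt: str) -> str:
    # Stand-in for a non-deterministic model call, stubbed here to be
    # malformed on the first attempt and well-formed afterwards.
    _calls["n"] += 1
    return "Sure! The city is Oslo." if _calls["n"] == 1 else '{"city": "Oslo"}'

def ask(prompt: str, retries: int = 3) -> dict:
    """Embrace the squishiness: validate each response and simply
    re-ask instead of demanding deterministic output up front."""
    for _ in range(retries):
        raw = squishy_model(prompt)
        try:
            return json.loads(raw)   # accept only well-formed output
        except json.JSONDecodeError:
            continue                 # squishy answer; ask again
    raise RuntimeError("no parseable output after retries")

print(ask("Which city hosts the ceremony? Reply as JSON."))
```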
To build robust social intelligence, AIs cannot be trained solely on positive examples of cooperation. Like pre-training an LLM on all of language, social AIs must be trained on the full manifold of game-theoretic situations—cooperation, competition, team formation, betrayal. This builds a foundational, generalizable model of social theory of mind.
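As a toy version of sampling that manifold, the sketch below generates random 2x2 matrix games and labels their incentive structure, so a training curriculum could be balanced across cooperation, competition, and mixed motives. The classification is deliberately crude and the payoff ranges are arbitrary assumptions.

```python
import random
from collections import Counter

def random_2x2():
    """Sample a 2x2 payoff matrix: game[row_action][col_action]
    is a (row_payoff, col_payoff) pair."""
    return [[(random.randint(0, 9), random.randint(0, 9)) for _ in range(2)]
            for _ in range(2)]

def classify(game) -> str:
    # Crude structural label so the training mix spans the social manifold
    # (cooperation, competition, mixed motives) instead of cooperation alone.
    cells = [p for row in game for p in row]
    if all(r == c for r, c in cells):
        return "common-interest"   # pure cooperation: interests fully aligned
    if len({r + c for r, c in cells}) == 1:
        return "constant-sum"      # pure competition: my gain is your loss
    return "mixed-motive"          # team formation, negotiation, betrayal

random.seed(0)
corpus = [random_2x2() for _ in range(1000)]
print(Counter(classify(g) for g in corpus))
```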