Mala Gaonkar's philanthropic work highlights a key limitation of AI: it excels at predicting "what" will happen but not "why." By integrating behavioral data, her organization aims to uncover the motivations behind human choices, enabling more effective interventions in areas like public health.

Related Insights

AI excels where success is quantifiable (e.g., code generation). Its greatest challenge lies in subjective domains like mental health or education. Progress there requires a messy societal conversation to define "success," not just a developer-built technical leaderboard.

The Browser Company believes the biggest AI opportunity isn't just automating tasks but leveraging the "emotional intelligence" of models. Users are already using AI for advice and subjective reasoning. Future value will come from products that help with qualitative, nuanced decisions, moving up Maslow's hierarchy of needs.

Emerging AI jobs, like agent trainers and operators, demand uniquely human capabilities such as a grasp of psychology and ethics. The need for a "bedside manner" in handling AI-related customer issues highlights that the future of AI work isn't purely technical.

AI models can provide answers, but they lack innate curiosity. The unique and enduring value of humans, especially in fields like journalism, is the ability to ask insightful questions. This positions human curiosity as the essential driver of AI rather than a skill AI will replace.

The next wave of consumer AI will shift from individual productivity to fostering connection. AI agents will facilitate interactions between people, helping them understand one another better and addressing the core human need to "be seen," creating new social dynamics in the process.

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which inspects a model's internal weights and activations to trace how it reaches its outputs, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
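As a minimal sketch of the contrast with black-box testing (the model, feature names, and numbers below are all hypothetical): for a linear model, reading the weights yields an exact, auditable decomposition of every decision. Mechanistic interpretability is the far harder attempt to extract the same kind of account from neural networks, but the ambition is the same.

```python
# Toy contrast between black-box testing and weight-level inspection.
# Hypothetical linear "credit model": for a linear model, the decomposition
# of a decision into per-feature contributions is exact and auditable.
import numpy as np

feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical features
weights = np.array([0.8, -1.5, 0.3])                        # learned coefficients
bias = -0.2

def explain_decision(x: np.ndarray) -> None:
    """Decompose the decision logit into additive per-feature contributions."""
    contributions = weights * x          # exact attribution for a linear model
    logit = contributions.sum() + bias
    for name, c in zip(feature_names, contributions):
        print(f"{name:>15}: {c:+.2f}")
    print(f"{'bias':>15}: {bias:+.2f}")
    print(f"{'decision':>15}: {'approve' if logit > 0 else 'deny'} (logit {logit:+.2f})")

explain_decision(np.array([1.2, 0.9, 0.5]))
```

A regulator can audit this model by reading three numbers; no equivalent readout yet exists for a large neural network, which is exactly the gap mechanistic interpretability aims to close.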

The promise of "techno-solutionism" falls flat when AI is applied to complex social issues. An AI project in Argentina meant to predict teen pregnancy merely confirmed that poverty was the root cause: a conclusion that required no invasive data collection and pointed to a problem technology alone could not fix, exposing the limits of algorithmic intervention.

For AI systems to be adopted in scientific labs, they must be interpretable. Researchers need to understand the "why" behind an AI's experimental plan to validate and trust the process, making interpretability a more critical feature than raw predictive power.

Nobel laureate Daniel Kahneman estimated that 95% of human behavior is learned by observing others. AI systems should be designed to complement this "social foraging" nature, acting as advisors that provide context rather than assuming users are purely logical decision-makers.

AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern algorithmic systems, stem from the same causes: models that cannot capture human behavior, data that gets manipulated, and events no one anticipated.