Psychology and computer science hold contradictory views of the human mind. Psychologists focus on biases and mistakes, deeming us irrational. Computer scientists, in contrast, find human cognition so impressive that they model the entire field of AI on its capabilities, highlighting our remarkable efficiency despite our limitations.
When discussing AI risks like hallucinations, former Chief Justice McCormack argues the proper comparison isn't a perfect system, but the existing human one. Humans get tired, harbor biases, and make mistakes. The question isn't whether AI is flawless, but whether it's an improvement over that error-prone reality.
Today's AI, particularly neural networks, stems from a long tradition in cognitive science where psychologists used mathematical models to understand human thought. Key advances in neural nets were made by researchers trying to replicate how human minds work, not just build intelligent machines.
AI is engineered to eliminate errors, which is precisely its limitation. True human creativity stems from our "bugs"—our quirks, emotions, misinterpretations, and mistakes. This ability to be imperfect is what will continue to separate human ingenuity from artificial intelligence.
Contrary to expectations, those closest to the mental health crisis (physicians, therapists) are the most optimistic about AI's potential. The AI scientists who build the underlying models are often the most scared, revealing a key disconnect between application and theory.
Professor Sandy Pentland warns that AI systems often fail because they incorrectly model humans as logical individuals. In reality, 95% of human behavior is driven by "social foraging"—learning from cultural cues and others' actions. Systems ignoring this human context are inherently brittle.
AI's capabilities are highly uneven. Models are already superhuman in specific domains like speaking 150 languages or possessing encyclopedic knowledge. However, they still fail at tasks typical humans find easy, such as continual learning or nuanced visual reasoning like understanding perspective in a photo.
AI models operate in a 'probability space,' making predictions by interpolating from past data. True human creativity operates in a 'possibility space,' generating novel ideas that have no precedent and cannot be probabilistically calculated. This is why AI can't invent something truly new.
Nobel laureate Danny Kahneman estimated that 95% of human behavior is learned by observing others. AI systems should be designed to complement this "social foraging" nature: AI should act as an advisor providing context, rather than assuming users are purely logical decision-makers.
A study found that evaluators rated AI-generated research ideas more highly than those from grad students. However, when the experiments were actually conducted, the human ideas produced superior results. This highlights a bias: we may favor AI's articulate proposals over more substantively promising human intuition.
Technologists often assume AI's goal is to provide a single, perfect answer. However, human psychology requires comparison to feel confident in a choice, which is why Google's "I'm Feeling Lucky" button is almost never clicked. AI must present curated options, not just one optimized result.