The fear of 'superhuman' AI is based on a flawed premise. Our definition of measurable intelligence—tallying numbers, memorizing lists—was created for the industrial workforce. AI is simply automating these now-outdated tasks, suggesting we need to recalibrate our measurement of human intelligence itself.
Fears of a superintelligent AI takeover rest on 'thinkism', the flawed belief that intelligence trumps all else. Having an effect in the real world requires other traits, such as perseverance and empathy. Intelligence is necessary but not sufficient, and the will to survive will always overwhelm the will to predate.
The current panic over AI stems from a limited view of human capability, a byproduct of an Industrial Age that prized machine-like efficiency, and it underestimates our own adaptability. As AI automates those machine-like tasks, we are being forced to rediscover the core human skills, such as imagination, creativity, and collaboration, that have driven progress for millennia.
AI intelligence shouldn't be measured with a single metric like IQ. AIs exhibit "jagged intelligence," being superhuman in specific domains (e.g., mastering 200 languages) while simultaneously lacking basic capabilities like long-term planning, making them fundamentally unlike human minds.
Framing AGI as reaching human-level intelligence is a limiting concept. Unconstrained by biology, AI will rapidly surpass the best human experts in every field. The focus should be on harnessing this superhuman capability, not just achieving parity.
AI's capabilities are highly uneven. Models are already superhuman in specific domains like speaking 150 languages or possessing encyclopedic knowledge. However, they still fail at tasks typical humans find easy, such as continual learning or nuanced visual reasoning like understanding perspective in a photo.
Current AI models resemble a student who grinds 10,000 hours on a narrow task. They achieve superhuman performance on benchmarks but lack the broad, adaptable intelligence of someone with less specific training but better general reasoning. This explains the gap between eval scores and real-world utility.
Previous technologies replaced physical or rote mental labor. Viewing AI the same way is a category error, because it is the first tool that can both think and execute. It replaces the pattern-recognition and reasoning layer *above* the task, representing a zero-to-one moment in technological change.
Instead of fearing AI's superior cognitive intelligence (IQ), humans should focus on cultivating wisdom, intuition, and embodied intelligence. Dr. el Kaliouby suggests these are uniquely human advantages that technology cannot replicate, allowing us to leverage AI without being defined or replaced by it.
Defining AGI as 'human-equivalent' is too limiting because human intelligence is capped by biology (e.g., an IQ of ~160). The truly transformative moment is when AI systems surpass these biological limits, providing access to problem-solving capabilities that are fundamentally greater than any human's.
We perceive complex math as a pinnacle of intelligence, but for AI, it may be an easier problem than tasks we find trivial. Like chess, which computers mastered decades ago, solving major math problems might not signify human-level reasoning but rather that the domain is surprisingly susceptible to computational approaches.