The idea of a single 'general intelligence' or IQ is misleading because key cognitive abilities trade off against each other. For instance, the capacity for broad exploration (finding new solutions) is in tension with the capacity for exploitation (efficiently executing known tasks), and exploitation is what schools and IQ tests primarily measure.
Intelligence is often used as a tool to generate more sophisticated arguments for what one already believes. A higher IQ correlates with the ability to find reasons supporting one's existing stance, not with an enhanced ability to genuinely consider opposing viewpoints.
When AI models achieve superhuman performance on specific benchmarks like coding challenges, that performance doesn't translate into solving real-world problems. This is because we implicitly optimize for the benchmark itself, creating "peaky" performance rather than broad, generalizable intelligence.
AI intelligence shouldn't be measured with a single metric like IQ. AIs exhibit "jagged intelligence," being superhuman in specific domains (e.g., mastering 200 languages) while simultaneously lacking basic capabilities like long-term planning, making them fundamentally unlike human minds.
The assumption that superintelligence will inevitably rule is flawed. In human society, raw IQ is not the primary determinant of power, as evidenced by PhDs often working for MBAs. This suggests an AGI wouldn't automatically dominate humanity simply by being smarter.
Current AI models resemble a student who grinds 10,000 hours on a narrow task. They achieve superhuman performance on benchmarks but lack the broad, adaptable intelligence of someone with less specific training but better general reasoning. This explains the gap between eval scores and real-world utility.
Intelligence is a rate, not a static quality. You can outperform someone who learns in fewer repetitions by simply running your own repetitions, even if you need more of them, on a faster timeline. Compressing the time between attempts is a controllable way to become 'smarter' in practical terms.
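The arithmetic behind this claim can be made concrete. A minimal sketch (function name and all numbers are purely illustrative assumptions, not from the source):

```python
# Illustrative model: time to mastery = repetitions needed / repetition rate.
# A "faster learner" needs fewer repetitions; a "faster executor" compresses
# the time between attempts. Either lever shortens the total timeline.

def weeks_to_mastery(reps_needed: int, reps_per_week: float) -> float:
    """Weeks to mastery if a skill requires a fixed number of attempts."""
    return reps_needed / reps_per_week

# Learner A needs only 10 repetitions but attempts once a week.
learner_a = weeks_to_mastery(reps_needed=10, reps_per_week=1)   # 10.0 weeks

# Learner B needs twice the repetitions but attempts four times a week,
# and reaches mastery first despite being the "slower" learner.
learner_b = weeks_to_mastery(reps_needed=20, reps_per_week=4)   # 5.0 weeks

print(learner_a, learner_b)
```

The point of the toy model: the repetition count is largely fixed by aptitude, but the repetition rate is under your control.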
Demis Hassabis explains that current AI models have 'jagged intelligence'—performing at a PhD level on some tasks but failing at high-school level logic on others. He identifies this lack of consistency as a primary obstacle to achieving true Artificial General Intelligence (AGI).
Child prodigies excel at mastering existing knowledge, like playing a perfect Mozart sonata. To succeed as adults, they must transition to creation—writing their own sonata. This fundamental shift from rote skill to original thinking is where many prodigies falter because the standards for success change completely.
The disconnect between AI's superhuman benchmark scores and its limited economic impact exists because many benchmarks test esoteric problems. The ARC-AGI prize instead focuses on tasks that are easy for humans, testing an AI's ability to learn new concepts from few examples—a better proxy for general, applicable intelligence.
Praising kids for being "smart" reinforces the idea that intelligence is a fixed trait. When these students encounter a difficult problem, they conclude they lack the "magic ingredient" and give up, rather than persisting through the challenge.