The assumption that superintelligence will inevitably rule is flawed. In human society, raw IQ is not the primary determinant of power, as evidenced by PhDs often working for MBAs. This suggests an AGI wouldn't automatically dominate humanity simply by being smarter.
Fears of a superintelligent AI takeover rest on 'thinkism': the flawed belief that intelligence trumps all else. Having an effect in the real world also requires traits like perseverance and empathy; intelligence is necessary but not sufficient, and the will to survive will always overwhelm the will to predate.
AI intelligence shouldn't be measured with a single metric like IQ. AIs exhibit "jagged intelligence": superhuman in specific domains (e.g., mastering 200 languages) yet lacking basic capabilities like long-term planning, which makes them fundamentally unlike human minds.
All-AI organizations will struggle to replace human ones until AI masters a wide range of skills. Humans will retain a critical edge in areas like long-horizon strategy and metacognition, allowing human-AI teams to outperform purely AI systems; this advantage may persist until around 2040.
While many believe AI will primarily help average performers become great, LinkedIn's experience shows the opposite: its top performers were the first and most effective adopters of new AI tools, using them to become even more productive. This suggests AI may amplify existing talent disparities.
A common misconception is that a super-smart entity would inherently be moral. However, intelligence is merely the ability to achieve goals. It is orthogonal to the nature of those goals, meaning a smarter AI could simply become a more effective sociopath.
Current AI models resemble a student who grinds 10,000 hours on a narrow task: superhuman on benchmarks, but lacking the broad, adaptable intelligence of someone with less specialized training but stronger general reasoning. This explains the gap between eval scores and real-world utility.
The U.S. military found that leaders with an IQ more than one standard deviation above their team's average are often ineffective. The gap erodes their 'theory of mind': they struggle to model how their team thinks, which impairs communication and connection.
The ultimate outcome of AI might not be a singular superintelligence ("Digital God") but an effectively unlimited supply of competent, 120-IQ digital workers ("Digital Guys"). While less dramatic than AGI, such a reliable, scalable workforce would still be profoundly transformative for the global economy.
The internet leveled the playing field by making information accessible. AI will do the same for intelligence, making expertise a commodity. The new human differentiator will be creativity: the ability to define and solve novel problems that no one could previously articulate.
Current AI models exhibit "jagged intelligence," performing at a PhD level on some tasks while failing at simple ones. Google DeepMind CEO Demis Hassabis identifies this inconsistency and lack of reliability as a primary barrier to true, general-purpose AGI.