Today's AI systems exhibit "jagged intelligence"—strong performance on many tasks but inconsistent reliability on others. This prevents full job replacement because being 95% effective is insufficient when the remaining 5% involves crucial edge cases, judgment, and discretion that still require human oversight.
AI models will quickly automate the majority of expert work, but they will struggle with the final, most complex 25%. For a long time, human expertise will be essential for this 'last mile,' making it the ultimate bottleneck and source of economic value.
AI's capabilities are inconsistent; it excels at some tasks and fails surprisingly at others. This is the 'jagged frontier.' You can only discover where AI is useful and where it's useless by applying it directly to your own work, as you are the only one who can accurately judge its performance in your domain.
AI's capabilities are highly uneven. Models are already superhuman in specific domains, such as speaking 150 languages or possessing encyclopedic knowledge. However, they still fail at tasks that typical humans find easy, such as continual learning or nuanced visual reasoning like understanding perspective in a photo.
Despite marketing hype, current AI agents are not fully autonomous and cannot replace an entire human job. They excel at executing a defined sequence of tasks toward a specific goal, such as research, but lack the complex reasoning required for broader job functions. True job replacement is likely still years away.
Despite hype about full automation, AI in real-world business workflows still succeeds only about 80% of the time. The remaining 20% of cases require human intervention, which positions AI today as a tool for human augmentation rather than complete job replacement.
Despite hype in areas like self-driving cars and medical diagnosis, AI has not replaced expert human judgment. Its most successful application is as a powerful assistant that augments human experts, who still make the final, critical decisions. This is a key distinction for scoping AI products.
Unlike deterministic SaaS software that works consistently, AI is probabilistic and doesn't work perfectly out of the box. Achieving 'human-grade' performance (e.g., 99.9% reliability) requires continuous tuning and expert guidance, countering the hype that AI is an immediate, hands-off solution.
One practical definition of AGI is a system that can function as a 'drop-in remote worker,' fully substituting for a human on long-horizon tasks. Today's AI, despite genius-level ability in narrow domains, fails this test because it cannot reliably string together multiple tasks over extended periods, which highlights the 'jagged frontier' of its capabilities.
Frontier AI models exhibit 'jagged' capabilities, excelling at highly complex tasks like theoretical physics while failing at basic ones like counting objects. This inconsistent, non-human-like performance profile is a primary reason for polarized public and expert opinions on AI's actual utility.
Current AI models exhibit "jagged intelligence," performing at a PhD level on some tasks while failing at simple ones. Google DeepMind CEO Demis Hassabis identifies this inconsistency and lack of reliability as a primary barrier to achieving true, general-purpose AGI.