AI models are surprisingly strong at certain tasks but bafflingly weak at others. This 'jagged frontier' of capability means that performance on one task is a poor predictor of performance on another. The only way to navigate it is through direct experimentation within one's own domain of expertise.

Related Insights

AI's capabilities are inconsistent; it excels at some tasks and fails surprisingly at others. This is the 'jagged frontier.' You can only discover where AI is useful and where it's useless by applying it directly to your own work, as you are the only one who can accurately judge its performance in your domain.

AI intelligence shouldn't be measured with a single metric like IQ. AIs exhibit "jagged intelligence," being superhuman in specific domains (e.g., mastering 200 languages) while simultaneously lacking basic capabilities like long-term planning, making them fundamentally unlike human minds.

The future of AI is hard to predict because increasing a model's scale often produces 'emergent properties'—new capabilities that were not designed or anticipated. This means even experts are often surprised by what new, larger models can do, making the development path non-linear.

Users frequently write off an AI's ability to perform a task after a single failure. However, with models improving dramatically every few months, what was impossible yesterday may be trivial today. This "capability blindness" prevents users from unlocking new value.

AI's capabilities are highly uneven. Models are already superhuman in specific domains like speaking 150 languages or possessing encyclopedic knowledge. However, they still fail at tasks typical humans find easy, such as continual learning or nuanced visual reasoning like understanding perspective in a photo.

Current AI models resemble a student who has drilled a narrow task for 10,000 hours: superhuman on the benchmarks, but lacking the broad, adaptable intelligence of someone with less specific training and better general reasoning. This explains the gap between eval scores and real-world utility.

With past shifts like the internet or mobile, we understood the physical constraints (e.g., modem speeds, battery life). With generative AI, we lack a theoretical understanding of its scaling potential, making it impossible to forecast its ultimate capabilities beyond "vibes-based" guesses from experts.

The perceived limits of today's AI are often not inherent to the models themselves but stem from our failure to build the right "agentic scaffold" around them. There is a "model capability overhang": much more potential can be unlocked with better prompting, context engineering, and tool integrations.
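To make the "agentic scaffold" idea concrete, here is a minimal sketch of the pattern the insight points at: a loop that sends the model engineered context, lets it request tools, runs them, and feeds the results back. It assumes a hypothetical call_model stand-in and toy tools, not any specific vendor's API.

```python
# Minimal sketch of an "agentic scaffold": a loop that offers a model
# tools and feeds tool results back as fresh context. `call_model` is a
# hypothetical stand-in for whatever model API is actually used.
import json

TOOLS = {
    # Tool integrations: plain functions the model may request by name.
    "word_count": lambda text: str(len(text.split())),
    "reverse": lambda text: text[::-1],
}

def call_model(context: str) -> str:
    """Placeholder model call. A real scaffold would send `context`
    (task prompt plus prior tool results) to an LLM and get back either
    a tool request or a final answer. This stub fakes that behavior."""
    if "RESULT:" in context:
        return json.dumps({"answer": "done"})
    return json.dumps({"tool": "word_count", "input": "the jagged frontier"})

def run_agent(task: str, max_steps: int = 5) -> str:
    context = f"TASK: {task}"              # context engineering starts here
    for _ in range(max_steps):
        reply = json.loads(call_model(context))
        if "answer" in reply:              # model is finished; return its answer
            return reply["answer"]
        tool = TOOLS[reply["tool"]]        # model asked for a tool by name
        result = tool(reply["input"])
        context += f"\nRESULT: {result}"   # feed the result back to the model
    return "gave up"

print(run_agent("count the words in the phrase"))
```

The point of the sketch is that none of the extra capability comes from the model weights; it comes from the loop, the tools, and how results are folded back into context.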

The true measure of a new AI model's power isn't just improved benchmarks, but a qualitative shift in fluency that makes using previous versions feel "painful." This experiential gap, where the old model suddenly feels worse at everything, is the real indicator of a breakthrough.

Current AI models exhibit "jagged intelligence," performing at a PhD level on some tasks but failing at simple ones. Google DeepMind's CEO identifies this inconsistency and lack of reliability as a primary barrier to achieving true AGI.