Newer models like OpenAI's GPT-5.2 can fix bugs that were previously impossible for AI by "thinking" for extended periods, up to 37 minutes in one example. This reframes latency not as a flaw, but as a necessary trade-off for tackling deep, complex problems.

Related Insights

"Amortized inference" bakes slow, deliberative reasoning into a fast, single-pass model. While the brain uses a mix, digital minds have a strong incentive to amortize more capabilities. This is because once a capability is baked in, the resulting model can be copied infinitely, unlike a biological brain.

As frontier AI models reach a plateau of perceived intelligence, the key differentiator is shifting to user experience. Low-latency, reliable performance is becoming more critical than marginal gains on benchmarks, making speed the next major competitive vector for AI products like ChatGPT.

OpenAI found that significant upgrades to model intelligence, particularly for complex reasoning, did not improve user engagement. Users overwhelmingly prefer faster, simpler answers over more accurate but time-consuming responses, a disconnect that benefited competitors like Google.

Models that generate "chain-of-thought" text before providing an answer are powerful but slow and computationally expensive. For tuned business workflows, the latency from waiting for these extra reasoning tokens is a major, often overlooked, drawback that impacts user experience and increases costs.
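A back-of-the-envelope sketch of that drawback, using assumed throughput and pricing numbers (not figures for any particular model): hidden reasoning tokens must be decoded before the visible answer, so they add both wall-clock latency and per-call cost.

```python
# Back-of-the-envelope estimate of the latency and cost added by hidden
# reasoning tokens. All numbers below are illustrative assumptions, not
# published figures for any particular model or provider.

ANSWER_TOKENS = 150          # visible tokens in the final reply
REASONING_TOKENS = 2_000     # hidden chain-of-thought tokens generated first
DECODE_SPEED_TPS = 80        # assumed decode throughput, tokens per second
PRICE_PER_1K_OUTPUT = 0.01   # assumed $ per 1,000 output tokens

def latency_seconds(tokens: int) -> float:
    return tokens / DECODE_SPEED_TPS

def cost_dollars(tokens: int) -> float:
    return tokens / 1_000 * PRICE_PER_1K_OUTPUT

base = latency_seconds(ANSWER_TOKENS)
with_reasoning = latency_seconds(ANSWER_TOKENS + REASONING_TOKENS)
print(f"latency per call: {base:.1f}s -> {with_reasoning:.1f}s")
print(f"cost per call: ${cost_dollars(ANSWER_TOKENS):.4f} -> "
      f"${cost_dollars(ANSWER_TOKENS + REASONING_TOKENS):.4f}")
```

Under these assumptions, a sub-2-second reply stretches to roughly half a minute, which is the kind of latency penalty the insight calls overlooked.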

Progress in complex, long-running agentic tasks is better measured in tokens consumed than in raw wall-clock time. Improving token efficiency, as seen in the step from GPT-5 to GPT-5.1, directly enables more tool calls and actions within a feasible operational budget, unlocking greater capabilities.
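
A small sketch of the arithmetic behind that claim, with assumed per-call figures: for a fixed token budget, the number of tool calls an agent can afford scales directly with token efficiency.

```python
# Illustrative arithmetic for measuring agentic progress in tokens rather
# than wall-clock time. The per-call figures are assumptions for this sketch.

TOKEN_BUDGET = 400_000            # total tokens the operator is willing to spend
TOKENS_PER_TOOL_CALL_OLD = 8_000  # assumed average cost per action, older model
TOKENS_PER_TOOL_CALL_NEW = 5_000  # assumed cost after a token-efficiency gain

def actions_within_budget(budget: int, tokens_per_call: int) -> int:
    """How many tool calls fit inside a fixed token budget."""
    return budget // tokens_per_call

print("actions affordable (old):", actions_within_budget(TOKEN_BUDGET, TOKENS_PER_TOOL_CALL_OLD))
print("actions affordable (new):", actions_within_budget(TOKEN_BUDGET, TOKENS_PER_TOOL_CALL_NEW))
```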

Unlike previous models that frequently failed, Opus 4.5 allows for a fluid, uninterrupted coding process. The AI can build complex applications from a simple prompt and autonomously fix its own errors, representing a significant leap in capability and reliability for developers.

AI struggles with long-horizon tasks not just due to technical limits, but because we lack good ways to measure performance. Once effective evaluations (evals) for these capabilities exist, researchers can rapidly optimize models against them, accelerating progress significantly.

Companies like OpenAI and Anthropic are intentionally shrinking their flagship models (e.g., GPT-4o is smaller than GPT-4). The biggest constraint isn't creating more powerful models, but serving them at a speed users will tolerate. Slow models kill adoption, regardless of their intelligence.

AI progress was expected to stall in 2024-2025 as pre-training scaling ran up against hardware limits. However, breakthroughs in post-training techniques like reasoning and test-time compute provided a new vector for improvement, bridging the gap until next-generation chips like NVIDIA's Blackwell arrived.

Recent AI breakthroughs aren't just from better models, but from clever 'architecture' or 'scaffolding' around them. For example, Claude Code 'cheats' its context window limit by taking notes, clearing its memory, and then reading the notes to resume work. This architectural innovation drives performance.
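
A minimal sketch of that note-and-compact pattern: when the transcript nears the context limit, the scaffold persists a summary note, clears the conversation, and seeds the next turn with the note. The names and thresholds here are hypothetical, and this is an illustration of the pattern rather than Claude Code's actual implementation.

```python
# Minimal sketch of note-taking scaffolding around a context-limited model:
# when the conversation nears the limit, write a summary "note", clear the
# transcript, and resume from the note. Illustrative only; not Claude Code's
# real implementation.

CONTEXT_LIMIT_TOKENS = 200_000

def count_tokens(messages: list[str]) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return sum(len(m) for m in messages) // 4

def summarize(messages: list[str]) -> str:
    # Stand-in for asking the model to write progress notes about its own work.
    return f"NOTES: {len(messages)} earlier messages; key decisions and open TODOs go here."

def maybe_compact(messages: list[str]) -> list[str]:
    """If the transcript is close to the limit, replace it with a note."""
    if count_tokens(messages) > int(CONTEXT_LIMIT_TOKENS * 0.9):
        note = summarize(messages)
        return [note]  # memory is 'cleared'; only the note survives
    return messages

transcript = ["(long tool output...)" * 5_000 for _ in range(50)]
transcript = maybe_compact(transcript)
print(transcript[0][:60])
```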