While AI progress is marketed in revolutionary "step-changes" (e.g., GPT-3 to GPT-4), the underlying reality is more like compounding interest: a continuous stream of small, incremental improvements accumulates, and their combined effect is what creates the feeling of an exponential leap in capability over time.
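To make the compounding-interest framing concrete, here is a minimal Python sketch; the 1%-per-week improvement rate is an assumed figure for illustration, not one cited in the episode.

```python
# Illustrative sketch of compounding capability gains.
# The 1% weekly improvement rate is an assumed number, not a measured one.
weekly_gain = 0.01   # hypothetical 1% improvement per week
weeks = 52

capability = 1.0
for _ in range(weeks):
    capability *= 1 + weekly_gain

print(f"Relative capability after one year: {capability:.2f}x")  # ~1.68x
```

Individually, each 1% step is barely noticeable; stacked over a year, the cumulative effect looks like a leap.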

Related Insights

AI adoption isn't linear. A small, 1% improvement in model capability can be the critical step that clears a usability hurdle, transforming a "toy" into a production-ready tool. This creates sudden, discontinuous leaps in market adoption that are hard to predict from capability trend lines alone.
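One way to picture this threshold effect is a crude step model of adoption; the reliability bar and adoption shares below are hypothetical numbers chosen only to illustrate the mechanism.

```python
# Hypothetical threshold model: users adopt a tool only once its task
# success rate clears a reliability bar. All numbers are illustrative.
RELIABILITY_BAR = 0.95   # assumed minimum success rate for production use

def adoption_share(success_rate: float) -> float:
    """Step model: negligible adoption below the bar, broad adoption above it."""
    return 0.02 if success_rate < RELIABILITY_BAR else 0.80

print(adoption_share(0.945))  # 0.02 -> still a "toy"
print(adoption_share(0.955))  # 0.80 -> a one-point capability gain flips adoption
```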

The dramatic improvements from GPT-2 to GPT-4 were driven by a simple law: bigger models and more training data yielded better results. This trend has stopped. Recent attempts to scale even larger models have produced only marginal gains, forcing the industry into more complex, narrow optimizations instead of giant leaps.

The surprisingly smooth, exponential trend in AI capabilities is viewed as more than just a technical machine learning phenomenon. It reflects broader economic dynamics, such as competition between firms, resource allocation, and investment cycles. This economic underpinning suggests the trend may be more robust and systematic than if it were based on isolated technical breakthroughs alone.

The sudden arrival of powerful AI like GPT-3 was a non-repeatable event: training on the entire internet and all existing books. With this data now fully "eaten," future advancements will feel more incremental, relying on the slower process of generating new, high-quality expert data.

Citing Leopold Aschenbrenner's essay, the hosts argue that AI progress isn't linear. It relies on "unhobblings," fundamental scientific discoveries like new attention mechanisms that unlock massive, non-linear gains and defy simple extrapolation of current trends.

Third-party tracker METR observed that the length of tasks AI models can complete autonomously was doubling every seven months. However, a recent proprietary model shattered this trend, operating independently for nearly double the expected duration (15 hours vs. an expected 8). This signals that AI advancement is accelerating unpredictably, outpacing the prior trend line.

While the long-term trend for AI capability shows a seven-month doubling time, data since 2024 suggests an acceleration to a four-month doubling time. This faster pace has been a much better predictor of recent model performance, indicating a potential shift to a super-exponential trajectory.
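To see how much the two doubling times diverge, here is a quick extrapolation sketch; the starting horizon and elapsed time are assumed values for illustration, not METR's published figures.

```python
# Extrapolating autonomous-task horizon under different doubling times.
# Starting horizon and elapsed months are assumed values for illustration.
def horizon(start_hours: float, months_elapsed: float, doubling_months: float) -> float:
    """Task horizon after exponential growth with a fixed doubling time."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

start = 1.0     # hypothetical 1-hour horizon at the starting point
elapsed = 21    # hypothetical 21 months later

print(f"7-month doubling: {horizon(start, elapsed, 7):.1f} h")   # ~8 h
print(f"4-month doubling: {horizon(start, elapsed, 4):.1f} h")   # ~38 h
```

Under these assumed inputs, the same elapsed time yields roughly 8 hours of autonomous work on the seven-month trend but nearly 40 on the four-month trend, which is why the faster line has tracked recent models better.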

The media portrays AI development as volatile, with huge breakthroughs and sudden plateaus. The reality inside labs like OpenAI is a steady, continuous process of experimentation, stacking small wins, and consistent scaling. The internal experience is one of "chugging along."

The true measure of a new AI model's power isn't just improved benchmarks, but a qualitative shift in fluency that makes using previous versions feel "painful." This experiential gap, where the old model suddenly feels worse at everything, is the real indicator of a breakthrough.

Bret Taylor explains why AI progress can seem to have stalled: improvements for casual tasks like trip planning are marginal, but the reasoning capabilities of newer models have improved dramatically for complex work like software development and proving mathematical theorems.