The media portrays AI development as volatile, with huge breakthroughs and sudden plateaus. The reality inside labs like OpenAI is a steady, continuous process of experimentation, stacking small wins, and consistent scaling. The internal experience is one of "chugging along."

Related Insights

OpenAI intentionally releases powerful technologies like Sora in stages, viewing it as the "GPT-3.5 moment for video." This approach avoids "dropping bombshells" and allows society to gradually understand, adapt to, and establish norms for the technology's long-term impact.

The era of guaranteed progress by simply scaling up compute and data for pre-training is ending. With massive compute now available, the bottleneck is no longer resources but fundamental ideas. The AI field is re-entering a period where novel research, not just scaling existing recipes, will drive the next breakthroughs.

AI models will produce a few stunning, one-off results in fields like materials science. These isolated successes will trigger an outsized hype cycle proclaiming "science is solved," masking the slower, quieter trend of AI's true impact on scientific discovery: profound but incremental.

OpenAI operates with a "truly bottoms-up" structure because rigid long-term plans are impossible when model capabilities advance unpredictably. The company aims loosely at a horizon of a year or more but relies on rapid, empirical experimentation for short-term product development, embracing the uncertainty.

After facing backlash for over-promising on past releases, OpenAI has adopted a "lowball" communication strategy. The company intentionally underplayed the GPT-5.1 update to avoid being "crushed" by criticism when perceived improvements don't match the hype, letting positive user discoveries drive the narrative instead.

Companies like OpenAI and Anthropic are not just building better models; their strategic goal is an "automated AI researcher." A model that can perform the work of AI researchers would create a feedback loop that dramatically accelerates the pace of innovation, and the labs see that acceleration as the key to getting so far ahead that no competitor can catch up.

The discourse around AGI is caught in a paradox. Either it is already emerging, in which case it's less a cataclysmic event and more an incremental software improvement, or it remains a perpetually receding future goal. This captures the tension between the hype of superhuman intelligence and the reality of software development.

Despite a media narrative of AI stagnation, the reality is an accelerating arms race. A rapid-fire succession of major model updates from OpenAI (GPT-5.2), Google (Gemini 3), and Anthropic (Claude 4.5) within a span of months shows that the pace of innovation is increasing, not slowing down.

The perception of stalled progress in GPT-5 is misleading. It stems from three factors: frequent, smaller updates that "boiled the frog," leaving no single release to feel like a leap; a technically flawed initial rollout in which queries were routed to a weaker model; and advances concentrated in specialized areas less visible to the average user.