While discourse often focuses on exponential growth, the AI Safety Report presents 'progress stalls' as a serious scenario, drawing an analogy to passenger aircraft speeds, which plateaued after 1960. This highlights that continued rapid advancement is not guaranteed due to potential technical or resource bottlenecks.
Contrary to the feeling of rapid technological change, economic data show that productivity growth has been extremely low for the past 50 years. AI is not just another incremental improvement; it's a potential shock to a long-stagnant system, which is crucial context for its impact.
The future of AI is hard to predict because increasing a model's scale often produces 'emergent properties'—new capabilities that were not designed or anticipated. This means even experts are often surprised by what new, larger models can do, making the development path non-linear.
AI progress is not linear. While the industry anticipated a "year of agents" delivering practical assistance, the most significant recent gains have come in specialized academic domains like competitive mathematics, underscoring how unpredictable AI development remains.
With past shifts like the internet or mobile, we understood the physical constraints (e.g., modem speeds, battery life). With generative AI, we lack a theoretical understanding of its scaling potential, making it impossible to forecast its ultimate capabilities beyond "vibes-based" guesses from experts.
Concerns about AI's negative effects, like cognitive offloading in students, are valid but should be analyzed separately from the objective advancements in AI capabilities, which continue on a strong upward trend. Conflating the two leads to flawed conclusions about progress stalling.
The true exponential acceleration towards AGI is currently limited by a human bottleneck: our speed at prompting AI and, more importantly, our capacity to manually validate its work. Hockey-stick growth will begin only when AI can reliably validate its own output, closing the productivity loop.
The media portrays AI development as volatile, with huge breakthroughs and sudden plateaus. The reality inside labs like OpenAI is a steady, continuous process of experimentation, stacking small wins, and consistent scaling. The internal experience is one of "chugging along."
AI capabilities will improve dramatically by 2026, creating a sense of rapid advancement. However, achieving Artificial General Intelligence (AGI) is proving far more complex than predicted, and it will not be realized by 2027. The pace of progress and the difficulty of AGI are two distinct, coexisting truths.
For any given failure mode, there is a point where further technical research stops being the primary solution. Risks become dominated by institutional or human factors, such as a company's deliberate choice not to prioritize safety. At this stage, policy and governance become more critical than algorithms.
The history of nuclear power, where regulation turned an exponential growth curve into a flat S-curve, serves as a powerful warning for AI. It suggests that AI's biggest long-term hurdle may not be technical limits but regulatory intervention that stifles its potential for a "fast takeoff," effectively regulating the technology out of rapid adoption.