The pursuit of AGI is misguided. The real value of AI lies in creating reliable, interpretable, and scalable software systems that solve specific problems, much like traditional engineering. The goal should be "Artificial Programmable Intelligence" (API), not AGI.
As AI models achieve previously defined benchmarks for intelligence (e.g., reasoning), their failure to generate transformative economic value reveals that those benchmarks were insufficient. On this view, "shifting the goalposts" for AGI is not evasion but a rational response to discovering that our understanding of intelligence was too narrow. Progress in impressiveness doesn't equate to progress in usefulness.
The concept of AGI is so ill-defined that it becomes a catch-all for magical thinking, both utopian and dystopian. Casado argues the term erodes the quality of discourse by crowding out focus on concrete, solvable problems and measurable technological progress.
As models become more powerful, the primary challenge shifts from improving capabilities to creating better ways for humans to specify what they want. Natural language is too ambiguous and code too rigid, creating a need for a new abstraction layer for intent.
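To make the idea concrete, here is a minimal sketch of what such an intent layer might look like: more structured than prose, less rigid than code. All names here (IntentSpec, compile_to_plan, the example fields) are hypothetical illustrations, not an existing API.

```python
# A toy "intent layer": the user states a goal plus constraints and
# preferences declaratively; a compiler turns them into checkable steps.
from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    goal: str                                         # what the user wants, in plain language
    must: list[str] = field(default_factory=list)     # hard constraints
    prefer: list[str] = field(default_factory=list)   # soft preferences

def compile_to_plan(spec: IntentSpec) -> list[str]:
    """Turn a declarative intent into an ordered, verifiable plan."""
    steps = [f"draft: {spec.goal}"]
    steps += [f"verify: {c}" for c in spec.must]      # constraints become checks
    steps += [f"optimize: {p}" for p in spec.prefer]  # preferences become passes
    return steps

spec = IntentSpec(
    goal="summarize the Q3 report",
    must=["cite page numbers", "under 200 words"],
    prefer=["plain language"],
)
print(compile_to_plan(spec))
```

The design point is that constraints are first-class and machine-checkable, which prose cannot guarantee, while the goal itself stays free-form, which code cannot express.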
The history of AI shows that complex, hand-engineered approaches to intelligence are often superseded by simpler, more general methods that scale more effectively. This "bitter lesson" warns against building brittle solutions that will become obsolete as core models improve.
The popular conception of AGI as a pre-trained system that knows everything is flawed. A more realistic and powerful goal is an AI with a human-like ability for continual learning. This system wouldn't be deployed as a finished product, but as a 'super-intelligent 15-year-old' that learns and adapts to specific roles.
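A toy sketch of what "learning on the job" means mechanically: the model keeps updating from each new example after deployment rather than being frozen at pre-training. This fits y = 2x with per-example SGD; real continual learning also has to manage forgetting, which this omits entirely.

```python
# Continual (online) learning in miniature: one gradient step per
# incoming example, for as long as the data stream keeps arriving.
def sgd_step(w: float, x: float, y: float, lr: float = 0.05) -> float:
    pred = w * x
    grad = 2 * (pred - y) * x   # d/dw of the squared error (wx - y)^2
    return w - lr * grad

w = 0.0
stream = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0, 1.5, 2.5)]  # "on the job" data
for x, y in stream * 20:        # the system never stops training
    w = sgd_step(w, x, y)
print(f"learned weight: {w:.3f}")  # converges toward 2.0
```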
Current AI progress reflects not true, scalable intelligence but a "brute force" effort. Amjad Masad contends that models improve via massive, manual data labeling and contrived RL environments built for specific tasks, a method he calls "functional AGI," rather than a fundamental breakthrough in understanding intelligence.
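For a sense of what a "contrived" task-specific RL environment looks like, here is a minimal sketch: the reward is hand-crafted for one narrow skill (emitting a fixed action sequence), which is the brute-force pattern described above. This is an illustration, not Masad's actual training setup.

```python
# A narrow, hand-built RL environment: the episode is "produce the
# exact target sequence, token by token," and reward is hard-coded.
import random

class NarrowTaskEnv:
    TARGET = ["open", "read", "close"]

    def reset(self) -> int:
        self.t = 0
        return self.t  # observation: current position in the sequence

    def step(self, action: str):
        reward = 1.0 if action == self.TARGET[self.t] else -1.0
        self.t += 1
        done = self.t == len(self.TARGET)
        return self.t, reward, done

env = NarrowTaskEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = random.choice(["open", "read", "close"])  # stand-in for a policy
    obs, reward, done = env.step(action)
    total += reward
print(f"episode return: {total}")
```

An agent trained to saturation here masters exactly one workflow; nothing about the skill transfers, which is the critique in a nutshell.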
To effectively interact with the world and use a computer, an AI is most powerful when it can write code. OpenAI's thesis is that even agents for non-technical users will be "coding agents" under the hood, as code is the most robust and versatile way for AI to perform tasks.
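A toy sketch of the "coding agent" pattern: the user asks in plain language, and the agent answers by generating and executing a program rather than manipulating a UI. The generate_code() stub stands in for a real model call; nothing here reflects OpenAI's actual implementation.

```python
# The coding-agent loop in miniature: request -> generated program ->
# sandboxed execution -> result returned to the user.
import subprocess, sys, tempfile

def generate_code(request: str) -> str:
    """Stub: a real agent would ask a model to write this program."""
    return "print(sum(range(1, 101)))"  # canned answer for the demo

def run_sandboxed(source: str) -> str:
    """Execute generated code in a separate interpreter process."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=10)
    return result.stdout.strip()

request = "add up the numbers from 1 to 100"
print(run_sandboxed(generate_code(request)))  # -> 5050
```

The non-technical user never sees the code; they see the answer. Code is the intermediate representation because it is precise, composable, and verifiable in ways that direct tool manipulation is not.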
Dr. Li views the distinction between AI and AGI as largely semantic and market-driven, rather than a clear scientific threshold. The original goal of AI research, dating back to Turing, was to create machines that can think and act like humans. The term "AGI" doesn't fundamentally change this North Star for scientists.
The focus on achieving Artificial General Intelligence (AGI) is a distraction. Today's AI models are already so capable that they can fundamentally transform business operations and workflows if applied to the right use cases.