The dream of a do-everything humanoid reflects a top-down approach that will take a long time to pay off. Roboticist Ken Goldberg argues for a bottom-up strategy: first master specific, valuable tasks, such as folding clothes or making coffee, until they work reliably. General intelligence will emerge over time from combining these skills.
OpenAI co-founder Ilya Sutskever suggests the path to AGI is not a pre-trained, all-knowing model, but an AI that can learn any task as effectively as a human. This reframes the challenge from transferring knowledge to building a universal learning algorithm, and it changes how such systems would be deployed.
The path to a general-purpose AI model is not to tackle the entire problem at once. A more effective strategy is to start with a highly constrained domain, like generating only Minecraft videos. Once the model works reliably in that narrow distribution, incrementally expand the training data and complexity, using each step as a foundation for the next.
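A minimal sketch of what this staged expansion could look like as a training loop; the dataset names, reliability threshold, and helper functions (load_dataset, train_epochs, eval_quality) are hypothetical placeholders for illustration, not anything described in the source.

```python
# Hypothetical curriculum: train on a narrow video domain first, then widen.
# All names below are illustrative placeholders, not an actual API.

CURRICULUM = [
    {"data": "minecraft_clips",            "epochs": 10},  # single, narrow distribution
    {"data": "minecraft_plus_other_games", "epochs": 10},  # adjacent domains
    {"data": "general_web_video",          "epochs": 20},  # full, open-ended distribution
]

def train_with_curriculum(model, load_dataset, train_epochs, eval_quality, threshold=0.9):
    """Expand the training distribution only after the model is reliable on the current one."""
    for stage in CURRICULUM:
        data = load_dataset(stage["data"])
        train_epochs(model, data, stage["epochs"])
        score = eval_quality(model, data)
        if score < threshold:
            # Don't move on: each narrow distribution is the foundation for the next stage.
            raise RuntimeError(f"Stage '{stage['data']}' not reliable yet (score={score:.2f})")
    return model
```

The gate between stages is the whole point: the model must be solid on the narrow distribution before that distribution is widened.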
The popular conception of AGI as a pre-trained system that knows everything is flawed. A more realistic and powerful goal is an AI with a human-like ability for continual learning. This system wouldn't be deployed as a finished product, but as a 'super-intelligent 15-year-old' that learns and adapts to specific roles.
Instead of a single "AGI" event, AI progress is better understood in three stages. We're in the "powerful tools" era. Next come "powerful agents" that act autonomously. The final stage, "autonomous organizations" that outcompete human-led ones, is much further off because of capability "spikiness": systems that are superhuman at some subtasks remain unreliable at others.
Leading roboticist Ken Goldberg clarifies that while legged robots show immense progress in navigation, fine motor skills for tasks like tying shoelaces are far beyond current capabilities. This is due to challenges in sensing and handling deformable, unpredictable objects in the real world.
While autonomous driving is complex, roboticist Ken Goldberg argues it's an easier problem than dexterous manipulation. Driving fundamentally involves avoiding contact with objects, whereas manipulation requires precisely controlled contact and interaction with them, a much harder challenge.
The current focus on pre-training AI for fluency with specific tools overlooks the crucial need for on-the-job, context-specific learning. Humans excel because they don't need to rehearse every task in advance. This gap suggests AGI is further away than some believe: true intelligence requires self-directed, continuous learning in novel environments.
Ken Goldberg quantifies the challenge: the text data used to train LLMs would take a human 100,000 years to read. Equivalent data for robot manipulation (vision-to-control signals) doesn't exist online and must be generated from scratch, explaining the slower progress in physical AI.
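A back-of-the-envelope check of that figure; the token count, words-per-token ratio, and reading speed below are assumed values, not numbers given in the source.

```python
# Rough sanity check on the "100,000 years of reading" claim (assumed inputs).
training_tokens = 15e12          # ~15 trillion tokens, a plausible frontier-LLM corpus size
words_per_token = 0.75           # rough English average
reading_speed_wpm = 250          # typical adult reading speed, words per minute

total_words = training_tokens * words_per_token
minutes = total_words / reading_speed_wpm
years_nonstop = minutes / (60 * 24 * 365)

print(f"{years_nonstop:,.0f} years of nonstop reading")   # ~85,600 years
```

Under these assumptions the estimate lands in the same ballpark as the 100,000-year figure, even for reading around the clock.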
The founder of the robotics OS Lightberry argues that the industry's "ChatGPT moment" won't be when a robot can fold laundry. Instead, it will be when robots are commonly seen interacting with people in public roles, as shop assistants, event staff, or security, achieving social acceptance first.
The next leap in AI will come from integrating general-purpose reasoning models with specialized models for domains like biology or robotics. This fusion, creating a "single unified intelligence" across modalities, is the base case for achieving superintelligence.