AI's capabilities are highly uneven. Models are already superhuman in specific domains like speaking 150 languages or possessing encyclopedic knowledge. However, they still fail at tasks typical humans find easy, such as continual learning or nuanced visual reasoning like understanding perspective in a photo.
DeepMind's Shane Legg argues that human intelligence is not the upper limit because the brain is constrained by biology (roughly 20 watts of power, slow electrochemical signaling). Data centers enjoy orders-of-magnitude advantages in power, bandwidth, and signal speed, so physics leaves enormous headroom above human-level intelligence.
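A rough back-of-the-envelope calculation makes that gap concrete. The figures below are approximate public estimates (brain power draw, axon conduction speed, a large data center's power budget, signal speed in optical fiber), not numbers from Legg himself:

```python
# Rough order-of-magnitude comparison of biological vs. silicon substrates.
# All figures are approximate public estimates, not values from the interview.

BRAIN_POWER_W = 20                 # human brain, ~20 watts
DATACENTER_POWER_W = 50e6          # a large AI data center, tens of megawatts

NEURON_SIGNAL_M_PER_S = 100        # fast myelinated axons, ~100 m/s
FIBER_SIGNAL_M_PER_S = 2e8         # light in optical fiber, roughly 2/3 of c

power_ratio = DATACENTER_POWER_W / BRAIN_POWER_W
speed_ratio = FIBER_SIGNAL_M_PER_S / NEURON_SIGNAL_M_PER_S

print(f"Power budget: ~{power_ratio:.0e}x the brain")   # on the order of millions
print(f"Signal speed: ~{speed_ratio:.0e}x faster")      # on the order of millions
```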
Solving key AI weaknesses like continual learning or robust reasoning isn't just a matter of bigger models or more data. Shane Legg argues it requires fundamental algorithmic and architectural changes, such as building new processes for integrating information over time, akin to an episodic memory.
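As an illustration of what "integrating information over time" might look like, here is a minimal sketch of an episodic memory store that an agent writes to as it acts and queries later. The class and retrieval scheme are hypothetical placeholders, not DeepMind's actual architecture:

```python
# Minimal sketch of an episodic-memory mechanism: the agent records experiences
# as they happen and retrieves relevant ones later, instead of relying only on
# frozen weights. Illustrative only; the embedding source is assumed to exist.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class EpisodicMemory:
    keys: list = field(default_factory=list)     # one embedding per stored episode
    values: list = field(default_factory=list)   # the episode contents themselves

    def write(self, embedding: np.ndarray, episode: str) -> None:
        """Record a new experience as it happens (a continual-learning step)."""
        self.keys.append(embedding / np.linalg.norm(embedding))
        self.values.append(episode)

    def recall(self, query: np.ndarray, k: int = 3) -> list[str]:
        """Retrieve the k most similar past episodes by cosine similarity."""
        if not self.keys:
            return []
        q = query / np.linalg.norm(query)
        sims = np.stack(self.keys) @ q
        top = np.argsort(sims)[::-1][:k]
        return [self.values[i] for i in top]
```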
Shane Legg proposes that "Minimal AGI" is achieved when an AI can perform the cognitive tasks a typical person can. It's not about matching Einstein, but about no longer failing at tasks we'd expect an average human to complete. This sets a more concrete and achievable initial benchmark for the field.
Contrary to popular belief, highly compensated cognitive work (lawyers, software engineers, financiers) is the most exposed to AI disruption. If a job can be done remotely with just a laptop, an advanced AI can likely operate in that same space. Physical jobs requiring robotics will be protected for longer due to cost and complexity.
Instead of relying on instinctual "System 1" rules, advanced AI should use deliberative "System 2" reasoning: explicitly analyzing consequences and applying ethical frameworks before acting. Because that reasoning is written out, it can also be inspected ("chain-of-thought monitoring"), and such AIs could potentially become more consistently ethical than humans, who are prone to gut reactions.
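A toy sketch of that loop, assuming a hypothetical `model.generate` interface and a placeholder rule list, shows how explicit deliberation makes the reasoning trace available for monitoring before any action is taken:

```python
# Hypothetical sketch of "System 2" deliberation plus chain-of-thought
# monitoring: the model produces an explicit reasoning trace before acting,
# and a separate checker inspects that trace against simple rules.
# The rule list and the model interface are illustrative, not a real API.

FORBIDDEN_MARKERS = ["deceive the user", "conceal this from", "ignore the harm"]


def deliberate(model, situation: str) -> tuple[str, str]:
    """Ask the model to reason step by step about consequences before acting."""
    trace = model.generate(
        f"Think step by step about the consequences of each option:\n{situation}"
    )
    action = model.generate(f"Given this reasoning, choose an action:\n{trace}")
    return trace, action


def monitor(trace: str) -> bool:
    """Flag reasoning traces that contain clearly problematic intentions."""
    return not any(marker in trace.lower() for marker in FORBIDDEN_MARKERS)


def act(model, situation: str) -> str:
    trace, action = deliberate(model, situation)
    if not monitor(trace):
        return "escalate to human review"   # deliberation surfaced a problem
    return action
```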
Shane Legg observes that non-technical people often recognize AI's general intelligence because it already surpasses them in many areas. In contrast, experts in specific fields tend to believe their domain is too unique to be impacted, underestimating the technology's rapid, exponential progress while clinging to outdated experiences.
Shane Legg suggests a two-phase test for "Minimal AGI." First, it must pass a broad suite of tasks that typical humans can do. Second, an adversarial team gets months to probe the AI, looking for any cognitive task a typical person can do that the AI cannot. If they fail to find one, the AI passes.
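Expressed as a hypothetical evaluation harness (the task suite, adversary interface, and pass criterion are placeholders, not an official benchmark), the two-phase structure looks like this:

```python
# Sketch of the two-phase test as described: phase 1 is a fixed suite of tasks
# typical humans can do; phase 2 gives an adversarial team a time budget to find
# any ordinary cognitive task the AI fails. Interfaces here are assumptions.
from collections.abc import Callable, Iterable

Task = tuple[str, Callable[[str], bool]]   # (task prompt, checker for the AI's answer)


def phase_one(ai: Callable[[str], str], suite: Iterable[Task]) -> bool:
    """Pass only if the AI completes every task in the broad human-level suite."""
    return all(check(ai(task)) for task, check in suite)


def phase_two(ai: Callable[[str], str], adversaries, months: int = 6) -> bool:
    """Adversaries repeatedly propose ordinary cognitive tasks; one confirmed
    failure is enough to reject the Minimal AGI claim."""
    for task, check in adversaries.propose_tasks(budget_months=months):
        if not check(ai(task)):
            return False   # found a task a typical person can do but the AI cannot
    return True            # no counterexample found within the time budget


def is_minimal_agi(ai, suite, adversaries) -> bool:
    return phase_one(ai, suite) and phase_two(ai, adversaries)
```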
