AGI is beyond today's computers, but it is not theoretically impossible. Creating a generally intelligent system will require a new physical substrate—likely biological or chemical—that can replicate the brain's enormous, dynamic configurational space, which silicon architectures cannot replicate.

Related Insights

The human brain's wiring admits more potential connection patterns than there are atoms in the observable universe. This immense, dynamic 'configurational space'—not raw processing speed—is the source of its power. Silicon chips are fundamentally different and cannot replicate this morphing, high-dimensional architecture.
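The scale of that claim can be checked with back-of-the-envelope arithmetic. The neuron count below is a rough literature figure, not from the source: even counting only which pairwise connections exist or not, the number of possible wiring patterns dwarfs the roughly 10^80 atoms in the observable universe.

```python
import math

NEURONS = 8.6e10       # ~86 billion neurons (common rough estimate)
LOG10_ATOMS = 80       # observable universe: ~10^80 atoms

# Each ordered neuron pair is a potential connection that is either
# present or absent, giving 2^(N^2) possible wiring patterns.
potential_pairs = NEURONS ** 2
log10_patterns = potential_pairs * math.log10(2)

print(f"log10(wiring patterns) ~ {log10_patterns:.2e}")  # ~2.23e+21
print(log10_patterns > LOG10_ATOMS)  # True: far more patterns than atoms
```

Even this undercounts the real space, since it ignores synaptic strengths and the brain's ability to rewire over time.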

Human cognition is a full-body experience, not just a brain function. Current AIs are 'disembodied brains,' fundamentally limited by their lack of physical interaction with the world. Integrating AI into robotics is the necessary next step toward more holistic intelligence.

To achieve 1000x efficiency, Unconventional AI is abandoning the digital abstraction (bits representing numbers) that has defined computing for 80 years. Instead, they are co-designing hardware and algorithms where the physics of the substrate itself defines the neural network, much like a biological brain.
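One well-known instance of letting substrate physics do the math—offered here as an illustration of the idea, not as Unconventional AI's actual design—is the analog crossbar: weights are stored as physical conductances, and Ohm's and Kirchhoff's laws compute the matrix-vector product directly, with no bits representing numbers. A minimal NumPy simulation of the concept:

```python
import numpy as np

# Analog crossbar: each weight is a physical conductance G[i, j] (siemens).
# Applying voltages V[j] to the columns makes each row wire sum currents
# I[i] = sum_j G[i, j] * V[j]. Ohm's law plus Kirchhoff's current law
# perform the multiply-accumulate; no digital arithmetic is involved.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1e-3, size=(4, 3))   # conductance matrix (the "weights")
V = np.array([0.5, 1.0, 0.2])             # input voltages (the activations)

I = G @ V   # in a real crossbar, the physics computes this sum instantly

print(I)    # output currents: one matrix-vector product, "for free"
```

The digital simulation above costs O(rows × cols) operations; in the physical device the same result appears as settled currents, which is where the claimed efficiency gains come from.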

Language is just one 'keyhole' into intelligence. True artificial general intelligence (AGI) requires 'world modeling'—a spatial intelligence that understands geometry, physics, and actions. This capability to represent and interact with the state of the world is the next critical phase of AI development beyond current language models.

Today's AI models are powerful but lack a true sense of causality, leading to illogical errors. Unconventional AI's Naveen Rao hypothesizes that building AI on substrates with inherent time and dynamics—mimicking the physical world—is the key to developing this missing causal understanding.

The debate over whether "true" AGI will be a monolithic model or use external scaffolding is misguided. Our only existing proof of general intelligence—the human brain—is a complex, scaffolded system with specialized components. This suggests scaffolding is not a crutch for AI, but a natural feature of advanced intelligence.

Arvind Krishna firmly believes that today's LLM technology path is insufficient for reaching Artificial General Intelligence (AGI), and he gives it extremely low odds. In his view, a breakthrough will require fusing current models with structured, hard knowledge—a field known as neurosymbolic AI—before AGI becomes plausible.
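A toy sketch of the neurosymbolic pattern Krishna points to (the function names and rule base here are hypothetical, purely for illustration): a neural component proposes ranked candidate answers, and a symbolic layer of hard knowledge rejects proposals that contradict known facts.

```python
# Hypothetical neurosymbolic loop: neural proposer + symbolic verifier.

def neural_propose(question):
    # Stand-in for a neural model: ranked guesses with confidences.
    return [("Paris", 0.6), ("Lyon", 0.3), ("Berlin", 0.1)]

# Structured "hard knowledge": facts the symbolic layer treats as ground truth.
KNOWLEDGE = {("capital_of", "France"): "Paris"}

def answer(question, relation, entity):
    fact = KNOWLEDGE.get((relation, entity))
    for candidate, confidence in neural_propose(question):
        # Symbolic check: discard fluent-but-wrong neural guesses.
        if fact is None or candidate == fact:
            return candidate
    return None

print(answer("What is the capital of France?", "capital_of", "France"))
# -> Paris
```

The point of the pattern is that the symbolic layer supplies the hard constraints that a purely statistical model cannot guarantee.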

Consciousness isn't an emergent property of computation. Instead, physical systems like brains—or potentially AI—act as interfaces. Creating a conscious AI isn't about birthing a new awareness from silicon, but about engineering a system that opens a new "portal" into the fundamental network of conscious agents that already exists outside spacetime.

Instead of seizing human industry, a superintelligent AI could leverage its understanding of biology to create its own self-replicating systems. It could design organisms to grow its computational hardware, a far faster and more efficient path to power than industrial takeover.

DeepMind's Shane Legg argues that human intelligence is not the upper limit because the brain is constrained by biology (a ~20-watt power budget, slow electrochemical signals). Data centers have orders-of-magnitude advantages in power, bandwidth, and signal speed, which he argues makes superhuman AI a physical certainty.
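Legg's physical-headroom argument can be made concrete with rough numbers. The figures below are common ballpark estimates, not from the source: fast myelinated axons carry spikes at up to ~100 m/s, while electrical signals in a data center travel at an appreciable fraction of the speed of light, and a single facility can draw megawatts against the brain's ~20 W.

```python
BRAIN_POWER_W = 20          # rough metabolic budget of a human brain
DATACENTER_POWER_W = 20e6   # a mid-size ~20 MW facility (ballpark)

AXON_SPEED_M_S = 100        # fast myelinated axons, upper end
WIRE_SPEED_M_S = 2e8        # ~0.7c signal propagation in copper/fiber

print(f"power headroom:  ~{DATACENTER_POWER_W / BRAIN_POWER_W:.0e}x")   # ~1e+06x
print(f"signal headroom: ~{WIRE_SPEED_M_S / AXON_SPEED_M_S:.0e}x")      # ~2e+06x
```

Roughly six orders of magnitude on each axis, which is the headroom the argument rests on.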