Unlike cloud-reliant AI, Figure's humanoids perform all computation onboard. This architectural choice enables the high-frequency (200 Hz+) control loops needed for balance and manipulation, and keeps the robot fully functional and responsive without depending on Wi-Fi or 5G connectivity.
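To see why onboard compute matters here, consider a minimal fixed-rate control loop: at 200 Hz the controller must hit a deadline every 5 ms, a budget a cloud round-trip cannot reliably meet. This is an illustrative sketch only; the function names (`read_sensors`, `compute_torques`, `apply_torques`) are placeholders, not Figure's actual API.

```python
import time

def run_control_loop(read_sensors, compute_torques, apply_torques,
                     hz=200, duration_s=1.0):
    """Toy fixed-rate control loop. Everything inside the loop runs
    locally -- there is no network round-trip between sensing and
    actuation."""
    period = 1.0 / hz              # 5 ms at 200 Hz
    next_deadline = time.monotonic()
    end = next_deadline + duration_s
    ticks = 0
    while next_deadline < end:
        state = read_sensors()            # joint encoders, IMU, cameras
        torques = compute_torques(state)  # balance/manipulation policy
        apply_torques(torques)
        ticks += 1
        next_deadline += period
        # Sleep until the next deadline; a missed deadline is a dropped
        # control tick, which is why network latency is unacceptable here.
        time.sleep(max(0.0, next_deadline - time.monotonic()))
    return ticks
```

A round-trip to a cloud endpoint typically costs tens of milliseconds, several whole control periods, before any inference time is counted.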

Related Insights

Human cognition is a full-body experience, not just a brain function. Current AIs are 'disembodied brains,' fundamentally limited by their lack of physical interaction with the world. Integrating AI into robotics is the necessary next step toward more holistic intelligence.

Figure chose to develop its AI systems in-house rather than rely on its partnership with OpenAI. The reason was that its own team proved superior at the highly specialized task of designing, embedding, and running models on physical robot hardware, a challenge distinct from training purely digital LLMs.

Unlike fixed industrial robots, a simple emergency power-off is unsafe for humanoids. They require constant energy to balance, so an emergency stop would cause them to fall over, creating a new and unpredictable hazard. This fundamental difference requires an entirely new set of safety protocols for the industry.

AR and robotics are bottlenecked by software's inability to truly understand the 3D world. Spatial intelligence is positioned as the fundamental operating system that connects a device's digital "brain" to physical reality. This layer is crucial for enabling meaningful interaction and maturing the hardware platforms.

Moving a robot from a lab demo to a commercial system reveals that AI is just one component. Success depends heavily on traditional engineering for sensor calibration, arm accuracy, system speed, and reliability. These unglamorous details are critical for performance in the real world.

GM's next-generation platform, debuting in 2028, centralizes all vehicle compute and uses Ethernet networking. This isn't just about more processing power; it enables sub-millisecond response times for dynamic systems like suspension, a 10x improvement. This architecture abstracts hardware from software, allowing for much faster and more comprehensive over-the-air updates.

Classical robots required expensive, rigid, and precise hardware because they were blind. Modern AI perception acts as 'eyes', allowing robots to correct for inaccuracies in real-time. This enables the use of cheaper, compliant, and inherently safer mechanical components, fundamentally changing hardware design philosophy.
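The closed-loop correction idea above can be sketched in one dimension: a hypothetical actuator that overshoots every command by 10% would miss badly under blind open-loop control, yet a perception-driven controller that observes the residual error still converges on the target. All names and numbers here are illustrative assumptions, not any real robot's parameters.

```python
def move_with_perception(target, gain=0.5, tolerance=0.01, backlash=1.10):
    """Toy visual-servoing-style loop: cheap, imprecise hardware
    (modeled by the 10% 'backlash' overshoot) reaches the target
    because the observed error is fed back each step."""
    position = 0.0
    steps = 0
    while abs(target - position) > tolerance:
        observed_error = target - position   # the 'eyes'
        command = gain * observed_error      # proportional correction
        position += command * backlash       # imprecise execution
        steps += 1
    return position, steps
```

Each iteration shrinks the error by a constant factor (here 1 - 0.5 * 1.10 = 0.45), so the loop converges geometrically despite the hardware inaccuracy; a blind single command of `target` would land 10% off with no way to know it.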

A humanoid robot with 40 joints has more potential configurations than there are atoms in the observable universe (360^40, treating each joint as having 360 one-degree positions). This combinatorial explosion makes it impossible to solve movement and interaction with traditional, hard-coded rules. Consequently, learned approaches like neural networks are not just an optimization but a fundamental necessity.
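The arithmetic behind that claim checks out. A quick verification, discretizing each joint to 360 one-degree positions and using the common ~10^80 estimate for atoms in the observable universe:

```python
import math

# 40 joints, each with 360 discrete positions -> 360**40 configurations.
digits = 40 * math.log10(360)
print(f"360**40 ~= 10**{digits:.1f}")   # about 10**102

# Roughly 10**80 atoms in the observable universe, so the configuration
# space is about 10**22 times larger.
assert 360**40 > 10**80
```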

Unlike older robots requiring precise maps and trajectory calculations, new robots use internet-scale common sense and learn motion by mimicking humans or simulations. This combination has “wiped the slate clean” for what is possible in the field.

NVIDIA's robotics strategy extends far beyond just selling chips. By unveiling world foundation models for physical AI (Cosmos), orchestration tooling (Osmo), and an integrated simulation ecosystem, they are making a deliberate play to own the foundational platform for physical AI, positioning themselves as the default 'operating system' for the entire robotics industry.