We scan new podcasts and send you the top 5 insights daily.
Unlike LLMs that train on the existing internet, robotics lacks a pre-training dataset for the physical world. This forces companies like Encore to build a full-stack solution combining a software platform for data management with human-led operations for data collection, annotation, and even real-time remote robot piloting for exception handling.
The primary challenge in robotics AI is the lack of real-world training data. To solve this, models are bootstrapped using a combination of learning from human lifestyle videos and extensive simulation environments. This yields a foundational model capable of initial deployment, whose real-world use then feeds a data flywheel.
To overcome the data bottleneck in robotics, Sunday developed gloves that capture human hand movements. This allows them to train their robot's manipulation skills without needing a physical robot for teleoperation. By separating data gathering (gloves) from execution (robot), they can scale their training dataset far more efficiently than competitors who rely on robot-in-the-loop data collection methods.
The rapid progress of many LLMs was possible because they could leverage the same massive public dataset: the internet. In robotics, no such public corpus of robot interaction data exists. This "data void" means progress is tied to a company's ability to generate its own proprietary data.
Unlike consumer AI trained on public internet data, industrial AI requires vast, proprietary datasets from the physical world (e.g., sensor readings from a submarine hull). Gecko Robotics is building this data corpus via its robots, creating an advantage that's difficult to replicate.
Progress in robotics for household tasks is limited by a scarcity of real-world training data, not mechanical engineering. Companies are now deploying capital-intensive "in-field" teams to collect multi-modal data from inside homes, capturing the complexity of mundane human activities to train more capable robots.
The future of valuable AI lies not in models trained on the abundant public internet, but in those built on scarce, proprietary data. For fields like robotics and biology, this data doesn't exist to be scraped; it must be actively created, making the data generation process itself the key competitive moat.
While Figure's CEO criticizes competitors for using hidden human operators in robot demo videos, this 'Wizard of Oz' technique is a critical data-gathering and development stage. Just as early Waymo cars had human operators, teleoperation is how companies collect the training data needed for true autonomy.
Ken Goldberg quantifies the challenge: the text data used to train LLMs would take a human 100,000 years to read. Equivalent data for robot manipulation (vision-to-control signals) doesn't exist online and must be generated from scratch, explaining the slower progress in physical AI.
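Goldberg's figure can be sanity-checked with a rough back-of-envelope calculation. All the numbers below (token count, tokens-to-words ratio, reading speed) are illustrative assumptions, not values from the podcast:

```python
# Back-of-envelope: how long would a human need to read an LLM's training text?
# All constants are assumptions for illustration only.
TOKENS = 10e12          # assume ~10 trillion training tokens
WORDS_PER_TOKEN = 0.75  # rough English tokens-to-words ratio
WPM = 250               # typical adult reading speed, words per minute

words = TOKENS * WORDS_PER_TOKEN
minutes = words / WPM
years_nonstop = minutes / (60 * 24 * 365)  # reading 24/7, no breaks
print(f"{years_nonstop:,.0f} years of nonstop reading")
```

Even under these conservative assumptions the result lands in the tens of thousands of years of round-the-clock reading, the same order of magnitude as Goldberg's estimate; no comparable corpus of vision-to-control data exists for manipulation.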
General-purpose robotics lacks standardized interfaces between hardware, data, and AI. This makes a full-stack, in-house approach essential because the definition of 'good' for each component is constantly co-evolving. Partnering is difficult when your standard of quality is a moving target.
The "bitter lesson" (that simple methods plus scale win) holds for language because the training data (text) matches the output (text). Robotics faces a critical misalignment: models are trained on passive web videos but must output physical actions in a 3D world. This data gap is a fundamental hurdle that pure scaling cannot solve.