The winning vehicle in the 2005 DARPA Grand Challenge, Stanford's entry led by Sebastian Thrun (who went on to found Google's self-driving car project, the predecessor of Waymo), used a clever machine learning approach. It overlaid precise laser range data onto a regular video camera feed, teaching the system the color and texture of terrain the lasers had verified as safe, so it could extrapolate a drivable path far beyond the lasers' reach.
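The idea amounts to self-supervised terrain classification: near-field pixels that the laser confirms as drivable become training labels for a simple color model, which then classifies terrain beyond the laser's range. A minimal sketch under stated assumptions (the per-channel Gaussian color model and all names here are illustrative, not the actual race implementation):

```python
import numpy as np

def train_terrain_model(near_pixels_rgb, laser_drivable):
    """Fit a simple per-channel Gaussian color model of 'safe' terrain,
    using near-field pixels the laser scan has confirmed as drivable."""
    safe = near_pixels_rgb[laser_drivable == 1]
    return safe.mean(axis=0), safe.std(axis=0) + 1e-6

def classify_far_field(far_pixels_rgb, mean, std, z_thresh=2.0):
    """Mark distant pixels as drivable when their color lies within
    z_thresh standard deviations of the learned safe-terrain color."""
    z = np.abs((far_pixels_rgb - mean) / std)
    return z.max(axis=-1) < z_thresh
```

The laser supplies labels only where it can see; the camera, once taught what "safe" looks like, extends that judgment down the road.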

Related Insights

The shift to AI makes multi-sensor arrays (including LiDAR) more valuable. In older rules-based systems, fusing data from different sensors was complex; end-to-end AI models, by contrast, benefit directly from more diverse input data, which improves the training of the core driving model. With LiDAR getting steadily cheaper, the multi-sensor approach becomes all the more attractive.
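The fusion point can be illustrated with a toy late-fusion sketch (function name and feature shapes are hypothetical): for a learned model, adding a sensor simply widens the input vector, rather than multiplying hand-written fusion rules.

```python
import numpy as np

def fuse_features(camera_feat, lidar_feat=None, radar_feat=None):
    """Toy late fusion: concatenate whatever per-frame sensor features
    are available into one input vector for the learned driving model.
    A new sensor enlarges the input instead of adding fusion logic."""
    feats = [f for f in (camera_feat, lidar_feat, radar_feat) if f is not None]
    return np.concatenate(feats, axis=-1)
```

The same model architecture trains on the wider vector; the network, not the engineer, learns how the modalities interact.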

After proving its robo-taxis are 90% safer than human drivers, Waymo is now making them more "confidently assertive" to better navigate real-world traffic. This counter-intuitive shift from passive safety to calculated aggression is a necessary step to improve efficiency and reduce delays, highlighting the trade-offs required for autonomous vehicle integration.

Early self-driving cars were too cautious, becoming hazards on the road. By strictly adhering to the speed limit or being too polite at intersections, they disrupted traffic flow. Waymo learned its cars must drive assertively, even "aggressively," to safely integrate with human drivers.

While autonomous driving is complex, roboticist Ken Goldberg argues it's an easier problem than dexterous manipulation. Driving fundamentally involves avoiding contact with objects, whereas manipulation requires precisely controlled contact and interaction with them, a much harder challenge.

Rivian's CEO explains that early autonomous systems, which were based on rigid rules-based "planners," have been superseded by end-to-end AI. This new approach uses a large "foundation model for driving" that can improve continuously with more data, breaking through the performance plateau of the older method.
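The contrast can be sketched in miniature (toy code, purely illustrative): a rules-based planner is a pile of hand-written branches that plateaus as edge cases multiply, while an end-to-end policy is a single learned function from sensor features to controls that keeps improving with more data.

```python
import numpy as np

def rules_planner(obs):
    """Hand-written planner: every new edge case needs another branch."""
    if obs["pedestrian_ahead"]:
        return "brake"
    if obs["lead_gap_m"] < 10:
        return "slow"
    return "cruise"

class LearnedPolicy:
    """End-to-end stand-in: one function from features to an action.
    A toy linear model plays the role of a large foundation model;
    more data means better weights, with no new branches to write."""
    ACTIONS = ("brake", "slow", "cruise")

    def __init__(self, weights):
        self.weights = np.asarray(weights)  # shape (n_features, 3)

    def act(self, features):
        scores = np.asarray(features) @ self.weights
        return self.ACTIONS[int(np.argmax(scores))]
```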

The AI's ability to handle novel situations isn't just an emergent property of scale. Wayve actively trains "world models", internal generative simulators that let the AI reason about what might happen next, leading to sophisticated behaviors like nudging into intersections or slowing in fog.
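A world model in this sense is a learned simulator the policy can query before acting. A minimal sketch of planning by imagination (the dynamics and cost functions are stand-ins, not any company's actual system):

```python
def imagine(state, action, dynamics):
    """One imagined step: predict the next state for a candidate action."""
    return dynamics(state, action)

def plan_by_imagination(state, actions, dynamics, cost):
    """Choose the action whose imagined outcome scores best, e.g. only
    nudging forward when the predicted next state is low-cost."""
    return min(actions, key=lambda a: cost(imagine(state, a, dynamics)))
```

In a real system the dynamics model is a large generative network and the rollout covers many steps, but the loop is the same: simulate first, then act.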

Initially criticized for forgoing expensive LiDAR, Tesla's vision-only approach compelled the company to solve the harder, more scalable problem of AI-based reasoning. This long-term bet on foundation models for driving is now converging with the direction competitors are also taking.

Wayve treats the sensor debate as a distraction. Their goal is to build an AI flexible enough to work with any configuration—camera-only, camera-radar, or multi-sensor. This pragmatism allows them to adapt their software to different OEM partners and vehicle price points without being locked into a single hardware ideology.

A human driver's lesson from a mistake is isolated. In contrast, when one self-driving car makes an error and learns, the correction is instantly propagated to all other cars in the network. This collective learning creates an exponential improvement curve that individual humans cannot match.
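The mechanism can be sketched as a shared model (toy code; the class and parameter names are hypothetical): every car reads the same global parameters, so a fix learned from one car's mistake is immediately visible to the whole fleet.

```python
class FleetModel:
    """Toy shared driving model: one global parameter store that every
    car in the fleet reads, so a single car's correction reaches all."""

    def __init__(self, params):
        self._params = dict(params)

    def report_correction(self, key, value):
        """One car's learned fix becomes the fleet-wide setting."""
        self._params[key] = value

    def car_view(self):
        """What any car in the fleet currently sees."""
        return dict(self._params)
```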

Unlike older robots requiring precise maps and trajectory calculations, new robots use internet-scale common sense and learn motion by mimicking humans or simulations. This combination has “wiped the slate clean” for what is possible in the field.

Sebastian Thrun's 2005 DARPA-Winning Car Used Machine Learning to See Safe Paths | RiffOn