In autonomous systems, LiDAR is invaluable during R&D because it provides ground-truth depth data. That data trains models so that cheaper, camera-only production vehicles can accurately infer depth, making LiDAR a temporary means to an end rather than the final sensor suite.
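The training setup behind this can be sketched as a masked loss: sparse LiDAR returns supervise a camera depth network only at pixels where a return exists. The function and toy values below are illustrative, not any vendor's actual pipeline.

```python
import numpy as np

def lidar_supervised_depth_loss(pred_depth, lidar_depth, valid_mask):
    """L1 loss between camera-predicted depth and LiDAR ground truth.

    LiDAR returns are sparse, so the loss is computed only where a
    return exists (valid_mask). Real pipelines would first project
    the point cloud into the image plane; that step is skipped here.
    """
    diff = np.abs(pred_depth - lidar_depth)
    return diff[valid_mask].mean()

# Toy example: a 4x4 depth map with LiDAR returns on a few pixels.
pred = np.full((4, 4), 10.0)   # network predicts 10 m everywhere
gt = np.full((4, 4), 12.0)     # LiDAR says the true depth is 12 m
mask = np.zeros((4, 4), dtype=bool)
mask[::2, ::2] = True          # sparse returns on 4 of 16 pixels
loss = lidar_supervised_depth_loss(pred, gt, mask)
print(loss)  # 2.0
```

At scale, minimizing this loss over millions of frames is what lets the camera-only model inherit the LiDAR's depth accuracy.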
The shift to AI makes multi-sensor arrays (including LiDAR) more valuable, not less. In older rules-based systems, fusing data from different sensors required complex hand-written logic; learned models instead benefit directly from more diverse input, which improves training of the core driving model. Combined with rapidly falling LiDAR prices, this strengthens the case for a multi-sensor approach.
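One reason learned models fuse sensors more easily: adding a modality can be as simple as widening the model's input. A minimal sketch, with hypothetical feature sizes and plain concatenation standing in for learned fusion:

```python
import numpy as np

def fuse_sensor_features(features):
    """Concatenate per-sensor feature vectors into one input for the
    driving model. With a learned model, adding a sensor just widens
    the input; no hand-written fusion rules are needed.
    """
    return np.concatenate([features[name] for name in sorted(features)])

# Hypothetical per-sensor feature vectors (sizes invented for illustration).
obs = {
    "camera": np.random.rand(8),
    "lidar": np.random.rand(4),
    "radar": np.random.rand(2),
}
fused = fuse_sensor_features(obs)
print(fused.shape)  # (14,)
```

Dropping or adding an entry in `obs` changes only the input width, which is why diverse sensor suites compose cheaply in this paradigm.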
Tesla's camera-only system gives it a significant cost advantage over Waymo's LiDAR-equipped vehicles. However, the figures cited show a Waymo vehicle crashing about every 400,000 miles versus every 50,000 miles for a Tesla. Tesla's ability to scale hinges entirely on proving its cheaper technology can become as safe.
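The arithmetic behind that safety gap, using the figures quoted above:

```python
# Figures quoted above: miles driven per crash for each fleet.
waymo_miles_per_crash = 400_000
tesla_miles_per_crash = 50_000

safety_gap = waymo_miles_per_crash / tesla_miles_per_crash
print(safety_gap)  # 8.0
```

In other words, Tesla would need roughly an eight-fold improvement in miles per crash to match the cited Waymo figure.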
A Waymo vehicle detected and reacted to a pedestrian completely occluded by a bus. The AI system achieved this by interpreting faint LiDAR returns that had bounced off the person's feet beneath the bus, a feat impossible for a human driver and a powerful demonstration of emergent capabilities.
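A rough sketch of the kind of signal involved: isolating weak, near-ground LiDAR returns inside a visually occluded region. Every threshold here is invented for illustration; a real perception stack learns such cues rather than hard-coding them.

```python
import numpy as np

def faint_ground_returns(points, intensity, region,
                         max_height=0.3, lo=0.01, hi=0.1):
    """Select weak LiDAR returns near the ground inside a region that is
    visually occluded (e.g. under a bus). All thresholds are invented.

    points: (N, 3) xyz in metres; intensity: (N,) return strength in [0, 1];
    region: (xmin, xmax, ymin, ymax) bounding the occluded area.
    """
    xmin, xmax, ymin, ymax = region
    in_region = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
                 (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
    near_ground = points[:, 2] < max_height
    faint = (intensity > lo) & (intensity < hi)  # weak but nonzero return
    return points[in_region & near_ground & faint]

pts = np.array([[5.0, 0.0, 0.1],    # faint return off a foot, under the bus
                [5.0, 0.0, 1.5],    # strong return off the bus body
                [20.0, 0.0, 0.1]])  # ground return outside the region
inten = np.array([0.05, 0.9, 0.05])
hits = faint_ground_returns(pts, inten, region=(4.0, 6.0, -1.0, 1.0))
print(len(hits))  # 1
```

The point is that signals far too noisy for a rule-based filter to trust can still carry information a learned model exploits.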
By eschewing expensive LiDAR, Tesla lowers production costs, enabling massive fleet deployment. This scale generates orders of magnitude more real-world driving data than competitors like Waymo can collect, a data advantage that will likely lead to market dominance in autonomous intelligence.
While public focus is often on expensive sensors like LiDAR, Rivian's CEO states the onboard compute for AI inference is an order of magnitude more expensive than the entire perception stack. This cost reality drove Rivian to design its own chip in-house, enabling it to deploy high-level autonomy capabilities across all its vehicles affordably.
Waymo uses a foundation model to create specialized, high-capacity "teacher" models (Driver, Simulator, Critic) offline. These teachers then distill their knowledge into smaller, efficient "student" models that can run in real-time on the vehicle, balancing massive computational power with on-device constraints.
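Distillation in miniature: a large "teacher" produces soft targets offline, and a much smaller "student" is fit to imitate them. The toy functions below are stand-ins; Waymo's actual Driver, Simulator, and Critic models are not public.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a high-capacity offline model, standing in for
# Waymo's large foundation-model-derived teachers.
W_big = 0.1 * rng.normal(size=(16, 4))

def teacher(x):
    return np.tanh(x @ W_big)

# Distillation: run the teacher offline to produce soft targets, then fit
# a small linear "student" cheap enough to run in real time on-vehicle.
X = rng.normal(size=(1000, 16))   # training inputs (random stand-ins)
Y = teacher(X)                    # soft targets from the teacher
W_small, *_ = np.linalg.lstsq(X, Y, rcond=None)

err = np.abs(X @ W_small - Y).mean()
print(f"student imitates teacher to within {err:.3f} mean error")
```

The student never sees raw labels, only the teacher's outputs, which is how massive offline compute gets compressed into something that fits on-device constraints.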
Tesla was initially criticized for forgoing expensive LiDAR, but its camera-only bet compelled it to solve the harder, more scalable problem of AI-based reasoning. That long-term bet on foundation models for driving is now converging with the direction competitors are also taking.
Wayve treats the sensor debate as a distraction. Their goal is to build an AI flexible enough to work with any configuration—camera-only, camera-radar, or multi-sensor. This pragmatism allows them to adapt their software to different OEM partners and vehicle price points without being locked into a single hardware ideology.
The winning vehicle in the 2005 DARPA Grand Challenge, built by a Stanford team led by Sebastian Thrun (who went on to found the Google self-driving car project, now Waymo), used a clever machine-learning approach. It overlaid precise laser sensor data onto an ordinary video camera feed, teaching the system to recognize the color and texture of "safe" terrain and extrapolate a drivable path far ahead.
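The idea fits in a few lines: use laser-verified near-field pixels to model the appearance of safe terrain, then classify the rest of the image by similarity. The simple color-distance threshold below is a stand-in for the adaptive Gaussian-mixture color model the Stanford team described.

```python
import numpy as np

def extrapolate_drivable(image, laser_safe_mask, tolerance=30.0):
    """Stanley-style self-supervision, simplified: take pixels the laser
    has verified as flat and drivable, model their color, then mark any
    similarly colored pixel as drivable, far beyond laser range.

    image: (H, W, 3) uint8; laser_safe_mask: (H, W) bool near-field labels.
    """
    safe_color = image[laser_safe_mask].mean(axis=0)
    dist = np.linalg.norm(image.astype(float) - safe_color, axis=2)
    return dist < tolerance

# Toy scene: gray road on the left, green grass on the right.
img = np.zeros((4, 6, 3), dtype=np.uint8)
img[:, :3] = 100             # road pixels
img[:, 3:] = (40, 180, 40)   # grass pixels
near = np.zeros((4, 6), dtype=bool)
near[3, :3] = True           # laser confirms only the bottom road strip
drivable = extrapolate_drivable(img, near)
print(drivable[:, :3].all(), drivable[:, 3:].any())  # True False
```

A few laser-labeled pixels are enough to classify the whole road, which is what let the vehicle drive faster than its laser range alone would allow.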