Tesla's camera-only system gives it a significant cost advantage over Waymo's LiDAR-equipped vehicles. However, the data cited here show a Waymo vehicle crashing roughly once every 400,000 miles versus a Tesla once every 50,000. Tesla's ability to scale hinges entirely on proving that its cheaper technology can close that safety gap.
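The figures above imply an eightfold difference in crash rate. A quick sketch, using only the per-mile numbers quoted in this section (which are claims from the source, not independently verified data):

```python
# Crash figures as quoted above: one crash per N miles driven.
waymo_miles_per_crash = 400_000
tesla_miles_per_crash = 50_000

# Normalize to crashes per million miles for comparison.
waymo_rate = 1_000_000 / waymo_miles_per_crash   # 2.5 crashes per million miles
tesla_rate = 1_000_000 / tesla_miles_per_crash   # 20.0 crashes per million miles

gap = tesla_rate / waymo_rate
print(f"Tesla's quoted crash rate is {gap:.0f}x Waymo's")  # prints ...8x...
```

Normalizing to crashes per million miles makes the ratio explicit: closing the gap means an eightfold improvement, not an incremental one.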
During a San Francisco power outage, Waymo's map-based cars failed while Teslas were reportedly unaffected. This suggests that end-to-end AI systems are less brittle and better at handling novel "edge cases" than more rigid, heuristic-based autonomous driving models.
After proving its robo-taxis are 90% safer than human drivers, Waymo is now making them more "confidently assertive" to better navigate real-world traffic. This counter-intuitive shift from passive safety to calculated aggression is a necessary step to improve efficiency and reduce delays, highlighting the trade-offs required for autonomous vehicle integration.
Early self-driving cars were too cautious, becoming hazards on the road. By strictly adhering to the speed limit or being too polite at intersections, they disrupted traffic flow. Waymo learned its cars must drive assertively, even "aggressively," to safely integrate with human drivers.
By eschewing expensive LiDAR, Tesla lowers production costs, enabling massive fleet deployment. This scale generates orders of magnitude more real-world driving data than competitors like Waymo collect, creating a data advantage that will likely lead to market dominance in autonomous intelligence.
As tech giants like Google and Amazon assemble the key components of the autonomy stack (compute, software, connectivity), the real differentiator becomes the ability to manufacture cars at scale. Tesla's established manufacturing prowess is a massive advantage that others must acquire or build to compete.
A technology like Waymo's self-driving cars could be statistically safer than human drivers yet still be rejected by the public. Society is unwilling to accept thousands of deaths directly caused by a single corporate algorithm, even if it represents a net improvement over the chaotic, decentralized risk of human drivers.
Autonomous systems can perceive and react to dangers beyond human capability. The example of a Cybertruck autonomously accelerating to lessen the impact of an impending high-speed rear-end collision, from a vehicle its human driver never even saw, showcases a level of predictive safety that humans cannot replicate, moving beyond simple accident avoidance.
Initially criticized for forgoing expensive LiDAR, Tesla's vision-based self-driving system compelled it to solve the harder, more scalable problem of AI-based reasoning. This long-term bet on foundation models for driving is now converging with the direction competitors are taking.
The public holds new technologies to a much higher safety standard than human performance. Waymo could deploy cars that are statistically safer than human drivers, but society would not accept them killing tens of thousands of people annually, even if that represents a net improvement. This demonstrates the need for near-perfection in high-stakes tech launches.
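A back-of-the-envelope illustration of this acceptance problem. The 40,000 figure below is an assumed ballpark for annual US road deaths, and the 90% figure comes from the Waymo safety claim earlier in this section; both are inputs to a sketch, not established results:

```python
# Assumption: roughly 40,000 annual US road deaths (widely cited ballpark).
human_deaths_per_year = 40_000

# The 90%-safer figure quoted for Waymo earlier in this section.
percent_safer = 90

# Even a fleet that is 90% safer would still be linked to thousands of
# deaths per year, all attributable to a single corporate algorithm.
av_deaths_per_year = human_deaths_per_year * (100 - percent_safer) // 100
print(av_deaths_per_year)  # prints 4000
```

The point of the sketch: "statistically safer" and "acceptably safe to the public" diverge because the residual deaths are concentrated on one accountable actor rather than dispersed across millions of drivers.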
The debate over putting cameras in a robot's palm is analogous to Tesla's refusal to use LiDAR. Ken Goldberg suggests that just as LiDAR helps with edge cases in driving, in-hand cameras provide crucial, low-cost data for manipulation. Musk's purist approach may be a self-imposed handicap in both domains.