The public holds new technologies to a much higher safety standard than human performance. Waymo could deploy cars that are statistically safer than human drivers, but society would not accept them killing tens of thousands of people annually, even if that figure represented a net improvement over the status quo. This demonstrates the need for near-perfection in high-stakes tech launches.
Users of newly deployed AI tools, especially in sales, have no patience for mistakes. While a human making an error receives coaching and a second chance, an AI's single failure can cause users to abandon the tool permanently due to a complete loss of trust.
In contrast to the 'move fast' ethos of tech rivals, GM views its intense focus on safety as a core business strategy. The company believes that building and retaining customer trust is paramount for new technologies like autonomous driving. It sees a single major incident as catastrophic to public perception, making a slower, safer rollout a long-term competitive advantage.
After proving its robo-taxis are 90% safer than human drivers, Waymo is now making them more "confidently assertive" to better navigate real-world traffic. This counter-intuitive shift from passive safety to calculated aggression is a necessary step to improve efficiency and reduce delays, highlighting the trade-offs required for autonomous vehicle integration.
Early self-driving cars were too cautious, becoming hazards on the road. By strictly adhering to the speed limit or being too polite at intersections, they disrupted traffic flow. Waymo learned its cars must drive assertively, even "aggressively," to safely integrate with human drivers.
A technology like Waymo's self-driving cars could be statistically safer than human drivers yet still be rejected by the public. Society is unwilling to accept thousands of deaths directly caused by a single corporate algorithm, even if it represents a net improvement over the chaotic, decentralized risk of human drivers.
Dropbox's AI strategy is informed by the 'march of nines' concept from self-driving cars, where each step up in reliability (90% to 99% to 99.9%) requires immense effort. This suggests that creating commercially viable, trustworthy AI agents is less about achieving AGI and more about the grueling engineering work to ensure near-perfect reliability for enterprise tasks.
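The 'march of nines' can be made concrete with a little arithmetic on chained agent tasks. The sketch below is illustrative, not from the source: it assumes a hypothetical enterprise workflow of 50 independent steps, each with the same per-step success rate, and shows why moving from 90% to 99% to 99.9% reliability transforms commercial viability.

```python
# Illustrative sketch (assumption: independent, identically reliable steps).
# Shows how per-step reliability compounds across a multi-step agent workflow.

def workflow_success_rate(per_step_reliability: float, steps: int) -> float:
    """Probability that every step in the workflow succeeds."""
    return per_step_reliability ** steps

STEPS = 50  # hypothetical enterprise task chaining 50 actions

for label, reliability in [("90%", 0.90), ("99%", 0.99), ("99.9%", 0.999)]:
    rate = workflow_success_rate(reliability, STEPS)
    print(f"{label} per step -> {rate:.1%} of {STEPS}-step workflows complete")
# 90%   per step ->  0.5% complete
# 99%   per step -> 60.5% complete
# 99.9% per step -> 95.1% complete
```

Each added nine cuts the per-step failure rate tenfold, yet the compounded effect on end-to-end task completion is what users actually experience, which is why that last stretch of "grueling engineering work" dominates the effort.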
The classic "trolley problem" will become a product differentiator for autonomous vehicles. Car manufacturers will have to encode specific values—such as prioritizing passenger versus pedestrian safety—into their AI, creating a competitive market where consumers choose a vehicle based on its moral code.
With Waymo's data showing a dramatic potential to reduce traffic deaths, the primary barrier to adoption is shifting from technology to politics. A neurosurgeon argues that moneyed interests and city councils are enabling regulatory capture, blocking a proven public health intervention by framing a safety story as a risk story.
The benchmark for AI reliability isn't 100% perfection. It's simply being better than the inconsistent, error-prone humans it augments. Since human error is the root cause of most critical failures (like cyber breaches), this is an achievable and highly valuable standard.
The lack of widespread outrage after a Waymo vehicle killed a beloved cat in tech-skeptical San Francisco is a telling sign. It suggests society is crossing an acceptance threshold for autonomous technology, implicitly acknowledging that the path to fewer accidents overall involves tolerating isolated incidents, even non-human casualties, from an imperfect system.