With Waymo's data showing dramatic potential to reduce traffic deaths, the primary barrier to adoption is shifting from technology to politics. A neurosurgeon argues that moneyed interests and city councils are producing regulatory capture, blocking a proven public health intervention and reframing a safety story as a risk story.

Related Insights

When investing in high-risk, long-development categories like autonomous vehicles, the key signal is undeniable consumer pull. Once Waymo became the preferred choice in San Francisco, it validated the investment thesis despite a decade of development and high costs.

After proving its robo-taxis are 90% safer than human drivers, Waymo is now making them more "confidently assertive" to better navigate real-world traffic. This counter-intuitive shift from passive safety to calculated aggression is a necessary step to improve efficiency and reduce delays, highlighting the trade-offs required for autonomous vehicle integration.

Initial public fear over new technologies like AI therapy, while seemingly negative, is actually productive. It creates the social and political pressure needed to establish essential safety guardrails and regulations, ultimately leading to safer long-term adoption.

Early self-driving cars were too cautious, becoming hazards on the road. By strictly adhering to the speed limit or being too polite at intersections, they disrupted traffic flow. Waymo learned its cars must drive assertively, even "aggressively," to safely integrate with human drivers.

Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.

The classic "trolley problem" will become a product differentiator for autonomous vehicles. Car manufacturers will have to encode specific values—such as prioritizing passenger versus pedestrian safety—into their AI, creating a competitive market where consumers choose a vehicle based on its moral code.

Despite rapid software advances like deep learning, the deployment of self-driving cars was a 20-year process because it had to integrate with the mature automotive industry's supply chains, infrastructure, and business models. This serves as a reminder that AI's real-world impact is often constrained by the readiness of the sectors it aims to disrupt.

An anonymous CEO of a leading AI company told Stuart Russell that a massive disaster is the *best* possible outcome. They believe it is the only event shocking enough to force governments to finally implement meaningful safety regulations, which they currently refuse to do despite private warnings.

Advocating for a single national AI policy is often a strategic move by tech lobbyists and friendly politicians to preempt and invalidate stricter regulations emerging at the state level. Under the guise of creating a unified standard, this approach effectively ensures the actual policy is weak or non-existent, allowing the industry to operate with minimal oversight.

The lack of widespread outrage after a Waymo vehicle killed a beloved cat in tech-skeptical San Francisco is a telling sign. It suggests society is crossing an acceptance threshold for autonomous technology, implicitly acknowledging that, while the technology is imperfect, the path to fewer accidents overall involves tolerating isolated incidents with non-human victims.