New and controversial fields face a difficult trade-off. Excessive caution means delaying action and allowing existing harms to continue. However, reckless action risks implementing counterproductive policies that become entrenched and hard to reverse, damaging the field's credibility. The key is finding a middle path of deliberate, monitored action.
Large enterprises face a critical dilemma with new technologies like AI. Moving too slowly cedes the market and leads to irrelevance. However, moving too quickly, without clear direction or a focus on feasibility, results in millions of dollars wasted on failed initiatives.
For highly complex and uncertain fields like wild animal welfare, avoid advocating for large, irreversible solutions. Instead, focus on small-scale, reversible actions that are plausibly beneficial (e.g., bird-safe glass). This approach allows for learning and builds momentum without risking catastrophic, unintended consequences.
Society rarely bans powerful new technologies, no matter how dangerous. Instead, as with fire, we develop systems to manage risk (e.g., fire departments, alarms). This provides a historical lens for current debates around transformative technologies like AI, suggesting adaptation over prohibition.
Lengauer outlines three models: 'fat' (bureaucratic), 'slim' (reckless), and 'responsible.' The ideal 'responsible' path is the hardest, requiring a 'nose for value' to make constant, difficult judgments about which steps are essential to move forward quickly but safely, without excessive bureaucracy or dangerous corner-cutting.
The "if one person dies, it's one too many" mentality, while sounding noble, is framed as a sign of poor leadership. Effective leaders must synthesize complex data and make decisions based on second and third-order effects, not just a single, emotionally resonant metric like zero risk.
Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.
A pessimistic stance on new technology, even if initially moderate, often escalates into advocacy for draconian policies. The 1970s ban on civilian nuclear power is a prime example of a fear-based decision with catastrophic long-term consequences, including strengthening geopolitical rivals.
In high-stakes fields like medtech, the "fail fast" startup mantra is irresponsible. The goal should be to "learn fast" instead—maximizing learning cycles internally through research and simulation to de-risk products before they have real-world consequences for patient safety.
Other scientific fields operate under a "precautionary principle," avoiding experiments with even a small chance of catastrophic outcomes (e.g., creating dangerous new lifeforms). The AI industry, however, proceeds with what Bengio calls "crazy risks," ignoring this fundamental safety doctrine.
When building a new and potentially controversial field, strategic prioritization is key. Start with issues that are familiar and relatable to a broader audience (e.g., bird-safe glass in cities) to build institutional support and avoid immediate alienation. This creates a foundation before exploring more radical or abstract concepts.