Initial public fear over new technologies like AI therapy, while seemingly negative, is actually productive. It creates the social and political pressure needed to establish essential safety guardrails and regulations, ultimately leading to safer long-term adoption.
The common analogy of AI to electricity is dangerously rosy. AI is more like fire: a transformative tool that, if mismanaged or weaponized, can spread uncontrollably with devastating consequences. This mental model better prepares us for AI's inherent risks and accelerating power.
The research group initially avoided mental health because the stakes were so high. They reversed course once it became clear that people were already turning to AI for mental health support without scientific guidance, making inaction the greater risk. The goal is to provide leadership where none exists.
Instead of defaulting to skepticism and looking for reasons why something won't work, the most productive starting point is to imagine how big and impactful a new idea could become. After exploring the optimistic case, you can then systematically address and mitigate the risks.
New technologies perceived as job-destroying, like AI, face significant public and regulatory risk. A powerful defense is to make the general public owners of the technology: when people have a financial stake in its success, they are far more likely to defend it than to fight it.
Contrary to expectations, those closest to the mental health crisis (physicians, therapists) are the most optimistic about AI's potential, while the AI scientists who build the underlying models are often the most fearful, revealing a key disconnect between those who apply the technology and those who create it.
The rhetoric around AI's existential risk can double as a competitive tactic: some labs have used these narratives to scare off investors, regulators, and potential competitors, effectively 'pulling up the ladder' to cement their market lead under the guise of safety.
Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.
AI will create negative consequences, just as the internet spawned the dark web. However, its potential to solve major problems like disease and energy scarcity makes its development a net positive for society, justifying the risks that must be managed along the way.
Despite broad, bipartisan public opposition to AI due to fears of job loss and misinformation, corporations and investors are rushing to adopt it. This push is not fueled by consumer demand but by a 'FOMO-driven gold rush' for profits, creating a dangerous disconnect between the technology's backers and the society it impacts.
Employees hesitate to use new AI tools for fear of looking foolish or getting fired for misuse. Successful adoption depends less on training courses and more on creating a safe environment with clear guardrails that encourages experimentation without penalty.