UL's relevance isn't based on a single source of power. It's a combination of insurance companies requiring certification to underwrite policies, various government agencies mandating it, and the high stakes of the US tort system, where certification can be a key defense in liability lawsuits.

Related Insights

The insurance industry acts as a powerful de facto regulator. As major insurers seek to exclude AI-related liabilities from policies, they could dramatically slow AI deployment, because businesses would be unwilling to shoulder the unmitigated financial risk themselves.

Contrary to the belief that companies resist regulation, UL's customers often initiate the standards-creation process for new innovations. They view universal standards as a way to de-risk technology, ensure fair competition, and create a stable, trusted marketplace.

The model combines insurance (financial protection), standards (best practices), and audits (verification). Insurers fund robust standards, while enterprises comply to get cheaper insurance. This market mechanism aligns incentives for both rapid AI adoption and robust security, treating them as mutually reinforcing rather than a trade-off.

While foundation models carry systemic risk, AI applications make "thicker promises" to enterprises, like guaranteeing specific outcomes in customer support. This specificity creates more immediate and tangible business risks (e.g., brand disasters, financial errors), making the application layer the primary area where trust and insurance are needed now.

UL achieves widespread adoption not through federal law, but by embedding safety standards into a single major city's legislation (e.g., NYC for e-bikes). This forces manufacturers to adopt that standard globally to avoid producing multiple, costly product versions.

UL Solutions CEO Jennifer Scanlon enforces a strict policy of never overruling the scientific and engineering judgments of her lab technicians. This protects the integrity of their testing process, which is the foundation of the company's brand and business.

Without clear government standards for AI safety, there is no "safe harbor" from lawsuits. Courts are therefore likely to apply strict liability, under which a company can be held liable even without negligence. That legal uncertainty makes risk unquantifiable for insurers, forcing them to exit the market.

To compete with for-profit rivals, UL split its testing business (UL Solutions) from its standards and research arms. The for-profit company went public in a secondary offering, with the proceeds funding the non-profit's endowment to continue its safety science work.

Protecting the UL mark's value requires active enforcement. The company maintains a market surveillance and anti-counterfeiting team that collaborates with customs and competitors to find and legally pursue sellers on platforms like Amazon using fraudulent UL certifications.

Faced with non-deterministic AI models, UL's approach to safety certification isn't to test the code's output. It audits the development process, focusing on over 200 criteria for how humans make decisions about data veracity, bias, transparency, and privacy.