© 2026 RiffOn. All rights reserved.


A Crash Course on AI Standards with Google DeepMind's Owen Larter

The AI Policy Podcast · Mar 6, 2026

Google DeepMind's Owen Larter explains how AI standards for interoperability and safety are crucial for innovation, trust, and governance.

Nations Wield Technical Standards as a Subtle Form of Economic Protectionism

Beyond technical merit, standards can be a geopolitical tool. By setting unique national standards, such as for electrical plugs or AI reporting requirements, a country can favor domestic manufacturers that are already compliant, creating a subtle but effective barrier for foreign competitors.

Control of Standard Essential Patents Creates Potent Geopolitical Leverage in Tech

When a company's patented technology becomes essential to an international standard (like 5G), it creates a chokepoint. The US leveraged this by imposing export controls on American firms like Qualcomm, preventing Chinese companies like ZTE from legally building standard-compliant cell phones globally.

Safe AI's Path Mirrors Electricity's Journey from Dangerous Novelty to Utility

Like early electricity, which caused fires and electrocutions, AI is a powerful, scary, and poorly understood technology. The historical process of making electricity safe through standards for measurement (Volts, Amps, Ohms) and devices (fuses) provides a clear roadmap for governing AI risks.

AI Adoption Hinges on Trust-Building Standards, Mirroring How HTTPS Unlocked E-commerce

Early internet users feared online payments until the HTTPS encryption standard provided a secure, trustworthy process. Similarly, broad AI adoption requires process standards for safety and risk management to build the public and enterprise trust necessary for a boom in the AI-enabled economy.

AI's Rapid Pace Makes Slow, Formal Standards Bodies like ISO Ineffective for Now

Formal standards development organizations (SDOs) such as the ISO operate on 12- to 24-month timelines. This deliberate, consensus-based process is too slow to keep pace with the rapid evolution of AI technology, creating a governance gap that requires more agile, iterative approaches.

Google Creates De Facto AI Standards Through Open Source to Bypass Slow Processes

Instead of waiting for formal bodies, Google DeepMind is developing and open-sourcing its own technical standards for AI agents. This strategy aims to solve immediate interoperability problems and establish a market-wide de facto standard through rapid, widespread adoption, bypassing slower, formal channels.

The EU AI Act's Rules Rely on Critical Safety Standards That Do Not Yet Exist

The EU AI Act mandates compliance with "harmonized standards" for high-risk AI systems. However, many of these essential standards are still undeveloped, creating a high-stakes race for standards bodies to define the rules before the regulation becomes fully enforceable; the Act is effectively "gesturing to things that have not yet been developed".
