
The traditional government model of setting a regulation and waiting years to assess it is obsolete for AI. A new approach is needed: a dynamic board of government, industry, and academic leaders collaborating to make and update rules in real time.

Related Insights

Formal standards development organizations (SDOs) like the ISO operate on a 12-24 month timeline. This deliberate, consensus-based process is too slow to keep pace with the rapid evolution of AI technology, creating a governance gap that requires more agile, iterative approaches.

In the AI era, the pace of change is so fast that by the time academic studies on "what works" are published, the underlying technology is already outdated. Leaders must therefore rely on conviction and rapid experimentation rather than waiting for validated evidence to act.

Traditional regulation is ill-equipped for AI's complexity and opacity. The podcast proposes a new model inspired by the Federal Reserve's oversight of banks: embedding technically expert supervisors full-time inside major AI labs. This would allow for proactive monitoring of internal risk models and decisions, rather than merely reacting to disasters after they occur.

AI's real power lies not just in helping people comply with complex regulations but in helping policymakers simplify them. AI can analyze thousands of pages of rules to identify what is vestigial, conflicting, or redundant, enabling the simplification required for scalable government services.

MedTech AI companies can speed up regulatory approval by building a trusted, real-time post-market surveillance system. This shifts the burden of proof from pre-market studies to continuous real-world evidence, giving regulators the confidence to approve innovations faster and turning them from blockers into partners.

The mismatch between exponentially advancing AI and slow, "medieval" institutions is a core risk. Instead of only focusing on recursively self-improving AI, we should apply technology to create self-improving governance systems that can adapt and update at the same speed as the challenges they face.

Unlike traditional internet protocols that matured slowly, AI technologies are advancing at an exponential rate. An AI standards body must operate at a much higher velocity. The Agentic AI Foundation is structured to facilitate this rapid, "dog years" pace of development, which is essential to remain relevant.

Our legal framework, which relies on precedent and slow, deliberate change, cannot keep up with the exponential advancement of AI. This fundamental mismatch creates a regulatory crisis where laws are instantly obsolete, suggesting the need for a new paradigm like 'lightning round legislation' to govern emerging tech.

Because AI is so new, there are no established best practices or regulations for its use. This creates a critical but temporary window where every organization's choices matter more. The precedents set now by early adopters in business, government, and education will significantly influence how AI is integrated into society.

The rapid pace of AI development has outstripped government's ability to regulate. In this vacuum, the idea of AI companies writing their own binding constitutions emerges. While not a substitute for democratic oversight, these frameworks are presented as a necessary, if imperfect, mechanism to impose limits on corporate power before formal legislation can catch up.