
Formal standards development organizations (SDOs) such as ISO operate on 12-to-24-month timelines. This deliberate, consensus-based process is too slow to keep pace with the rapid evolution of AI technology, creating a governance gap that calls for more agile, iterative approaches.

Related Insights

Large enterprises navigate a critical paradox with new technology like AI. Moving too slowly cedes the market and leads to irrelevance. However, moving too quickly without clear direction or a focus on feasibility results in wasting millions of dollars on failed initiatives.

While a unified data platform is non-negotiable for AI, leaders should resist standardizing AI tools and frameworks too early. Given the rapid pace of innovation, it's better to allow for experimentation and "let the flowers bloom." This dual approach—a stable data foundation with flexible tooling—enables both governance and agility.

Unlike mature tech products with annual releases, the AI model landscape is in a constant state of flux. Companies are incentivized to launch new versions immediately to claim the top spot on performance benchmarks, leading to a frenetic and unpredictable release schedule rather than a stable cadence.

In the AI era, the pace of change is so fast that by the time academic studies on "what works" are published, the underlying technology is already outdated. Leaders must therefore rely on conviction and rapid experimentation rather than waiting for validated evidence to act.

The rapid pace of change in AI renders long-term strategic planning ineffective. With foundational technology shifts occurring quarterly, companies must adopt a fluid approach. Strategy should focus on core principles and institutional memory, while remaining flexible enough to integrate new tech and iterate on tactics constantly.

The EU AI Act mandates compliance with "harmonized standards" for high-risk AI systems. However, many of these essential standards are still undeveloped, creating a high-stakes race for standards bodies to define the rules before the regulation is fully enforceable, effectively "gesturing to things that have not yet been developed."

Unlike traditional internet protocols that matured slowly, AI technologies are advancing at an exponential rate. An AI standards body must operate at a much higher velocity. The Agentic AI Foundation is structured to facilitate this rapid, "dog years" pace of development, which is essential to remain relevant.

Traditional education systems, with curriculum changes taking five or more years, are fundamentally incompatible with the rapid evolution of AI. Analyst Johan Falk argues that building systemic agility is the most critical and difficult challenge for education leaders.

The AI space moves too quickly for slow, consensus-driven standards bodies like the IETF. MCP instead opted for a traditional open-source model with a small core maintainer group that makes final decisions. This hybrid of consensus and benevolent dictatorship enables the rapid iteration necessary to keep pace with AI advancements.

Instead of waiting for formal bodies, Google DeepMind is developing and open-sourcing its own technical standards for AI agents. This strategy aims to solve immediate interoperability problems and establish a market-wide de facto standard through rapid, widespread adoption, bypassing slower, formal channels.