The government's core model for funding, oversight, and talent management is a relic of the post-WWII industrial era. Slapping modern technology like AI onto this outdated 'operating system' is a recipe for failure. A fundamental backend overhaul is required, not just a frontend facelift.
The failure of government systems isn't a 'set it and forget it' problem; it's a 'set it and accrete' problem. New rules, processes, and technologies are layered on top of old ones decade after decade, with nothing ever subtracted, leaving systems that are brittle and unmanageable.
AI is more than a tool for modernizing government services. It's a disruptive force that changes society's needs, compelling government to ask if its existing programs are even the right ones. For instance, is unemployment insurance the correct response to permanent, AI-driven job displacement?
A common mistake leaders make is buying powerful AI tools and forcing them into outdated processes, leading to failed pilots and wasted money. True transformation requires reimagining how people think, collaborate, and work *before* inserting revolutionary technology, not after.
The narrative blaming AI for job insecurity is misdirected. The deeper cause is decades of government promising services it cannot efficiently deliver, fueling inflation and distorting markets. AI is merely a convenient, visible target for problems rooted in policy.
The 'FDA for AI' analogy is flawed because the FDA's rigid, one-drug-one-disease model is ill-suited for a general-purpose technology. This structure struggles with modern personalized medicine, and a similar top-down regime for AI could embed faulty assumptions, stifling innovation and adaptability for a rapidly evolving field.
Technology adds value only when it overcomes a constraint. But organizations build rules and processes (e.g., annual budgeting) to cope with past limitations (e.g., slow data collection). Implementing powerful new technology like AI will fail to deliver ROI if those legacy rules aren't changed as well.
The Department of Defense (DoD) doesn't need a "wake-up call" about AI's importance; it needs to "get out of bed." The critical failure is not a lack of awareness but deep-seated institutional inertia that prevents the urgent action and implementation required to build capability.
Adopting AI acts as a powerful diagnostic tool, exposing an organization's "ugly underbelly." It highlights pre-existing weaknesses in company culture, inter-departmental collaboration, data quality, and the tech stack. Success requires fixing these fundamentals first.
AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern systems, stem from an inability to model human behavior, data manipulation, and unexpected events.
The excitement around AI capabilities often masks the real hurdle to enterprise adoption: infrastructure. Success is not determined by the model's sophistication, but by first solving foundational problems of security, cost control, and data integration. This requires a shift from an application-centric to an infrastructure-first mindset.