
For the military, the toughest AI adoption challenge isn't on offense but defense: overcoming institutional resistance to granting AI the autonomy needed to defend networks at machine speed. A defense that alerts a human and waits for a decision is too slow, creating a major bureaucratic and command-and-control dilemma.

Related Insights

The military's primary incentive is to use weapons that are effective and reliable, as soldiers' lives depend on it. This inherent conservatism acts as a strong filter against deploying unproven or unpredictable AI systems, making them slower, not faster, to adopt bleeding-edge technology in life-or-death situations.

Military bureaucracy and resistance to new technology may create a "slow, slow, fast" adoption pattern: long periods of inaction followed by a sudden rush. The slow phase prevents the development of a robust vetting culture, leaving institutions vulnerable when competitive pressure abruptly forces rapid, less-careful deployment of powerful AI systems.

Kevin Mandia predicts that within two years, all cyberattacks will be AI-driven. The sheer speed of these threats makes human-in-the-loop defense obsolete; the only viable response is a fully autonomous, AI-powered defensive system to counter AI-driven threats.

Even if AI technology advances overnight, a state's ability to act on it is slowed by institutional factors. The need for testing, updating military doctrine, and securing political approval for a high-stakes action means that institutional adaptation will always lag technological progress.

The greatest risk to integrating AI in military systems isn't the technology itself, but the potential for one high-profile failure—a safety event or cyber breach—to trigger a massive regulatory overcorrection, pushing the entire field backward and ceding the advantage to adversaries.

Defense tech firm Smack Technologies clarifies that the objective is not to remove humans entirely. Instead, AI should handle low-value tasks, freeing personnel for critical, high-value decisions. This framework, 'intelligent autonomy,' orchestrates manned and unmanned systems while keeping humans in the loop.

The Department of Defense (DoD) doesn't need a "wake-up call" about AI's importance; it needs to "get out of bed." The critical failure is not a lack of awareness but deep-seated institutional inertia that prevents the urgent action and implementation required to build capability.

Adversaries are using AI to create an "asymptotic attack pressure" with novel exploits moving at machine speed. Traditional human-speed defense is insufficient. The solution is an autonomous defensive system that mirrors the attackers, creating a corresponding counter-pressure to analyze threats and respond in real-time.

Shield AI identifies the key problem in defense tech as simultaneously achieving high performance, ensuring high levels of safety and assurance, and maintaining rapid development cycles. Historically, systems had to trade these off, but modern defense requires solving for all three concurrently.

The policy of keeping a human decision-maker 'in the loop' for military AI is a potential failure point. If the human operator is not meaningfully engaged and simply rubber-stamps AI-generated recommendations without critical oversight, the system is de facto autonomous, creating a false sense of security and accountability.