Michael Horowitz, UPenn: Governing AI That's Designed to Kill

The Road to Accountable AI · Mar 26, 2026

UPenn expert Michael Horowitz demystifies military AI, from autonomous weapons to the Pentagon's accountability frameworks and the Anthropic clash.

Anthropic's Pentagon Dispute Is About Technical Readiness, Not Moral Opposition

Anthropic does not have a philosophical objection to autonomous weapons. Its controversial stance is that its LLM, Claude, is not yet reliable enough for such high-stakes tasks. The company is willing to work with the Pentagon to improve it, making the conflict a technical disagreement, not a moral one.

Military AI Is Primarily for Data Analysis, Not Autonomous Weapons

The military's AI use is overwhelmingly focused on non-lethal applications like logistics and processing intelligence data. The 'pointy end' of autonomous weapons represents just one small category within a much broader AI strategy that mirrors corporate use cases.

The Pentagon Uses 'Human Responsibility' Doctrine to Avoid an AI Accountability Gap

To prevent a scenario where 'the algorithm did it,' the U.S. military relies on the legal principle of 'human responsibility for the use of force.' This ensures a specific commander is always accountable for deploying any weapon, autonomous or not, sidestepping the accountability gap that worries AI ethicists.

Ukraine's 'Last Mile Autonomy' Is a Tactical Response to Electronic Warfare

Ukraine is pioneering 'last mile autonomy' not as a strategic push for automation, but as a tactical necessity. When Russia jams the data link to a drone, the system can autonomously complete the final leg of its attack on a pre-identified target, countering electronic warfare.

Autonomous Weapons Have Been Deployed by 40 Militaries for Over 40 Years

The public fear of 'killer robots' overlooks history. Systems like the U.S. Navy's Phalanx CIWS, used since the 1980s by dozens of countries, can autonomously select and engage incoming threats. The current debate is about the sophistication of the algorithms, not the concept itself.

Controversies Over Military AI Can Mask Deeper 'Rules of Engagement' Issues

Debates over systems like Israel's 'Lavender' often focus on the AI. However, the more critical issue may be the human-defined 'rules of engagement'—specifically, what level of algorithmic confidence (e.g., 55% accuracy) leadership deems acceptable to authorize a strike. This is a policy problem, not just a technology one.

The True Risk of China's Military AI Is Its Autocratic System, Not Its Stated Doctrine

While China's official doctrine on responsible military AI appears similar to that of the U.S., the real concern stems from its political structure. An autocratic regime's incentive to centralize power by removing human decision-makers could lead it to deploy unsafe AI systems, regardless of official policy.
