The requirement for human responsibility in the use of force is not a new concept created for AI. It is governed by long-standing international humanitarian law and existing military policies. These foundational legal structures apply to all weapons, from bows to AI-enabled drones, ensuring a commander is always accountable.

Related Insights

Regardless of an AI's capabilities, the human in the loop is always the final owner of the output. Your responsible AI principles must clearly state that using AI does not remove human agency or accountability for the work's accuracy and quality. This is critical for mitigating legal and reputational risks.

Counterintuitively, Anduril views AI and autonomy not as an ethical liability, but as a way to better adhere to the ancient principles of Just War Theory. The goal is to increase precision and discrimination, reducing collateral damage and removing humans from dangerous jobs, thereby making warfare *more* ethical.

Seemingly reasonable terms like 'no autonomous lethal weapons' are impossible for a private company to enforce. They require moral and legal judgments about warfare—like defining a civilian or collateral damage—that are the exclusive and complex domain of a sovereign government, not a tech vendor.

The debate around AI in warfare often misses that significant autonomy already exists. Systems like the radar-guided Phalanx gun and "fire-and-forget" missiles, which engage targets without further human supervision once activated or launched, have been standard for decades, representing a baseline of existing automation.

The military doesn't need to invent safety protocols for AI from scratch. Its deeply ingrained culture of checks and balances, rigorous training, rules of engagement, and hierarchical approvals serve as powerful, pre-existing guardrails against the risks of imperfect autonomous systems.

Defense tech firm Smack Technologies clarifies the objective is not to remove humans entirely. Instead, AI should handle low-value tasks to free up personnel for critical, high-value decisions. This framework, 'intelligent autonomy,' orchestrates manned and unmanned systems while keeping humans in the loop.
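
To make the 'intelligent autonomy' idea more concrete, here is a minimal Python sketch of the routing pattern described above: low-value tasks are handed to automation while high-value decisions are escalated to a human operator. The class names, task categories, and queues are illustrative assumptions for this sketch, not anything described in the podcast or drawn from a real system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class TaskValue(Enum):
    LOW = auto()    # routine: route planning, sensor sweeps, telemetry triage
    HIGH = auto()   # consequential: target nomination, weapons release


@dataclass
class Task:
    description: str
    value: TaskValue


def route_task(task: Task, human_queue: list, autonomy_queue: list) -> None:
    """Send low-value work to autonomous handling; escalate high-value
    decisions to a human operator for review and approval."""
    if task.value is TaskValue.HIGH:
        human_queue.append(task)       # human stays in the loop
    else:
        autonomy_queue.append(task)    # automation absorbs the routine load


# Example: only the high-value decision lands in the operator's queue.
human_queue, autonomy_queue = [], []
for t in [Task("plan patrol route", TaskValue.LOW),
          Task("nominate target for engagement", TaskValue.HIGH)]:
    route_task(t, human_queue, autonomy_queue)
```

The point of the sketch is the split itself: automation never gets a path to the high-value queue, so the human decision point is structural rather than optional.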

The expert clarifies that "fully autonomous weapons" is a confusing term not used in official policy. The military has used "autonomous weapon systems"—defined as systems that select and engage targets without further human intervention after activation—since the 1980s, such as radar-guided munitions.

Countering the common narrative, Anduril views AI in defense as the next step in Just War Theory. The goal is to enhance accuracy, reduce collateral damage, and take soldiers out of harm's way. This continues a historical military trend away from indiscriminate lethality towards surgical precision.

When a highly autonomous AI fails, the root cause is often not the technology itself, but the organization's lack of a pre-defined governance framework. High AI independence ruthlessly exposes any ambiguity in responsibility, liability, and oversight that was already present within the company.

In operations, AI models like Anthropic's Claude are used for intelligence analysis, summarizing media chatter, and running simulations to aid commanders. They are not used for autonomous targeting; any outputs go through layers of human review before influencing battlefield decisions.
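
As a rough illustration of that layered-review point, the sketch below gates an AI-generated summary behind multiple human sign-offs before it can be marked releasable. The reviewer roles, class structure, and method names are hypothetical, invented for this example rather than taken from any actual military workflow.

```python
from dataclasses import dataclass, field


@dataclass
class AnalysisProduct:
    """An AI-generated intelligence summary awaiting human review."""
    summary: str
    required_reviewers: tuple = ("analyst", "senior_analyst", "commander")
    approvals: set = field(default_factory=set)

    def approve(self, role: str) -> None:
        if role in self.required_reviewers:
            self.approvals.add(role)

    def releasable(self) -> bool:
        # The output may inform decisions only after every review layer signs off.
        return set(self.required_reviewers) <= self.approvals


product = AnalysisProduct(summary="Summary of open-source media chatter for region X.")
product.approve("analyst")
product.approve("senior_analyst")
assert not product.releasable()   # still pending commander review
product.approve("commander")
assert product.releasable()
```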