The Pentagon may defend controversial "double tap" strikes, in which a second attack kills survivors at sea, by arguing that the second strike's purpose is to destroy wreckage that poses a navigational hazard. This reframes the killing of survivors as incidental, an attempt to sidestep war crime accusations.

Related Insights

Counterintuitively, Anduril views AI and autonomy not as an ethical liability, but as a way to better adhere to the ancient principles of Just War Theory. The goal is to increase precision and discrimination, reducing collateral damage and removing humans from dangerous jobs, thereby making warfare *more* ethical.

The White House and Pentagon are deliberately shifting blame for a controversial military strike onto a subordinate admiral. This tactic insulates political leaders from accountability, particularly the Secretary of Defense, whose rocky tenure and past blunders created the context for such controversial actions.

In global conflicts, a nation's power dictates its actions and outcomes, not its moral righteousness. History shows that powerful nations, such as the U.S. in its use of nuclear weapons, operate beyond conventional moral constraints, making an understanding of power dynamics more critical than moralizing.

AI companies engage in "safety revisionism," shifting the definition from preventing tangible harm to abstract concepts like "alignment" or future "existential risks." This tactic allows their inherently inaccurate models to bypass the traditional, rigorous safety standards required for defense and other critical systems.

The controversy surrounding a second drone strike to eliminate survivors highlights a flawed moral calculus. Public objection focuses on the *inefficiency* of the first strike, not the lethal action itself. This inconsistent reasoning avoids the fundamental ethical question of whether the strike was justified in the first place.

An ideologue, even an anarchist advocating against the state, may support a massive state action if it serves a higher strategic purpose—in this case, disrupting a system they oppose. The perceived hypocrisy is dismissed as irrelevant when compared to the desired outcome, framing it as a solution, not a preferred method of governance.

Countering the common narrative, Anduril frames AI in defense as a continuation of the historical military trend away from indiscriminate lethality and toward surgical precision: the next step in Just War Theory, meant to enhance accuracy, reduce collateral damage, and take soldiers out of harm's way.

A president can legally initiate military actions like a blockade without congressional approval by first designating the target regime as a "Foreign Terrorist Organization." This provides a separate legal playbook and set of executive powers, circumventing the formal declaration of war process.

The War Powers Resolution's 60-day limit is triggered by "hostilities." The Obama and Trump administrations exploited the term's ambiguity, arguing that military actions like drone strikes against an enemy that cannot retaliate do not count as "hostilities," thus avoiding the need for congressional authorization.

When complex situations are reduced to a single metric, strategy shifts from achieving the original goal to maximizing the metric itself, a pattern often described as Goodhart's Law. During the Vietnam War, using "body counts" as a proxy for success led to military decisions designed to increase casualties, not to win the war.