Counterintuitively, Anduril views AI and autonomy not as an ethical liability, but as a way to better adhere to the ancient principles of Just War Theory. The goal is to increase precision and discrimination, reducing collateral damage and removing humans from dangerous jobs, thereby making warfare *more* ethical.

Related Insights

Current AI alignment focuses on how AI should treat humans. A more stable paradigm is "bidirectional alignment," which also asks what moral obligations humans have toward potentially conscious AIs. Neglecting this could create AIs that rationally see humans as a threat due to perceived mistreatment.

The project of creating AI that 'learns to be good' presupposes that morality is a real, discoverable feature of the world, not just a social construct. This moral realist stance posits that moral progress is possible (e.g., abolition of slavery) and that arrogance—the belief one has already perfected morality—is a primary moral error to be avoided in AI design.

The creation of potentially harmful technology, like AI-powered bot farms, is framed as a necessary evil. The argument is that for the US to govern and control such technology, it must lead its development, preventing foreign adversaries from dominating a technology that has already 'wreaked havoc.'

Anduril's autonomous Fury fighter jet flies alongside manned aircraft as a force multiplier. It extends the pilot's sensor and weapons range while taking on high-risk maneuvers. This allows for strategies that involve sacrificing autonomous assets to gain an advantage, without the ethical problem of losing human lives.

AI companies engage in "safety revisionism," shifting the definition of safety from preventing tangible harm to abstract concepts like "alignment" or future "existential risks." This tactic allows their inherently inaccurate models to bypass the traditional, rigorous safety standards required for defense and other critical systems.

The classic "trolley problem" will become a product differentiator for autonomous vehicles. Car manufacturers will have to encode specific values—such as prioritizing passenger versus pedestrian safety—into their AI, creating a competitive market where consumers choose a vehicle based on its moral code.

New technology can ignite violent conflict by making ideological differences concrete and non-negotiable. The printing press did this with religion, leading to one of Europe's bloodiest wars. AI could do the same by forcing humanity to confront divisive questions like transhumanism and the definition of humanity, potentially leading to similar strife.

Countering the common narrative, Anduril views AI in defense as the next step in the application of Just War Theory. The goal is to enhance accuracy, reduce collateral damage, and take soldiers out of harm's way, continuing a historical military trend away from indiscriminate lethality and toward surgical precision.

As autonomous weapon systems become increasingly lethal, the battlefield will become too dangerous for human soldiers. The founder of Allen Control Systems argues that conflict will transform into 'robot on robot action,' where victory is determined not by soldiers but by which nation can produce the most effective systems at the lowest cost.

The rise of drones is more than an incremental improvement; it's a paradigm shift. Warfare is moving from manned systems, where lives are always at risk, to autonomous ones, where mission success hinges on technological reliability. This changes cost-benefit analyses and reduces direct human exposure in conflict.