We scan new podcasts and send you the top 5 insights daily.
A study found that military trainees are substantially less prone to "automation bias"—the tendency to over-trust AI—than their civilian peers. Their training in high-stakes decision-making and warfighting appears to instill a healthy skepticism and caution that mitigates this cognitive bias.
Leaders are often trapped "inside the box" of their own assumptions when making critical decisions. By providing AI with context and assigning it an expert role (e.g., "world-class chief product officer"), you can prompt it to ask probing questions that reveal your biases and lead to more objective, defensible outcomes.
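To make this concrete, here is a minimal sketch of the role-plus-context prompting pattern using the OpenAI Python client. The model name, role wording, and product scenario are illustrative assumptions, not details from the episode.

```python
# Minimal sketch: assign the AI an expert role and feed it decision context,
# then ask it to probe the decision rather than simply endorse it.
# Assumes the OpenAI Python client (openai>=1.0); OPENAI_API_KEY is read
# from the environment. Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a world-class chief product officer. "
    "Before offering any recommendation, ask probing questions that "
    "surface hidden assumptions and biases in the plan below."
)

decision_context = (
    "We plan to sunset our free tier next quarter to push users "
    "toward paid plans. Pressure-test this decision."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": decision_context},
    ],
)
print(response.choices[0].message.content)
```

The key move is in the system prompt: instead of requesting an answer, it instructs the model to interrogate the decision first, which is what surfaces the assumptions you were "inside the box" of.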
An Army Ranger's decision not to shoot a potential threat came down to the fact that the man was singing, behavior wildly out of character for an enemy scout. The episode underscores the broad contextual judgment that current autonomous weapons lack, and the life-or-death stakes of getting these decisions right.
A key challenge in AI adoption is not technological limitation but human over-reliance. "Automation bias" occurs when people accept AI outputs without critical evaluation. This failure to scrutinize AI suggestions can lead to significant errors that a human check would have caught, making user training and verification processes essential.
While AI can inherit biases from training data, those datasets can be audited, benchmarked, and corrected. In contrast, uncovering and remedying the complex cognitive biases of a human judge is far more difficult and less systematic, making algorithmic fairness a potentially more solvable problem.
In a study comparing military captains and generals, the captains, as relative novices, used data to confirm their initial strategy. The more experienced generals used the same data to question theirs, treating intuition as a starting point for inquiry, not a conclusion.
The military doesn't need to invent safety protocols for AI from scratch. Its deeply ingrained culture of checks and balances, rigorous training, rules of engagement, and hierarchical approvals serve as powerful, pre-existing guardrails against the risks of imperfect autonomous systems.
Don't blindly trust AI. The correct mental model is to view it as a super-smart intern fresh out of school. It has vast knowledge but no real-world experience, so its work requires constant verification, code reviews, and a human-in-the-loop process to catch errors.
Resistance to AI in the workplace is often misdiagnosed as fear of technology. It is more accurately understood as an individual's rational caution about institutional change and the career risk of championing automation that could alter their own or their colleagues' roles.
A recent study found that AI assistants actually slowed down programmers working on complex codebases. More importantly, the programmers mistakenly believed the AI was speeding them up. This suggests a general human bias towards overestimating AI's current effectiveness, which could lead to flawed projections about future progress.
A study found that evaluators rated AI-generated research ideas more highly than those from grad students. However, when the experiments were actually run, the human ideas produced superior results. This points to a bias toward favoring AI's articulate proposals over more substantively promising human intuition.