Large Language Models are poor at predicting Supreme Court outcomes because they are trained on media coverage that increasingly, and incorrectly, portrays the court as a purely political body. The models absorb our own biased assumption that every case splits 6-3 along ideological lines, an outcome that is in fact rare.
A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.
AI models trained on sources like Wikipedia inherit their biases. Wikipedia's policy of not allowing citations from leading conservative publications means these viewpoints are systematically excluded from training data, creating an inherent left-leaning bias in the resulting AI models.
AI models reason well about Supreme Court cases by interpolating the vast body of public analysis written about them. For more obscure cases that lack this corpus of secondary commentary, the models' reasoning ability falls off dramatically, even when the primary case documents are available.
When AI systems are trained on historical data, such as past hiring or policing records, they learn and perpetuate existing societal biases. This creates a dangerous illusion of objectivity, where discriminatory outcomes are presented as neutral, data-driven "predictions" by an algorithm.
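To make that mechanism concrete, here is a minimal synthetic sketch (our illustration, not from the podcast): two groups have identical skill distributions, but the historical hiring labels held one group to a higher bar. A model trained on those records learns the double standard and reports it back as a neutral-looking score.

```python
# Synthetic illustration of bias laundering: the "objective" model
# simply reproduces the discrimination baked into its training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # two groups, 0 and 1
skill = rng.normal(0, 1, n)     # identical skill distribution for both

# Historical hiring decisions: same skill, but group 1 was held
# to a higher threshold (the encoded societal bias).
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

# Train a model on the biased records, with group as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model faithfully learns the double standard.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model hire rate {pred[group == g].mean():.2f}")
```

Nothing in the pipeline looks discriminatory, yet the disparity passes straight through, which is exactly the illusion of objectivity the insight describes.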
AI thrives in domains with fixed, written rules and searchable histories, like programming. In ambiguous areas like organizational conflict or political negotiation, where context is unwritten and lives in people's heads, its performance plummets. Its confident output masks this unreliability, posing a danger to decision-makers.
AI models are not optimized to find objective truth. They are trained on biased human data and reinforced to provide answers that satisfy the preferences of their creators. This means they inherently reflect the biases and goals of their trainers rather than an impartial reality.
The AI landscape won't be dominated by a single, monolithic LLM. Instead, models will fragment to serve specific markets, catering to different geographic, political, or business audiences. This will create inherent biases in each model, similar to how consumers choose different news channels today.
The Economist's AI tool, SCOTUSBOT, successfully predicted the outcome of a major Supreme Court tariff case, demonstrating AI's predictive power in law. It initially favored Trump, reversed its forecast after analyzing the case briefs, and grew even more confident in the reversal after processing the oral argument transcript.
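SCOTUSBOT's internals are not public, so the sketch below is only an illustration of the staged workflow the story describes: an initial lean that gets revised as the briefs, and then the oral argument transcript, are added to the model's context. The function `ask_llm` is a hypothetical stand-in for whatever model API the tool actually uses.

```python
# Illustrative sketch only: not The Economist's actual implementation.
def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this to a real chat-completion API."""
    raise NotImplementedError

def predict_outcome(case_summary: str, briefs: str = "", transcript: str = "") -> str:
    """Build one prompt from whatever case material is available so far."""
    prompt = (
        "Predict how the Supreme Court will rule in this case "
        "and state your confidence.\n\n"
        f"Case summary:\n{case_summary}\n"
    )
    if briefs:
        prompt += f"\nParty and amicus briefs:\n{briefs}\n"
    if transcript:
        prompt += f"\nOral argument transcript:\n{transcript}\n"
    return ask_llm(prompt)

# Staged forecasts, mirroring how the bot's view shifted at each step:
# initial = predict_outcome(summary)                      # first lean
# revised = predict_outcome(summary, briefs)              # reversed on the briefs
# final   = predict_outcome(summary, briefs, transcript)  # confidence increased
```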
All data inputs for AI are inherently biased (e.g., bullish management, bearish former employees). The most effective approach is not to de-bias the inputs but to use AI to compare and contrast these biased perspectives to form an independent conclusion.
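A rough sketch of that compare-and-contrast approach, using OpenAI's chat completions client as one concrete option (the source labels, prompt wording, and model choice are all illustrative, not from the podcast):

```python
# Minimal sketch: hand the model several biased sources, each tagged
# with its known slant, and ask for an independent synthesis.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Deliberately biased inputs; texts truncated for illustration.
perspectives = {
    "management call (bullish)": "Demand is strong and margins will expand...",
    "former employee (bearish)": "Attrition is high and the roadmap slipped...",
    "short-seller report (bearish)": "Revenue recognition looks aggressive...",
}

sources = "\n\n".join(f"[{label}]\n{text}" for label, text in perspectives.items())

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Each source below is biased; its slant is labeled. "
                    "Compare and contrast the sources, note where they agree "
                    "despite opposing incentives, and form an independent conclusion."},
        {"role": "user", "content": sources},
    ],
)
print(response.choices[0].message.content)
```

Tagging each source with its known slant, rather than trying to strip the bias out, is the point of the design: when sources with opposing incentives converge on the same fact, that convergence is strong evidence.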
Generative AI models are trained on existing human-generated text, causing them to reflect and amplify mainstream thought. When prompted on contrarian topics, they will either omit them or frame them as fringe ideas. AI is a tool for understanding the consensus view, not for generating truly original, non-consensus insights.