Fields like economics become ineffective when they prioritize conforming to disciplinary norms—like mathematical modeling—over solving complex, real-world problems. This professionalization creates monocultures where researchers focus on what is publishable within their field's narrow framework, rather than collaborating across disciplines to generate useful knowledge for issues like prison reform.
Nobel laureate Robert Solow critiques modern dynamic stochastic general equilibrium (DSGE) macroeconomic models for being overly abstract and failing to represent an economy with diverse actors and conflicting interests. By modeling a single representative agent, he argues, the field has detached itself from solving real-world economic problems.
Post-WWII, economists pursued mathematical rigor by modeling human behavior as perfectly rational (i.e., 'maximizing'). This was a convenient simplification for building models, not an accurate depiction of how people actually make decisions, which are often messy and imperfect.
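As an illustrative sketch (standard textbook consumer theory, not a formulation quoted from the source), the "maximizing" assumption typically casts each agent as solving a clean optimization problem:

\[
\max_{x} \; U(x) \quad \text{subject to} \quad p \cdot x \le m,
\]

where \(U\) is a known utility function, \(x\) a bundle of choices, \(p\) prices, and \(m\) a budget. The critique is that actual human decisions rarely resemble the tidy solution to such a problem.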
Economist Michael Greenstone recounts how his academic communication style, efficient among peers, was perceived as abrasive and exclusionary in government, nearly getting him fired. To have real-world impact, experts must translate specialized jargon into accessible ideas, a skill academia doesn't teach or reward.
Focusing exclusively on one industry makes you an expert in a silo but blind to broader market shifts and innovations from other sectors. This siloed focus breeds intellectual laziness and limits your ability to bring fresh perspectives to clients, making you less valuable and more replaceable than a well-rounded expert who can cross-pollinate ideas.
Experts often view problems through the narrow lens of their own discipline, a cognitive bias known as the "expertise trap" or Maslow's hammer: if all you have is a hammer, everything looks like a nail. This limits the tools and perspectives applied, leading to suboptimal solutions. The remedy is intentional collaboration with people who bring different functional toolkits.
Schooling has become a victim of Goodhart's Law. When a measure (grades, test scores) becomes a target, it ceases to be a good measure. Students become experts at 'doing school' — maximizing the signal — which is a separate skill from the actual creative and intellectual capabilities the system is supposed to foster.
Philosophy should have been central to AI's creation, but its academic siloing led to inaction. Instead of engaging with technology and building, philosophers remained focused on isolated cogitation. AI emerged from engineers who asked "what can I make?" rather than only "what is a mind?".
Much reinforcement learning (RL) research from 2015-2022 has not proven useful in practice because academia rewards complex, math-heavy ideas. Such methods offer implicit "knobs" for overfitting benchmarks, while simpler, more generalizable approaches that may lack intellectual novelty are ignored.
When complex entities like universities are judged by simplified rankings (e.g., U.S. News), they learn to manipulate the specific inputs to the ranking formula. This optimizes their score without necessarily making them better institutions, substituting the appearance of improvement for the genuine article.
Formally trained experts are often constrained by the fear of reputational damage if they propose "crazy" ideas. An outsider or "hacker" without those credentials is free to ask naive but fundamental questions that can challenge core assumptions and unlock new avenues of thinking.