Every research paper presented at major conferences is paired with an official critic, or "discussant." This person's job is to translate the work for a broader audience, identify key takeaways, and provide constructive, public feedback, ensuring rigor and clarity.
Tyler Cowen argues that the AI risk community's reluctance to engage in formal peer review weakens its arguments. Unlike climate science, which built a robust peer-reviewed literature, the movement relies on online discourse that lacks the rigorous scrutiny needed to build credible scientific consensus.
John Martinis reveals that the Nobel system uses specialized symposiums not just to assess a scientific field's importance, but also to vet potential laureates. These events allow the committee to evaluate candidates' presentation skills and suitability as public representatives for science, acting as an informal screening process.
Before publishing, feed your work to an AI and ask it to find all potential criticisms and holes in your reasoning. This pre-publication stress test helps identify blind spots you would otherwise miss, leading to stronger, more defensible arguments.
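One way to sketch this stress test in code: the helper below builds a chat-style prompt that asks a model to attack a draft. The prompt wording and the commented-out `ask_model` wrapper are illustrative assumptions, not a specific product's API.

```python
# Sketch of a pre-publication stress test. The prompt text and the
# `ask_model` helper are placeholders for whatever LLM API you use.
CRITIC_PROMPT = (
    "You are a hostile but fair reviewer. List every potential criticism, "
    "hole in the reasoning, and missing counterargument in the draft below. "
    "Number each point.\n\nDRAFT:\n{draft}"
)

def build_stress_test(draft: str) -> list[dict]:
    """Build a chat-style message list asking the model to attack the draft."""
    return [
        {"role": "system", "content": "You find flaws; you do not praise."},
        {"role": "user", "content": CRITIC_PROMPT.format(draft=draft)},
    ]

# messages = build_stress_test(open("paper.md").read())
# reply = ask_model(messages)  # hypothetical wrapper around your LLM client
```

Running the same messages through two or three different models, then merging the criticisms, tends to surface more blind spots than a single pass.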
A key feature making economics research robust is its structure: authors not only present their thesis and evidence but also anticipate and systematically rule out competing explanations for the same outcome. This intellectual honesty is a model other social sciences could adopt to improve credibility.
The economics profession is increasingly aware that a harsh seminar climate stifles risk-taking and learning. As a result, there is a conscious shift toward maintaining a more civilized and constructive environment during public research presentations, moving away from the tradition of public humiliation.
To ensure rigorous vetting of ideas, create an environment of friendly competition between teams. This structure naturally motivates each group to find flaws in the other's thinking, a process that might be socially awkward in a purely collaborative setting. The result is a more robust, error-checked outcome.
The "99% Invisible" podcast subjects every script to a live table read where the entire staff provides hundreds of written comments in a shared document. This process is intensely rigorous but culturally gentle, focusing on elevating the story without personal criticism.
For AI systems to be adopted in scientific labs, they must be interpretable. Researchers need to understand the 'why' behind an AI's experimental plan to validate and trust the process, making interpretability a more critical feature than raw predictive power.
These events are not just academic exercises. They are where initial, data-driven ideas that will shape future monetary and economic policy are first presented, critiqued, and refined by peers, serving as the first draft of policy debates.
Define different agents (e.g., Designer, Engineer, Executive) with unique instructions and perspectives, then task them with reviewing a document in parallel. This generates diverse, structured feedback that mimics a real-world team review, surfacing potential issues from multiple viewpoints simultaneously.
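The pattern above can be sketched as a small fan-out: each persona gets its own instructions, the reviews run concurrently, and the results come back keyed by role. The persona prompts and the `ask_model` stub are assumptions; in practice the stub would wrap a real LLM call.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical persona instructions; tune these to your team's concerns.
PERSONAS = {
    "Designer": "Critique the document's clarity, structure, and user impact.",
    "Engineer": "Critique feasibility, edge cases, and technical accuracy.",
    "Executive": "Critique strategic fit, risk, and resource implications.",
}

def ask_model(instructions: str, document: str) -> str:
    """Placeholder: swap in a real LLM API call that sends `instructions`
    as the system prompt and `document` as the user message."""
    return f"stub review ({len(document)} chars)"

def parallel_review(document: str) -> dict[str, str]:
    """Run each persona's review concurrently and collect results by role."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(ask_model, prompt, document)
                   for name, prompt in PERSONAS.items()}
        return {name: f.result() for name, f in futures.items()}
```

A final aggregation step (feeding all three reviews back to one model and asking it to deduplicate and rank the issues) often makes the combined feedback easier to act on.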