To navigate millions of documents, journalists trained a large language model to analyze and score Jeffrey Epstein's emails based on how disturbing they would be to an average reader. This AI-driven approach filtered the massive dataset down to 1,500 highly relevant email threads, showcasing a new method for investigative journalism.
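
The piece doesn't publish the journalists' actual pipeline, so the following is only a minimal sketch of the general technique: have a model score each thread, then keep the ones above a cutoff. The model name, file format, and threshold are assumptions, and the OpenAI Python SDK is used purely for illustration.

```python
# Minimal sketch of LLM-based triage: score each email thread 1-10 for how
# disturbing it would read to an average person, then keep the top scorers.
# Model name, file path, and the score threshold are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def score_thread(thread_text: str) -> int:
    """Ask the model for a single 1-10 'disturbing to an average reader' score."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You rate email threads on a 1-10 scale for how "
                        "disturbing an average reader would find them. "
                        "Reply with only the integer."},
            {"role": "user", "content": thread_text[:12000]},  # truncate very long threads
        ],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip())

with open("threads.jsonl") as f:               # one {"id": ..., "text": ...} per line
    threads = [json.loads(line) for line in f]

scored = [(t["id"], score_thread(t["text"])) for t in threads]
flagged = [tid for tid, score in scored if score >= 7]   # threshold is arbitrary
print(f"{len(flagged)} threads flagged for human review")
```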

Related Insights

To evade detection by corporate security teams that analyze writing styles, a whistleblower could pass their testimony through an LLM. This obfuscates their personal "tells," like phrasing and punctuation, making attribution more difficult for internal investigators.
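
As an illustration of the idea (not vetted operational-security advice), a rewrite pass like the sketch below strips stylistic fingerprints while preserving content; the model and prompt wording are assumptions.

```python
# Sketch of stylometric "laundering": rewrite text so the content survives but
# idiosyncratic phrasing, punctuation, and cadence do not. The model name and
# prompt wording are assumptions, not a vetted opsec procedure.
import anthropic

client = anthropic.Anthropic()

def neutralize_style(text: str) -> str:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": (
                "Rewrite the following statement in plain, neutral prose. "
                "Preserve every factual claim exactly, but remove distinctive "
                "word choices, sentence rhythms, and punctuation habits:\n\n"
                + text
            ),
        }],
    )
    return message.content[0].text
```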

The documents suggest that for the elite circles surrounding Epstein, blackmail was not a rare, sinister act but a commonplace, almost casual, mechanism for gaining leverage and maintaining influence over powerful individuals.

Instead of just grouping similar news stories, Kevin Rose created an AI-powered "Gravity Engine." This system scores content clusters on qualitative dimensions like "Industry Impact," "Novelty," and "Builder Relevance," providing a sophisticated editorial layer to surface what truly matters.
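
The Gravity Engine's internals aren't shown here, so the sketch below only illustrates the shape of the idea: ask a model to score a story cluster on the named dimensions as structured JSON, then combine them. The schema, the unweighted average, and the model choice are all assumptions.

```python
# Illustrative sketch of scoring a story cluster on editorial dimensions.
# The dimension names come from the description above; everything else
# (model, weighting, schema) is an assumption.
import json
from openai import OpenAI

client = OpenAI()
DIMENSIONS = ["industry_impact", "novelty", "builder_relevance"]

def score_cluster(summary: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Score the news cluster 0-10 on each of: "
                        + ", ".join(DIMENSIONS)
                        + '. Return JSON like {"industry_impact": 7, ...}.'},
            {"role": "user", "content": summary},
        ],
    )
    scores = json.loads(response.choices[0].message.content)
    # Simple unweighted "gravity" score; the real weighting is unknown.
    scores["gravity"] = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return scores
```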

Journalist Casey Newton uses AI tools not to write his columns, but to fact-check them after they're written. He finds that feeding his completed text into an LLM is a surprisingly effective way to catch factual errors, which he attributes to the significant improvement in model capability over the past year.
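
Newton's exact workflow isn't described, but the core move is simple enough to sketch: hand the finished draft to a model and ask it to list the claims it suspects are wrong or unsourced. The model and prompt below are assumptions about one possible setup.

```python
# Sketch of an after-the-fact factual review pass: ask a model to list the
# claims in a finished draft that it believes are wrong or need verification.
from openai import OpenAI

client = OpenAI()

def fact_check(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a fact-checker. List every specific, checkable "
                        "claim in the article that is likely wrong or needs a "
                        "source, with a one-line reason for each."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```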

By feeding years of iMessage data to Claude Code, a user demonstrated that AI can extract deep relational insights. The model identified emotional openness, changes in conversational topics over time, and even subtle grammatical patterns, effectively creating a 'relational intelligence' profile from unstructured text.
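
The analysis described ran through Claude Code, but the data-wrangling half is easy to sketch on its own: on macOS, Messages history lives in a SQLite database that can be exported and chunked for a model. The path, schema, and date conversion below follow the commonly documented layout and may differ across OS versions.

```python
# Sketch of extracting an iMessage history for analysis. On macOS the Messages
# archive is a SQLite database; message.date is commonly stored as nanoseconds
# since 2001-01-01, hence the conversion below. Schema details may vary.
import sqlite3
from pathlib import Path

DB_PATH = Path.home() / "Library" / "Messages" / "chat.db"

def export_messages(contact_handle: str) -> list[tuple[str, str]]:
    """Return (timestamp, text) pairs for one conversation partner."""
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        """
        SELECT datetime(message.date / 1000000000 + 978307200, 'unixepoch'),
               message.text
        FROM message
        JOIN handle ON message.handle_id = handle.ROWID
        WHERE handle.id = ? AND message.text IS NOT NULL
        ORDER BY message.date
        """,
        (contact_handle,),
    ).fetchall()
    conn.close()
    return rows   # chunk these and feed them to the model of your choice
```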

Data analysis of Jeffrey Epstein's emails reveals that his network was not confined to his own world of finance. It was exceptionally broad, including elites from science, technology, and law. A quarter of his non-staff contacts had their own Wikipedia pages, indicating a strategic cultivation of influence across various power centers.
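
A figure like "a quarter of non-staff contacts have Wikipedia pages" can be approximated by checking each name against the public MediaWiki API, as in the rough sketch below. Exact-title matching is crude (name ambiguity, disambiguation pages), and the contact names shown are placeholders.

```python
# Sketch of estimating how many contacts have their own Wikipedia page, using
# the public MediaWiki query API. Treat the result as a rough upper bound.
import requests

API = "https://en.wikipedia.org/w/api.php"

def has_wikipedia_page(name: str) -> bool:
    params = {"action": "query", "titles": name, "format": "json", "redirects": 1}
    pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
    return not any("missing" in page for page in pages.values())

contacts = ["Example Contact One", "Example Contact Two"]   # placeholder names
share = sum(has_wikipedia_page(n) for n in contacts) / len(contacts)
print(f"{share:.0%} of contacts matched a Wikipedia article title")
```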

California's CalMatters uses an AI called 'Tip Sheet' to analyze public records of politicians, including speeches, votes, and campaign contributions. The AI flags anomalies and potential stories, which it then provides exclusively to human journalists to investigate, creating a powerful human-AI partnership.
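
Tip Sheet's internals aren't detailed here, so the snippet below is a stand-in illustration of anomaly flagging in general: a simple leave-one-out z-score over quarterly contribution totals, not the tool's actual method.

```python
# Stand-in illustration of anomaly flagging on public records: mark any
# campaign-contribution total that sits far outside the politician's own
# history. Generic leave-one-out z-score check, not a description of Tip Sheet.
from statistics import mean, stdev

def flag_anomalies(totals_by_quarter: dict[str, float], z_cutoff: float = 3.0) -> list[str]:
    """Flag quarters whose total is far outside the distribution of the others."""
    flagged = []
    for quarter, total in totals_by_quarter.items():
        others = [v for q, v in totals_by_quarter.items() if q != quarter]
        if len(others) < 3:
            continue                   # too little history to judge
        mu, sigma = mean(others), stdev(others)
        if sigma == 0:
            continue
        if abs(total - mu) / sigma > z_cutoff:
            flagged.append(quarter)
    return flagged

# Example: a sudden spike gets handed to a reporter to investigate.
print(flag_anomalies({"2023Q1": 12000, "2023Q2": 11500, "2023Q3": 13000,
                      "2023Q4": 12400, "2024Q1": 98000}))   # -> ['2024Q1']
```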

The mass release of Epstein documents, without a trusted institution to filter them, creates a justice problem. Trivial details (like being on an invite list) are over-punished through public shaming, while truly criminal behavior gets lost in the noise, leading to a "mushed together" outcome.

To codify a specific person's "taste" in writing, the team fed the DSPy framework a dataset of tweets with thumbs up/down ratings and explanations. DSPy then optimized a prompt, producing an AI "judge" that evaluates new content against that person's preferences with 76.5% accuracy.
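
The team's actual dataset and optimizer settings aren't shown, so this is only a minimal sketch of the general shape in DSPy, assuming the current dspy.LM / Signature / BootstrapFewShot API. The placeholder examples stand in for the rated tweets, and the 76.5% figure comes from the team's own evaluation, not from this code.

```python
# Minimal sketch of building a "taste judge" with DSPy. Signature fields,
# optimizer choice, and model are assumptions.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class TasteJudge(dspy.Signature):
    """Decide whether this tweet matches the target person's taste."""
    tweet: str = dspy.InputField()
    verdict: bool = dspy.OutputField(desc="True if the person would thumbs-up it")

judge = dspy.ChainOfThought(TasteJudge)

# Labeled examples: tweets with thumbs up/down ratings (explanations could be
# added as an extra field). Placeholder data shown here.
trainset = [
    dspy.Example(tweet="Shipped a tiny CLI that does one thing well.",
                 verdict=True).with_inputs("tweet"),
    dspy.Example(tweet="10 growth hacks you won't BELIEVE!",
                 verdict=False).with_inputs("tweet"),
]

def agreement(example, pred, trace=None):
    # Metric: does the judge's verdict match the human rating?
    return example.verdict == pred.verdict

# Let DSPy optimize the prompt and few-shot demos against the labeled preferences.
optimizer = dspy.BootstrapFewShot(metric=agreement)
compiled_judge = optimizer.compile(judge, trainset=trainset)

print(compiled_judge(tweet="A quiet deep-dive on a weird database bug.").verdict)
```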

Hunt reveals their initial, hand-built models were like a small net that missed most signals. The probabilistic approach of modern LLMs allowed them to build a vastly more effective system, exceeding their 5-6x improvement estimate by orders of magnitude.