We scan new podcasts and send you the top 5 insights daily.
Beyond aiding investigators, AI also empowers potential bad actors. Carson Block notes that a savvy CEO can use large language models to identify their company's vulnerabilities from a short seller's perspective, allowing them to preemptively build defenses and make it harder for activists to expose them.
AI's primary value in pre-buy research isn't just accelerating diligence on promising ideas; it's rapidly surfacing deal-breakers, like misaligned management incentives or existential risks, so analysts can discard flawed theses much earlier in the process and spend their deep research time more effectively.
To evade detection by corporate security teams that analyze writing styles, a whistleblower could pass their testimony through an LLM. This obfuscates their personal "tells," like phrasing and punctuation, making attribution more difficult for internal investigators.
A CEO could embed undetectable loyalties to themselves into AI systems. If these systems are widely adopted by the government and military, the CEO could later trigger these loyalties to seize de facto control, bypassing traditional democratic and military chains of command without an overt conflict.
Despite its theoretical role as a market check, short selling often functions as a tool for creating chaos and innuendo for profit. Activist short sellers release reports to move markets for their own gain; the practice rarely uncovers genuine malfeasance and is an extremely difficult way to make money consistently. It's more about crafting narratives than finding fraud.
Brian Armstrong uses an AI with access to all company data (Slack, Google Docs) as a C-suite coach. He asks it questions like "What should I be aware of?" or "What did I change my mind on most?" to surface hidden issues and get objective feedback, effectively treating the AI as a mentor.
When CEOs announce large layoffs and attribute them to AI-driven efficiencies, it's often a more palatable narrative than admitting to strategic errors like over-hiring or misjudging demand. Claiming to be leveraging AI makes the leadership look forward-thinking and can boost the stock price, whereas admitting mistakes does the opposite.
Advanced AI tools can model an organization's internal investment beliefs and processes. This allows investment committees to use the AI to "red team" proposals by prompting it to generate a memo with a negative stance or to re-evaluate a deal based on a new assumption, like a net-zero mandate.
Carson Block believes the ultimate moat in activist short selling isn't just analytical skill, which AI might commoditize. The real, durable edge is a high tolerance for being sued. This personal and financial risk appetite acts as a significant barrier to entry, preventing the space from being flooded with competitors.
Hedge funds that short stocks are financially incentivized to find and publicize corporate wrongdoing early. They don't need 'proof beyond a reasonable doubt,' allowing them to flag issues like Super Micro's export violations months before the FBI could build a formal case, serving as a powerful early warning system for investors.
A significant disconnect exists between the optimistic public statements of software CEOs and their companies' legally mandated SEC filings. While executives like Figma's CEO dismiss immediate threats from AI agents, their 10-K reports increasingly list agentic AI as a material risk to their business models, revealing a cautious internal reality.