Goodfire AI found that for certain tasks, simple classifiers trained on a model's raw activations performed better than those using features from Sparse Autoencoders (SAEs). This surprising result challenges the assumption that SAEs always provide a cleaner concept space.
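As a rough illustration of that setup (not Goodfire's actual pipeline), the sketch below fits the same linear probe on raw activations and on SAE features and compares cross-validated accuracy. `activations`, `labels`, and the `sae_encode` placeholder are synthetic stand-ins you would replace with real model activations and a trained SAE encoder.

```python
# Hypothetical sketch: comparing linear probes on raw activations vs. SAE features.
# All data here is synthetic; swap in real activations, labels, and a trained SAE.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 768))   # [n_examples, d_model] raw residual-stream activations
labels = rng.integers(0, 2, size=1000)       # binary concept labels for the probing task

def sae_encode(x):
    """Placeholder for a trained SAE encoder: returns sparse feature activations."""
    W_enc = rng.normal(size=(768, 4096)) / np.sqrt(768)
    return np.maximum(x @ W_enc, 0.0)        # ReLU(x @ W_enc); biases omitted for brevity

raw_acc = cross_val_score(LogisticRegression(max_iter=1000), activations, labels, cv=5).mean()
sae_acc = cross_val_score(LogisticRegression(max_iter=1000), sae_encode(activations), labels, cv=5).mean()
print(f"raw-activation probe: {raw_acc:.3f}  |  SAE-feature probe: {sae_acc:.3f}")
```

With real data, the finding above corresponds to the raw-activation probe matching or beating the SAE-feature probe on some tasks.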

Related Insights

A helpful mental model distinguishes parameter-space edits from activation-space edits: fine-tuning with LoRA alters the model's weights (the "pipes"), while activation steering modifies the information flowing through them (the "water"). The analogy separates two distinct approaches to model control.
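A minimal PyTorch sketch of the distinction, using a single linear layer as a stand-in for one weight matrix: the LoRA-style edit changes the weights themselves, while a forward hook adds a steering vector to the activations at inference time. The rank, scale, and `steering_vector` are arbitrary placeholders.

```python
# Hypothetical sketch contrasting a parameter-space edit (LoRA-style low-rank weight update)
# with an activation-space edit (adding a steering vector via a forward hook).
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, rank = 64, 4
layer = nn.Linear(d_model, d_model, bias=False)   # stand-in for one weight matrix in a transformer

# --- Parameter-space edit: change the "pipes" (the weights themselves) ---
A = torch.randn(d_model, rank) * 0.01             # low-rank factors, as in LoRA
B = torch.randn(rank, d_model) * 0.01
with torch.no_grad():
    layer.weight += A @ B                          # merged low-rank update to the weight matrix

# --- Activation-space edit: change the "water" (what flows through the weights) ---
steering_vector = torch.randn(d_model)            # e.g. a direction associated with some concept

def steering_hook(module, inputs, output):
    return output + 3.0 * steering_vector         # nudge the activations at inference time

handle = layer.register_forward_hook(steering_hook)
x = torch.randn(1, d_model)
steered = layer(x)                                # output reflects both edits
handle.remove()                                   # the activation edit is easy to switch off
```

Note the asymmetry the analogy captures: the weight update persists in the model, while the steering hook only affects computation while it is attached.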

Goodfire frames interpretability as the core of the AI-human interface. One direction is intentional design: giving humans control over how models behave. The other, especially relevant for superhuman scientific models, is extracting the novel knowledge (e.g., new Alzheimer's biomarkers) that the AI discovers.

The ambition to fully reverse-engineer AI models into simple, understandable components is proving unrealistic, as their internal workings are messy and complex. Interpretability's practical value is therefore less about achieving guarantees and more about coarse-grained analysis, such as identifying when specific high-level capabilities are being used.

Just as biology deciphers the complex systems created by evolution, mechanistic interpretability seeks to understand the "how" inside neural networks. Instead of treating models as black boxes, it examines their internal parameters and activations to reverse-engineer how they work, moving beyond just measuring their external behavior.

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes a model's weights to understand its reasoning, is a bet that society and regulators will require explainable AI, which would make it a crucial future technology.

This advanced safety method moves beyond black-box filtering by analyzing a model's internal activations at runtime. It identifies which sub-components are associated with undesirable outputs, allowing for intervention or modification of the model's behavior *during* the generation process, rather than just after the fact.
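A hedged sketch of what runtime intervention can look like, again on a toy module: a forward hook scores each token's activation against a flagged concept direction and projects that direction out when the score crosses a threshold. The `concept_dir`, threshold, and module here are hypothetical placeholders, not Goodfire's actual method.

```python
# Hypothetical sketch of runtime activation monitoring and intervention:
# flag activations aligned with an undesirable concept direction and ablate it mid-forward-pass.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 64
block = nn.Linear(d_model, d_model)              # stand-in for one transformer sub-component
concept_dir = torch.randn(d_model)
concept_dir = concept_dir / concept_dir.norm()   # unit vector for the flagged concept
THRESHOLD = 2.0                                  # arbitrary trigger level for this sketch

def monitor_and_intervene(module, inputs, output):
    score = output @ concept_dir                 # how strongly the concept is active per token
    flagged = (score.abs() > THRESHOLD).unsqueeze(-1)
    if flagged.any():
        # Project the concept direction out of the flagged activations (ablation-style edit).
        output = output - (output @ concept_dir).unsqueeze(-1) * concept_dir * flagged
    return output

handle = block.register_forward_hook(monitor_and_intervene)
tokens = torch.randn(8, d_model)                 # pretend per-token residual inputs
out = block(tokens)                              # intervention happens during the forward pass
handle.remove()
```

The key point is timing: the check and edit run inside the forward pass, so behavior is adjusted during generation rather than filtered after the output exists.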

In partnership with institutions like Mayo Clinic, Goodfire applied interpretability tools to specialized foundation models. This work identified previously unknown biomarkers for Alzheimer's, showcasing how understanding a model's internals can lead to tangible scientific breakthroughs.

For AI systems to be adopted in scientific labs, they must be interpretable. Researchers need to understand the 'why' behind an AI's experimental plan to validate and trust the process, making interpretability a more critical feature than raw predictive power.

Instead of pure academic exploration, Goodfire tests state-of-the-art interpretability techniques on customer problems. The shortcomings and failures they encounter directly inform their fundamental research priorities, ensuring their work remains commercially relevant.

Goodfire AI defines interpretability broadly, focusing on applying research to high-stakes production scenarios like healthcare. This strategy aims to bridge the gap between theoretical understanding and the practical, real-world application of AI models.