Untangling Neural Network Mechanisms: Goodfire's Lee Sharkey on Parameter-based Interpretability

Today Lee Sharkey of Goodfire joins The Cognitive Revolution to discuss his research on parameter decomposition methods that break down neural networks into interpretable computational components, exploring how his team's "stochastic parameter decomposition" approach addresses the limitations of sparse autoencoders and offers new pathways for understanding, monitoring, and potentially steering AI systems at the mechanistic level.
Check out our sponsors: Oracle Cloud Infrastructure, Shopify.
Shownotes below brought to you by Notion AI Meeting Notes - try one month for free at: https://notion.com/lp/nathan
- Parameter vs. Activation Decomposition: Traditional interpretability methods like Sparse Autoencoders (SAEs) focus on analyzing activations, while parameter decomposition focuses on understanding the parameters themselves - the actual "algorithm" of the neural network.
- No "True" Decomposition: None of the decompositions (whether sparse dictionary learning or parameter decomposition) are objectively "right" because they're all attempting to discretize a fundamentally continuous object, inevitably introducing approximations.
- Tradeoff in Interpretability: There's a balance between reconstruction loss and causal importance - decomposing a network into more components can worsen reconstruction loss, but interpretability may keep improving up to a certain point.
- Potential Unlearning Applications: Parameter decomposition may make unlearning more straightforward than with SAEs because researchers are already working in parameter space and can directly modify vectors that perform specific functions.
- Function Detection vs. Input Direction: A function like "deception" might manifest in many different input directions that SAEs struggle to identify as a single concept, while parameter decomposition might better isolate such functionality.
- Knowledge Extraction Goal: A key aim is to extract knowledge from models by understanding how they "think," especially for tasks where models demonstrate superhuman capabilities.
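To make the reconstruction-vs-granularity tradeoff above concrete, here is a toy sketch in Python. This is not Sharkey's stochastic parameter decomposition method - it uses plain SVD as a stand-in to show the general idea of expressing a layer's parameters as a sum of components and watching reconstruction error as components are added back:

```python
import numpy as np

# Toy illustration only (SVD stand-in, not the actual SPD method):
# decompose one layer's weight matrix into a sum of rank-one
# "parameter components", then check how well sparse subsets of
# those components reconstruct the original parameters.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # a single layer's weight matrix

U, s, Vt = np.linalg.svd(W)
components = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s))]

# Summing all components recovers the original parameters exactly.
W_full = np.sum(components, axis=0)
assert np.allclose(W_full, W)

# Using only the first k components approximates W; the relative
# reconstruction error shrinks as more components are included.
for k in (2, 4, 8):
    W_k = np.sum(components[:k], axis=0)
    err = np.linalg.norm(W - W_k) / np.linalg.norm(W)
    print(f"{k} components: relative error {err:.3f}")
```

The real method discussed in the episode identifies components by their causal importance to the network's behavior rather than by singular values, but the bookkeeping - parameters as a sum of interpretable pieces, with an error term when you keep only a few - is the same shape.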
Sponsors:
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) About the Episode
(06:07) Introduction and Background
(10:09) Parameter Decomposition Basics (Part 1)
(21:29) Sponsor: Oracle Cloud Infrastructure
(22:38) Parameter Decomposition Basics (Part 2)
(34:23) Computational Challenges Explored (Part 1)
(36:16) Sponsor: Shopify
(38:12) Computational Challenges Explored (Part 2)
(49:39) Loss Functions Optimization
(01:03:27) Method Limitations Discussed
(01:09:11) Stochastic Parameter Decomposition
(01:30:46) Causal Importance Approach
(01:44:15) Feature Splitting Solutions
(01:55:25) Future Applications Scaling
(02:00:36) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...