Are Sparse Autoencoders Useful? A Case Study in Sparse Probing

Kavli Affiliate: Max Tegmark

| First 5 Authors: Subhash Kantamneni, Joshua Engels, Senthooran Rajamanoharan, Max Tegmark, Neel Nanda

| Summary:

Sparse autoencoders (SAEs) are a popular method for interpreting concepts
represented in large language model (LLM) activations. However, evidence for
the validity of their interpretations is limited, since there is no ground
truth for the concepts an LLM uses, and a growing number of works have
identified problems with current SAEs. One alternative source of evidence
would be demonstrating that SAEs improve performance on downstream tasks beyond
existing baselines. We test this by applying SAEs to the real-world task of LLM
activation probing in four regimes: data scarcity, class imbalance, label
noise, and covariate shift. Due to the difficulty of detecting concepts in
these challenging settings, we hypothesize that SAEs’ basis of interpretable,
concept-level latents should provide a useful inductive bias. However, although
SAEs occasionally perform better than baselines on individual datasets, we are
unable to design ensemble methods combining SAEs with baselines that
consistently outperform ensemble methods solely using baselines. Additionally,
although SAEs initially appear promising for identifying spurious correlations,
detecting poor dataset quality, and training multi-token probes, we achieve
similar results with simple non-SAE baselines. Though we cannot
discount SAEs’ utility on other tasks, our findings highlight the shortcomings
of current SAEs and the need to rigorously evaluate interpretability methods on
downstream tasks with strong baselines.
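
To make the probing setup concrete, here is a minimal, hypothetical sketch of the comparison described above: a logistic regression probe trained directly on LLM activations (the baseline) versus the same probe trained on SAE latent activations. The synthetic data, the random ReLU "SAE" encoder, the dimensions, and the small-training-set regime are all illustrative assumptions for the sketch, not the paper's actual models or datasets.

```python
# Hypothetical sketch: baseline probe on raw activations vs. probe on SAE latents.
# All data and the "SAE" encoder below are synthetic stand-ins, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
d_model, d_sae, n_train, n_test = 64, 512, 100, 500  # small n_train mimics data scarcity

# Stand-in "LLM activations": a single concept direction plus Gaussian noise.
concept = rng.normal(size=d_model)
def sample(n):
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d_model)) + np.outer(y, concept)
    return X, y

X_train, y_train = sample(n_train)
X_test, y_test = sample(n_test)

# Stand-in SAE encoder: random weights with a ReLU, producing sparse latents.
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
b_enc = -0.5 * np.ones(d_sae)
encode = lambda X: np.maximum(X @ W_enc + b_enc, 0.0)

# Train and evaluate the same probe on both feature spaces.
for name, (Xtr, Xte) in {
    "raw activations": (X_train, X_test),
    "SAE latents": (encode(X_train), encode(X_test)),
}.items():
    probe = LogisticRegression(max_iter=1000).fit(Xtr, y_train)
    acc = accuracy_score(y_test, probe.predict(Xte))
    print(f"{name}: test accuracy = {acc:.3f}")
```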

| Search Query: ArXiv Query: search_query=au:"Max Tegmark"&id_list=&start=0&max_results=3
