Low-Rank Adapting Models for Sparse Autoencoders

Kavli Affiliate: Max Tegmark

| First 5 Authors: Matthew Chen, Joshua Engels, Max Tegmark

| Summary:

Sparse autoencoders (SAEs) decompose language model representations into a
sparse set of linear latent vectors. Recent works have improved SAEs using
language model gradients, but these techniques require many expensive backward
passes during training and still cause a significant increase in cross entropy
loss when SAE reconstructions are inserted into the model. In this work, we
improve on these limitations by taking a fundamentally different approach: we
use low-rank adaptation (LoRA) to finetune the language model itself
around a previously trained SAE. We analyze our method across SAE sparsity, SAE
width, language model size, LoRA rank, and model layer on the Gemma Scope
family of SAEs. In these settings, our method reduces the cross entropy loss
gap by 30% to 55% when SAEs are inserted during the forward pass. We also
find that compared to end-to-end (e2e) SAEs, our approach achieves the same
downstream cross entropy loss 3× to 20× faster on Gemma and
2× to 10× faster on Llama. We further show that our technique
improves downstream metrics and can adapt multiple SAEs at once without harming
general language model capabilities. Our results demonstrate that improving
model interpretability is not limited to post-hoc SAE training; Pareto
improvements can also be achieved by directly optimizing the model itself.
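
As a rough illustration of the setup described in the summary, the sketch below splices a frozen, pretrained SAE into one layer of a causal language model via a forward hook and trains only LoRA adapters against the usual next-token cross entropy loss. This is not the authors' released code: the FrozenSAE class, the choice of layer, the model name, and the LoRA hyperparameters are all illustrative assumptions (Gemma Scope SAEs, for instance, use a JumpReLU activation rather than the plain ReLU shown here).

import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model


class FrozenSAE(nn.Module):
    """Stand-in for a pretrained sparse autoencoder (e.g. a Gemma Scope SAE)."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)
        for p in self.parameters():
            p.requires_grad_(False)  # the SAE stays fixed; only LoRA params train

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))  # reconstruction of the activation


model_name = "google/gemma-2-2b"  # illustrative; any causal LM with accessible layers
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

hidden = model.config.hidden_size
sae = FrozenSAE(hidden, d_sae=16 * hidden)
LAYER = 12  # illustrative: the layer whose output the SAE reconstructs


def splice_sae(module, inputs, output):
    # Replace the layer's hidden states with the SAE reconstruction so the
    # downstream cross entropy loss "sees" the reconstruction error.
    hidden_states = output[0] if isinstance(output, tuple) else output
    recon = sae(hidden_states)
    return (recon,) + output[1:] if isinstance(output, tuple) else recon


model.model.layers[LAYER].register_forward_hook(splice_sae)

# Wrap the LM with low-rank adapters; only these small matrices are trained,
# while the base model weights and the SAE remain frozen.
lora_cfg = LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora_cfg)

# Standard language-modeling step: next-token cross entropy with the SAE
# reconstruction inserted during the forward pass.
batch = tokenizer("Sparse autoencoders decompose representations.", return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()

In this setup the gradient flows through the frozen SAE into the LoRA parameters, so the adapted model learns to produce activations that the fixed SAE can reconstruct with lower downstream loss.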
