Not All Language Model Features Are Linear

Kavli Affiliate: Max Tegmark

| First 5 Authors: Joshua Engels, Eric J. Michaud, Isaac Liao, Wes Gurnee, Max Tegmark

| Summary:

Recent work has proposed that language models perform computation by
manipulating one-dimensional representations of concepts ("features") in
activation space. In contrast, we explore whether some language model
representations may be inherently multi-dimensional. We begin by developing a
rigorous definition of irreducible multi-dimensional features based on whether
they can be decomposed into either independent or non-co-occurring
lower-dimensional features. Motivated by these definitions, we design a
scalable method that uses sparse autoencoders to automatically find
multi-dimensional features in GPT-2 and Mistral 7B. These auto-discovered
features include strikingly interpretable examples, e.g., circular features
representing days of the week and months of the year. We identify tasks where
these exact circles are used to solve computational problems involving modular
arithmetic in days of the week and months of the year. Next, we provide
evidence that these circular features are indeed the fundamental unit of
computation in these tasks through intervention experiments on Mistral 7B and
Llama 3 8B. Finally, we find further circular representations by breaking down
the hidden states for these tasks into interpretable components, and we examine
the continuity of the days of the week feature in Mistral 7B.
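
As a rough formalization of the decomposition criterion mentioned in the summary (a paraphrase of the abstract's wording, not the paper's exact definitions, with notation ours): a candidate multi-dimensional feature f is reducible if some rotation splits it into lower-dimensional parts that are either statistically independent or never active together.

```latex
% Rough paraphrase of the reducibility criterion; notation is ours.
% f is reducible if there exists a rotation R splitting it as
\[
R\,\mathbf{f} = (\mathbf{a}, \mathbf{b}), \qquad
p(\mathbf{a}, \mathbf{b}) = p(\mathbf{a})\,p(\mathbf{b})
\;\;\text{(independent)} \quad \text{or} \quad
p(\mathbf{a} \neq \mathbf{0},\ \mathbf{b} \neq \mathbf{0}) = 0
\;\;\text{(non-co-occurring)}.
\]
% An irreducible multi-dimensional feature admits no such split.
```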
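
The circular day-of-the-week feature can be made concrete with a small sketch. This is an illustration of the geometry only, not the authors' code; all helper names are hypothetical. It embeds day d at angle 2πd/7 on the unit circle, so that "add k days" (addition mod 7) becomes a rotation by 2πk/7.

```python
import numpy as np

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def day_to_circle(d: int) -> np.ndarray:
    """Embed day index d (0-6) as a point on the unit circle."""
    theta = 2 * np.pi * d / 7
    return np.array([np.cos(theta), np.sin(theta)])

def rotate(point: np.ndarray, k: int) -> np.ndarray:
    """Advance a day by k steps: addition mod 7 becomes a rotation
    by 2*pi*k/7 on the circle."""
    theta = 2 * np.pi * k / 7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ point

def circle_to_day(point: np.ndarray) -> int:
    """Decode a point back to the nearest day index."""
    theta = np.arctan2(point[1], point[0]) % (2 * np.pi)
    return int(np.round(theta * 7 / (2 * np.pi))) % 7

# "Monday + 4 days" computed as a rotation on the circle:
assert DAYS[circle_to_day(rotate(day_to_circle(0), 4))] == "Fri"
```

The same construction with period 12 gives the months-of-the-year circle; the point of the geometry is that modular arithmetic, which has no natural one-dimensional linear representation, becomes composition of rotations in two dimensions.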

| Search Query: ArXiv Query: search_query=au:"Max Tegmark"&id_list=&start=0&max_results=3
