Seeing is Believing: Brain-Inspired Modular Training for Mechanistic Interpretability

Kavli Affiliate: Max Tegmark

| First 5 Authors: Ziming Liu, Eric Gan, Max Tegmark

| Summary:

We introduce Brain-Inspired Modular Training (BIMT), a method for making
neural networks more modular and interpretable. Inspired by brains, BIMT embeds
neurons in a geometric space and augments the loss function with a cost
proportional to the length of each neuron connection. We demonstrate that BIMT
discovers useful modular neural networks for many simple tasks, revealing
compositional structures in symbolic formulas, interpretable decision
boundaries and features for classification, and mathematical structure in
algorithmic datasets. The ability to directly see modules with the naked eye
can complement current mechanistic interpretability strategies such as probes,
interventions, or staring at all weights.
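To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of a wiring-length penalty: each neuron is assigned a fixed coordinate, and every weight is penalized in proportion to the distance between the neurons it connects. The layer sizes, coordinate layout, and exact penalty form below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DistancePenalizedMLP(nn.Module):
    """Toy MLP with a BIMT-style wiring-length penalty (illustrative sketch).

    Each layer's neurons are placed evenly on [0, 1]; consecutive layers sit
    one unit apart. The penalty for a weight w_ij is |w_ij| times the distance
    between the neurons it connects, so training prefers short, local wiring.
    """

    def __init__(self, sizes=(2, 16, 16, 1)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)
        )
        # Fixed neuron coordinates along each layer (hypothetical layout).
        self.coords = [torch.linspace(0, 1, n) for n in sizes]

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.relu(layer(x))
        return self.layers[-1](x)

    def wiring_cost(self):
        cost = 0.0
        for k, layer in enumerate(self.layers):
            x_in, x_out = self.coords[k], self.coords[k + 1]
            # Pairwise distance between output and input neurons;
            # the +1.0 accounts for the unit vertical gap between layers.
            dist = torch.sqrt((x_out[:, None] - x_in[None, :]) ** 2 + 1.0)
            cost = cost + (layer.weight.abs() * dist).sum()
        return cost


# Usage: add the wiring cost to the task loss with a small coefficient.
model = DistancePenalizedMLP()
x, y = torch.randn(32, 2), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y) + 1e-3 * model.wiring_cost()
loss.backward()
```

With this kind of penalty, weights between distant neurons are driven toward zero, so the surviving connections tend to cluster into spatially localized modules that can be inspected visually.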

| Search Query: ArXiv Query: search_query=au:"Max Tegmark"&id_list=&start=0&max_results=3
