Opening the AI black box: program synthesis via mechanistic interpretability

Kavli Affiliate: Max Tegmark

| First 5 Authors: Eric J. Michaud, Isaac Liao, Vedang Lad, Ziming Liu, Anish Mudide

| Summary:

We present MIPS, a novel method for program synthesis based on automated
mechanistic interpretability of neural networks trained to perform the desired
task, auto-distilling the learned algorithm into Python code. We test MIPS on a
benchmark of 62 algorithmic tasks that can be learned by an RNN and find it
highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are
not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to
convert the RNN into a finite state machine, then applies Boolean or integer
symbolic regression to capture the learned algorithm. In contrast to large
language models, this program synthesis technique makes no use of (and is
therefore not limited by) human training data such as algorithms and code from
GitHub. We discuss opportunities and challenges for scaling up this approach to
make machine-learned models more interpretable and trustworthy.
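
To make the pipeline concrete, here is a minimal sketch of the same idea on a toy task, not the authors' implementation: a small RNN is trained on running parity of a bit stream, k-means clustering stands in for the integer autoencoder that discretizes hidden states into a finite state machine, and brute-force enumeration over a tiny space of Boolean formulas stands in for the symbolic regression step. The task, the library choices (PyTorch, scikit-learn), and every identifier below are assumptions for illustration.

```python
from collections import Counter, defaultdict

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)

# 1. Train a tiny RNN on a toy algorithmic task: the running parity
#    (cumulative XOR) of a bit sequence, supervised at every timestep.
rnn = nn.RNN(input_size=1, hidden_size=4, batch_first=True)
head = nn.Linear(4, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-2)

def make_batch(n=64, T=12):
    x = torch.randint(0, 2, (n, T, 1)).float()
    y = torch.cumsum(x, dim=1) % 2  # parity label at every timestep
    return x, y

for _ in range(2000):
    x, y = make_batch()
    h, _ = rnn(x)
    loss = nn.functional.binary_cross_entropy_with_logits(head(h), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2. Discretize the hidden states: k-means clustering plays the role of
#    the integer autoencoder, mapping each hidden vector to an integer state.
x, _ = make_batch(n=256)
with torch.no_grad():
    h, _ = rnn(x)
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    h.reshape(-1, 4).numpy()).reshape(256, -1)
bits = x.squeeze(-1).long().numpy()

# 3. Read off the finite state machine's transition table empirically,
#    taking a majority vote in case the clustering is noisy.
counts = defaultdict(Counter)
for seq_s, seq_b in zip(states, bits):
    for t in range(len(seq_b) - 1):
        counts[(seq_s[t], seq_b[t + 1])][seq_s[t + 1]] += 1
table = {key: c.most_common(1)[0][0] for key, c in counts.items()}
print("transition table:", table)

# 4. "Symbolic regression" in miniature: brute-force a small space of
#    Boolean formulas for next_state = f(state, input_bit).
candidates = {
    "state and bit": lambda s, b: int(bool(s and b)),
    "state or bit":  lambda s, b: int(bool(s or b)),
    "state xor bit": lambda s, b: int(s ^ b),
    "not state":     lambda s, b: 1 - s,
}
for name, f in candidates.items():
    if all(f(s, b) == ns for (s, b), ns in table.items()):
        print("recovered transition rule: next_state =", name)
```

On this toy task the recovered rule should be next_state = state xor bit, i.e., the two-state parity automaton expressed as code; the actual MIPS pipeline replaces the clustering and brute-force steps with its integer autoencoder and Boolean/integer symbolic regression to handle a much richer space of learned algorithms.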
