MEMOIR: Lifelong Model Editing with Minimal Overwrite and Informed Retention for LLMs

Kavli Affiliate: Ke Wang

| First 5 Authors: Ke Wang

| Summary:

Language models deployed in real-world systems often require post-hoc updates
to incorporate new or corrected knowledge. However, editing such models
efficiently and reliably, without retraining or forgetting previous
information, remains a major challenge. Existing methods for lifelong model
editing either compromise generalization, interfere with past edits, or fail to
scale to long editing sequences. We propose MEMOIR, a scalable framework
that injects knowledge through a residual memory, i.e., a dedicated parameter
module, while preserving the core capabilities of the pre-trained model. By
sparsifying input activations through sample-dependent masks, MEMOIR confines
each edit to a distinct subset of the memory parameters, minimizing
interference among edits. At inference, it identifies relevant edits by
comparing the sparse activation patterns of new queries to those stored during
editing. This enables generalization to rephrased queries by activating only
the relevant knowledge while suppressing unnecessary memory activation for
unrelated prompts. Experiments on question answering, hallucination correction,
and out-of-distribution generalization benchmarks for LLaMA-3 and Mistral
backbones demonstrate that MEMOIR achieves state-of-the-art performance across
reliability, generalization, and locality metrics, scaling to thousands of
sequential edits with minimal forgetting.
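
To make the described mechanism concrete, below is a minimal NumPy sketch of the core idea from the summary: a residual memory whose columns are gated by a sample-dependent sparse mask, edits written only into the masked subset of parameters, and inference-time retrieval by comparing a query's activation pattern to the patterns stored during editing. The class name, the top-k masking rule, the overlap threshold, and the gradient-style write are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

class ResidualMemorySketch:
    """Toy sketch of the MEMOIR idea (not the authors' code):
    a residual linear memory whose columns are selected by a
    sample-dependent top-k mask over the input activations."""

    def __init__(self, d_in, d_out, k):
        self.W_mem = np.zeros((d_out, d_in))  # residual memory, starts empty
        self.k = k                            # sparsity level of the mask
        self.stored_masks = []                # activation patterns seen during editing

    def _mask(self, h):
        # Sample-dependent mask: keep only the k largest-magnitude activations.
        idx = np.argsort(np.abs(h))[-self.k:]
        m = np.zeros_like(h)
        m[idx] = 1.0
        return m

    def edit(self, h, target_residual, lr=0.1):
        """Write an edit into the memory, touching only the masked columns."""
        m = self._mask(h)
        self.stored_masks.append(m)
        h_sparse = h * m
        # One illustrative gradient-style update restricted to the masked inputs;
        # the actual method computes the memory update differently.
        err = target_residual - self.W_mem @ h_sparse
        self.W_mem += lr * np.outer(err, h_sparse)

    def forward(self, h, overlap_thresh=0.5):
        """Apply the memory only if the query's mask overlaps a stored edit."""
        m = self._mask(h)
        if self.stored_masks:
            overlaps = [(m * s).sum() / self.k for s in self.stored_masks]
            if max(overlaps) >= overlap_thresh:
                return self.W_mem @ (h * m)    # relevant edit: activate memory
        return np.zeros(self.W_mem.shape[0])   # unrelated prompt: suppress memory

# Tiny usage example with random activations (purely illustrative):
d = 64
mem = ResidualMemorySketch(d_in=d, d_out=d, k=8)
h_edit = np.random.default_rng(0).standard_normal(d)
mem.edit(h_edit, target_residual=np.ones(d))
print(np.linalg.norm(mem.forward(h_edit)))   # memory fires for the edited prompt
h_other = np.random.default_rng(1).standard_normal(d)
print(np.linalg.norm(mem.forward(h_other)))  # typically zero for an unrelated prompt
```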

| Search Query: ArXiv Query: search_query=au:"Ke Wang"&id_list=&start=0&max_results=3
