Kavli Affiliate: Jing Wang
| First 5 Authors: Xiaopeng Li, Shasha Li, Shezheng Song, Huijun Liu, Bin Ji
| Summary:
The general capabilities of large language models (LLMs) make them the
infrastructure for various AI applications, but updating their inner knowledge
requires significant resources. Model editing has recently emerged as a promising technique for efficiently updating a small amount of knowledge in LLMs and has attracted much attention. In particular, local editing methods, which directly update model parameters, are especially well suited to such small-scale updates.
Local editing methods update weights by computing closed-form least-squares solutions and identify edited knowledge by vector-level matching at inference time, achieving promising results. However, these methods still require substantial time and resources to complete the computation. Moreover, vector-level matching lacks reliability, and such updates disrupt the original organization of the model’s parameters. To address these issues, we propose a detachable and
expandable Subject Word Embedding Altering (SWEA) framework, which finds the
editing embeddings through token-level matching and adds them to the subject
word embeddings at the Transformer input. To obtain these editing embeddings, we propose an optimizing-then-suppressing (OS) fusion method, which first optimizes learnable embedding vectors for the editing target and then suppresses the Knowledge Embedding Dimensions (KEDs) to obtain the final editing embeddings. We thus propose the SWEA$\oplus$OS method for editing factual knowledge in LLMs. We demonstrate the overall state-of-the-art (SOTA) performance of SWEA$\oplus$OS
on the \textsc{CounterFact} and zsRE datasets. To further validate the reasoning ability of SWEA$\oplus$OS in editing knowledge, we evaluate it on the more complex \textsc{RippleEdits} benchmark. The results demonstrate that SWEA$\oplus$OS possesses SOTA reasoning ability.
| Search Query: ArXiv Query: search_query=au:”Jing Wang”&id_list=&start=0&max_results=3
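
A rough illustration of the input-side editing idea summarized in the abstract (not the authors' implementation; the names editing_table and apply_swea, the use of PyTorch, and the table layout are assumptions): cached editing embeddings are added to matched subject token spans before they enter the Transformer.

import torch

# Hypothetical sketch of token-level matching at the embedding layer.
# editing_table maps a tuple of subject token ids to an editing embedding
# of shape (len(subject_ids), hidden_dim); how these embeddings are produced
# (the optimizing-then-suppressing fusion over KEDs) is not shown here.
editing_table: dict[tuple[int, ...], torch.Tensor] = {}

def apply_swea(input_ids: torch.Tensor, input_embeds: torch.Tensor) -> torch.Tensor:
    """Add editing embeddings to subject token spans found in the prompt.

    input_ids:    (seq_len,) token ids of the prompt
    input_embeds: (seq_len, hidden_dim) output of the model's embedding layer
    """
    ids = input_ids.tolist()
    for subject_ids, edit_vec in editing_table.items():
        n = len(subject_ids)
        # Token-level matching: scan for the subject's exact token span.
        for start in range(len(ids) - n + 1):
            if tuple(ids[start:start + n]) == subject_ids:
                input_embeds[start:start + n] += edit_vec
    return input_embeds

Because the table lives outside the model's weights, an edit can be removed or extended by deleting or adding an entry, which is consistent with the "detachable and expandable" framing in the abstract.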