Precision Knowledge Editing: Enhancing Safety in Large Language Models

Kavli Affiliate: Zhuo Li

| First 5 Authors: Xuying Li, Zhuo Li, Yuji Kosuga, Yasuhiro Yoshida, Victor Bian

| Summary:

Large language models (LLMs) have demonstrated remarkable capabilities, but
they also pose risks related to the generation of toxic or harmful content.
This work introduces Precision Knowledge Editing (PKE), an advanced technique
that builds upon existing knowledge editing methods to more effectively
identify and modify toxic parameter regions within LLMs. By leveraging neuron
weight tracking and activation pathway tracing, PKE achieves finer granularity
in toxic content management compared to previous methods like Detoxifying
Instance Neuron Modification (DINM). Our experiments demonstrate that PKE
significantly reduces the attack success rate (ASR) across various models,
including Llama2-7b and Llama-3-8b-instruct, while maintaining overall model
performance. We also compared against two closed-source models (gpt-4-0613 and
Claude 3 Sonnet) in our experiments and found that models adjusted with PKE
substantially outperformed them in terms of safety. This research contributes
to the ongoing efforts to make LLMs safer and more reliable for real-world
applications.
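As a rough illustration of the activation pathway tracing the summary describes, the sketch below uses forward hooks to record per-neuron MLP activations on a Hugging Face Llama model and ranks layers by their most strongly firing neuron. The hook placement, aggregation, and ranking here are assumptions for illustration only, not the paper's exact PKE procedure.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # one of the open models evaluated
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # inputs[0] has shape (batch, seq, intermediate_size): the MLP
        # neuron activations feeding the down-projection. Record the
        # mean absolute activation per neuron across the prompt.
        activations[name] = inputs[0].detach().abs().mean(dim=(0, 1))
    return hook

# Hook each transformer block's MLP down-projection; its input exposes
# per-neuron activations, a common localization target in knowledge editing.
handles = [
    layer.mlp.down_proj.register_forward_hook(make_hook(f"layer_{i}"))
    for i, layer in enumerate(model.model.layers)
]

prompt = "..."  # placeholder: a prompt known to elicit harmful output
with torch.no_grad():
    model(**tokenizer(prompt, return_tensors="pt"))

# Rank layers by peak neuron activation to flag candidate toxic regions
# for subsequent, finer-grained editing.
ranked = sorted(activations.items(), key=lambda kv: kv[1].max().item(), reverse=True)
for name, acts in ranked[:5]:
    print(name, acts.max().item())

for h in handles:
    h.remove()

In practice one would contrast these activation profiles against those from benign prompts, so that only neurons selectively active on harmful inputs are marked for modification.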

| Search Query: ArXiv Query: search_query=au:"Zhuo Li"&id_list=&start=0&max_results=3
