LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging

Kavli Affiliate: Ke Wang

| First 5 Authors: Ke Wang, Nikolaos Dimitriadis, Alessandro Favero, Guillermo Ortiz-Jimenez, Francois Fleuret

| Summary:

Large pre-trained models exhibit impressive zero-shot performance across
diverse tasks, but fine-tuning often leads to catastrophic forgetting, where
improvements on a target domain degrade generalization on other tasks. To
address this challenge, we introduce LiNeS, Layer-increasing Network Scaling, a
post-training editing technique designed to preserve pre-trained generalization
while enhancing fine-tuned task performance. LiNeS scales parameter updates
linearly based on their layer depth within the network, maintaining shallow
layers close to their pre-trained values to preserve general features while
allowing deeper layers to retain task-specific representations. We further
extend this approach to multi-task model merging scenarios, where layer-wise
scaling of merged parameters reduces negative task interference. LiNeS
demonstrates significant improvements in both single-task and multi-task
settings across various benchmarks in vision and natural language processing.
It mitigates forgetting, enhances out-of-distribution generalization,
integrates seamlessly with existing multi-task model merging baselines,
improving their performance across benchmarks and model sizes, and can boost
generalization when merging LLM policies aligned with different rewards via
RLHF. Importantly, our method is simple to implement and complementary to many
existing techniques.
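The core edit described above, scaling each layer's parameter update linearly with its depth, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear schedule from `alpha` at the first layer to 1 at the last layer, and the `alpha` value, are assumptions for demonstration.

```python
def lines_scale(pretrained, finetuned, alpha=0.1):
    """Sketch of layer-increasing scaling of fine-tuning updates.

    `pretrained` and `finetuned` are per-layer weight lists (shallow
    to deep). Shallow layers are kept close to their pre-trained
    values; deeper layers retain most of their task-specific update.
    The schedule lambda_l = alpha + (1 - alpha) * l / (L - 1) is an
    illustrative choice, not necessarily the paper's exact one.
    """
    L = len(pretrained)
    edited = []
    for l, (w0, w1) in enumerate(zip(pretrained, finetuned)):
        scale = alpha + (1.0 - alpha) * l / (L - 1) if L > 1 else 1.0
        # Interpolate: keep the pre-trained weight plus a depth-scaled update.
        edited.append(w0 + scale * (w1 - w0))
    return edited
```

For multi-task merging, the same depth-dependent scaling would be applied to the merged (e.g., averaged) task vectors rather than to a single task's update.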

| Search Query: ArXiv Query: search_query=au:"Ke Wang"&id_list=&start=0&max_results=3
