Kavli Affiliate: Ke Wang
| First 5 Authors: Ke Wang, Nikolaos Dimitriadis, Alessandro Favero, Guillermo Ortiz-Jimenez, Francois Fleuret
| Summary:
Fine-tuning pre-trained models has become the standard approach to endow them
with specialized knowledge, but it poses fundamental challenges. In particular,
(i) fine-tuning often leads to catastrophic forgetting, where
improvements on a target domain degrade generalization on other tasks, and
(ii) merging fine-tuned checkpoints from disparate tasks can lead to
significant performance loss. To address these challenges, we introduce LiNeS,
Layer-increasing Network Scaling, a post-training editing technique designed to
preserve pre-trained generalization while enhancing fine-tuned task
performance. LiNeS scales parameter updates linearly based on their layer depth
within the network, maintaining shallow layers close to their pre-trained
values to preserve general features while allowing deeper layers to retain
task-specific representations. In multi-task model merging scenarios,
layer-wise scaling of merged parameters reduces negative task interference.
LiNeS demonstrates significant improvements in both single-task and multi-task
settings across various benchmarks in vision and natural language processing.
It mitigates forgetting, enhances out-of-distribution generalization,
integrates seamlessly with existing multi-task model merging baselines,
improving their performance across benchmarks and model sizes, and can boost
generalization when merging LLM policies aligned with different rewards via
RLHF. Our method is simple to implement, computationally efficient, and
complementary to many existing techniques. Our source code is available at
https://github.com/wang-kee/LiNeS
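
The core idea described above (scaling each layer's parameter update linearly with its depth) can be sketched in a few lines. This is a minimal illustration with hypothetical function and variable names, not the authors' implementation; the official code is in the linked repository. It assumes parameters are given as ordered per-layer values and that the scaling interpolates linearly between a shallow-layer coefficient and a deep-layer coefficient.

```python
def lines_rescale(pretrained, finetuned, alpha=0.0, beta=1.0):
    """Sketch of layer-increasing network scaling (hypothetical helper).

    pretrained / finetuned: per-layer parameter values, ordered from
    shallowest to deepest layer.

    Layer l (0-indexed, L layers total) scales its update by
        lambda_l = alpha + (beta - alpha) * l / (L - 1),
    so shallow layers stay close to their pre-trained values (preserving
    general features) while deeper layers retain most of the fine-tuned,
    task-specific update.
    """
    num_layers = len(pretrained)
    edited = []
    for l, (theta0, theta_ft) in enumerate(zip(pretrained, finetuned)):
        scale = alpha + (beta - alpha) * l / max(num_layers - 1, 1)
        # Interpolate between the pre-trained value and the fine-tuned one.
        edited.append(theta0 + scale * (theta_ft - theta0))
    return edited
```

With `alpha=0` and `beta=1`, the shallowest layer is reset to its pre-trained value and the deepest layer keeps its full fine-tuned update, with intermediate layers interpolated linearly in between; the same rescaling can be applied to a merged multi-task update to reduce task interference.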
| Search Query: ArXiv Query: search_query=au:"Ke Wang"&id_list=&start=0&max_results=3