Kavli Affiliate: Zhuo Li
| First 5 Authors: Guodong Du, Xuanning Zhou, Junlin Li, Zhuo Li, Zesheng Shi
| Summary:
Cross-capability transfer is a key challenge in large language model (LLM)
research, with applications in multi-task integration, model compression, and
continual learning. Recent works like FuseLLM and FuseChat have demonstrated
the potential of transferring multiple model capabilities to lightweight
models, enhancing adaptability and efficiency, which motivates our
investigation into more efficient cross-capability transfer methods. However,
existing approaches primarily focus on small, homogeneous models, limiting
their applicability. For large, heterogeneous models, knowledge distillation
with full-parameter fine-tuning often overlooks the student model’s intrinsic
capacity and risks catastrophic forgetting, while parameter-efficient
fine-tuning (PEFT) methods struggle to absorb knowledge from source LLMs
effectively. To address these issues, we introduce GraftLLM, a novel method
that stores source-model capabilities in a target model in the form of a
SkillPack. This approach preserves general
capabilities, reduces parameter conflicts, and supports forget-free continual
learning and model fusion. We employ a module-aware adaptive compression
strategy for the parameter updates, ensuring efficient storage while
maintaining task-specific knowledge. The resulting SkillPack serves as a
compact and transferable knowledge carrier, ideal for heterogeneous model
fusion and continual learning. Experiments across various scenarios demonstrate
that GraftLLM outperforms existing techniques in knowledge transfer, knowledge
fusion, and forget-free learning, providing a scalable and efficient solution
for cross-capability transfer. The code is publicly available at:
https://github.com/duguodong7/GraftLLM.
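
As an illustration only, the sketch below shows one way "module-aware adaptive
compression of parameter updates" could work: per-module delta weights are
factorized with a truncated SVD whose rank is chosen adaptively from the
singular-value energy. The function names (compress_delta, apply_pack), the
SVD-based low-rank scheme, and the energy threshold are assumptions made for
this sketch, not the authors' implementation; see the repository above for the
actual method.

# Hypothetical sketch, NOT the GraftLLM implementation: keep, per module,
# only as many SVD components of the weight update as needed to retain a
# chosen fraction of its energy, so salient modules receive higher rank.
import torch

def compress_delta(base_state, tuned_state, energy=0.95):
    """Compress per-module weight updates into low-rank factors (a SkillPack-like dict)."""
    pack = {}
    for name, w_base in base_state.items():
        delta = tuned_state[name] - w_base
        if delta.ndim != 2:  # keep biases / norm parameters uncompressed
            pack[name] = ("dense", delta.clone())
            continue
        U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
        # Adaptive rank: smallest r capturing `energy` of the squared spectrum.
        cum = torch.cumsum(S**2, dim=0) / torch.sum(S**2)
        r = int(torch.searchsorted(cum, energy).item()) + 1
        pack[name] = ("lowrank", U[:, :r] * S[:r], Vh[:r, :])
    return pack

def apply_pack(base_state, pack):
    """Graft the stored updates back onto a base model's weights."""
    merged = {}
    for name, w_base in base_state.items():
        kind, *payload = pack[name]
        if kind == "dense":
            merged[name] = w_base + payload[0]
        else:
            us, vh = payload
            merged[name] = w_base + us @ vh
    return merged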
| Search Query: ArXiv Query: search_query=au:"Zhuo Li"&id_list=&start=0&max_results=3