Kavli Affiliate: Zhuo Li
| First 5 Authors: Zhisheng Lin, Han Fu, Chenghao Liu, Zhuo Li, Jianling Sun
| Summary:
Parameter-efficient fine-tuning (PEFT) has emerged as an effective method for
adapting pre-trained language models to various downstream tasks. Recently,
there has been a growing interest in transferring knowledge from one or
multiple tasks to the downstream target task to achieve performance
improvements. However, current approaches typically either train adapters on
individual tasks or distill shared knowledge from source tasks, failing to
fully exploit task-specific knowledge and the correlation between source and
target tasks. To overcome these limitations, we propose PEMT, a novel
parameter-efficient fine-tuning framework based on multi-task transfer
learning. PEMT extends the mixture-of-experts (MoE) framework to capture the
transferable knowledge as a weighted combination of adapters trained on source
tasks. These weights are determined by a gated unit, measuring the correlation
between the target and each source task using task description prompt vectors.
To fully exploit the task-specific knowledge, we also propose the Task Sparsity
Loss to improve the sparsity of the gated unit. We conduct experiments on a
broad range of tasks over 17 datasets. The experimental results demonstrate
that PEMT yields stable improvements over full fine-tuning and over
state-of-the-art PEFT and knowledge transfer methods on various tasks. The
results highlight the effectiveness of our method, which fully exploits the
knowledge and correlation features across multiple tasks.
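Below is a minimal PyTorch-style sketch of the mechanism the summary describes: a gated mixture of adapters trained on source tasks, with gate weights derived from comparing a target-task prompt vector against per-source-task prompt vectors, plus a sparsity penalty on the gate. The module names, shapes, gating form, and the exact form of the sparsity loss are illustrative assumptions based only on the summary, not the authors' PEMT implementation.

```python
# Illustrative sketch only: adapter shape, gating form, and the sparsity
# penalty are assumptions inferred from the summary, not the PEMT source code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckAdapter(nn.Module):
    """A standard bottleneck adapter: down-project, nonlinearity, up-project."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, h):
        return self.up(F.relu(self.down(h)))


class GatedAdapterMixture(nn.Module):
    """Mixture-of-experts over adapters pre-trained on source tasks.

    The gate compares a learnable target-task prompt vector with one prompt
    vector per source task and yields softmax weights over the adapters.
    """

    def __init__(self, source_adapters, source_prompts, d_model: int):
        super().__init__()
        self.adapters = nn.ModuleList(source_adapters)            # frozen source-task adapters
        self.register_buffer("source_prompts", source_prompts)    # (num_sources, d_model)
        self.target_prompt = nn.Parameter(torch.randn(d_model))   # learned for the target task

    def gate_weights(self):
        # Correlation between the target task and each source task (dot product).
        scores = self.source_prompts @ self.target_prompt         # (num_sources,)
        return F.softmax(scores, dim=-1)

    def forward(self, h):
        w = self.gate_weights()
        # Transferable knowledge as a weighted combination of adapter outputs.
        mixed = sum(w_i * adapter(h) for w_i, adapter in zip(w, self.adapters))
        return h + mixed, w


def task_sparsity_loss(gate_weights, eps: float = 1e-8):
    """Entropy penalty encouraging the gate to concentrate on a few source tasks.

    One plausible form of a "Task Sparsity Loss"; the paper's definition may differ.
    """
    return -(gate_weights * (gate_weights + eps).log()).sum()
```

In such a setup, the source adapters would stay frozen while the target prompt (and any target-specific parameters) are trained, with a term like `lambda * task_sparsity_loss(w)` added to the task loss so the gate favors the most relevant source tasks.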
| Search Query: ArXiv Query: search_query=au:"Zhuo Li"&id_list=&start=0&max_results=3