Kavli Affiliate: Ran Wang | First 5 Authors: Yufei Ma, Zihan Liang, Huangyu Dai, Ben Chen, Dehong Gao | Summary: The growing demand for larger-scale models in the development of Large Language Models (LLMs) poses challenges for efficient training within limited computational resources. Traditional fine-tuning methods often exhibit instability in multi-task learning and rely heavily […]
MoDULA: Mixture of Domain-Specific and Universal LoRA for Multi-Task Learning
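The abstract is truncated, but the title indicates a parameter-efficient design that combines a universal LoRA shared across all tasks with domain-specific LoRA experts for multi-task learning. Below is a minimal sketch of such a mixture layer in PyTorch, assuming a token-wise softmax router over the domain experts; the class names, rank, scaling, and gating scheme are illustrative assumptions, not the paper's actual MoDULA method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRA(nn.Module):
    """Standard low-rank adapter: adds (alpha / r) * B @ A on top of a frozen weight."""

    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.scale = alpha / r
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # up-projection, zero-initialized

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * F.linear(F.linear(x, self.A), self.B)


class MixtureLoRALayer(nn.Module):
    """Hypothetical layer: a frozen base linear layer plus one always-active
    universal LoRA and a softmax-gated mixture of domain-specific LoRAs."""

    def __init__(self, d_in: int, d_out: int, n_domains: int = 4, r: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.universal = LoRA(d_in, d_out, r)   # shared across all tasks
        self.experts = nn.ModuleList(LoRA(d_in, d_out, r) for _ in range(n_domains))
        self.router = nn.Linear(d_in, n_domains)  # token-wise gate over domain experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)                       # (..., n_domains)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (..., d_out, n_domains)
        mixed = (expert_out * gates.unsqueeze(-2)).sum(dim=-1)          # weighted expert sum
        return self.base(x) + self.universal(x) + mixed


if __name__ == "__main__":
    layer = MixtureLoRALayer(d_in=64, d_out=64)
    y = layer(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])
```

Freezing the base weight and zero-initializing each LoRA's up-projection keeps the layer's initial output identical to the pretrained model, so only the low-rank adapters and the router receive gradient updates during multi-task fine-tuning.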