Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation

Kavli Affiliate: Wei Gao

| First 5 Authors: Kai Huang, Hanyun Yin, Heng Huang, Wei Gao,

| Summary:

Fine-tuning is the most effective way of adapting pre-trained large language
models (LLMs) to downstream applications. With the fast growth of LLM-enabled
AI applications and the democratization of open-sourced LLMs, fine-tuning has
become feasible for non-expert individuals, but intensive LLM fine-tuning
performed worldwide could result in significant energy consumption and carbon
footprint, with a correspondingly large environmental impact. Mitigating this
impact towards Green AI directly correlates with reducing the FLOPs of
fine-tuning, but existing techniques for efficient LLM fine-tuning achieve only
limited FLOPs reduction because they ignore the backpropagation cost of
fine-tuning. To address this limitation, in this paper we present GreenTrainer,
a new LLM fine-tuning technique that adaptively evaluates different tensors'
backpropagation costs and contributions to the fine-tuned model's accuracy, and
minimizes the fine-tuning cost by selecting the most appropriate set of tensors
to train. This selection is made with respect to a given objective of FLOPs
reduction, which can flexibly adapt to the carbon footprint of the energy
supply and the needs of Green AI. Experimental results on multiple open-sourced
LLMs and abstractive summarization datasets show that, compared to fine-tuning
the whole model, GreenTrainer can save up to 64% of fine-tuning FLOPs without
any noticeable loss of model accuracy. Compared to existing fine-tuning
techniques such as LoRA, GreenTrainer can achieve up to a 4% improvement in
model accuracy with on-par FLOPs reduction.
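
The summary describes selecting which tensors to train based on their backpropagation cost and accuracy contribution, under a FLOPs-reduction objective. The sketch below is a minimal illustration of what such a budgeted selection could look like; the greedy heuristic, the function name, and the inputs (`costs`, `importances`, `rho`) are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch (not the authors' implementation): pick a subset of
# trainable tensors whose summed backpropagation FLOPs stay within a budget,
# while greedily favoring tensors with high estimated accuracy contribution.
# `rho` is the assumed target fraction of full fine-tuning FLOPs to keep.

def select_trainable_tensors(costs, importances, full_flops, rho):
    """Return indices of tensors to keep trainable under the FLOPs budget."""
    budget = rho * full_flops
    # Rank tensors by importance per unit of backprop cost.
    order = sorted(range(len(costs)),
                   key=lambda i: importances[i] / max(costs[i], 1e-12),
                   reverse=True)
    selected, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            selected.append(i)
            spent += costs[i]
    return selected


# Toy usage with made-up per-tensor numbers (arbitrary units):
costs = [4.0, 2.0, 1.0, 3.0]          # backprop FLOPs per tensor
importances = [0.9, 0.5, 0.4, 0.1]    # estimated contribution to accuracy
print(select_trainable_tensors(costs, importances, full_flops=10.0, rho=0.5))
```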

| Search Query: ArXiv Query: search_query=au:"Wei Gao"&id_list=&start=0&max_results=3