Kavli Affiliate: Zhuo Li
| First 5 Authors: Yuhao Du, Zhuo Li, Pengyu Cheng, Zhihong Chen, Yuejiao Xie
| Summary:
Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning
Large Language Models (LLMs) with human values. However, RLHF has long been
challenged by its implementation complexity and computational cost. Even with
recent simplifications, such as Direct Preference Optimization (DPO) and
Advantage Leftover Lunch (A-LoL), over-fitting and training instability still
keep the alignment process from reaching its expected optimal performance. To
address the
existing challenges, we propose a novel simplification of RLHF from the
perspective of variational inference, called $\textbf{V}$ariational
$\textbf{A}$lignment with $\textbf{R}$e-weighting ($\textbf{VAR}$). More
specifically, by directly minimizing the distribution gap between the learning
LLM policy and the optimal solution of RLHF, we transform the alignment
objective into a reward-driven re-weighted supervised fine-tuning (SFT) form,
which requires only a minor adjustment to the SFT loss to obtain a noticeable
improvement in training stability and effectiveness. On comprehensive alignment
and generation benchmarks, our VAR method empirically achieves competitive
performance in LLM alignment helpfulness and harmlessness.
| Search Query: ArXiv Query: search_query=au:"Zhuo Li"&id_list=&start=0&max_results=3
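| Illustration: The abstract describes turning the alignment objective into a reward-driven re-weighted SFT loss. Since the optimal RLHF policy is known to be proportional to $\pi_{\text{ref}}(y|x)\exp(r(x,y)/\beta)$, minimizing a divergence to it naturally leads to a likelihood objective weighted by the reward. The PyTorch sketch below shows one plausible form of such a loss; the weighting scheme (batch softmax over reward scores with temperature beta) and the name reward_weighted_sft_loss are assumptions for illustration, not the paper's exact VAR objective.

# Hypothetical sketch: reward-re-weighted SFT loss (assumed form, not the paper's exact objective).
import torch
import torch.nn.functional as F

def reward_weighted_sft_loss(logits, labels, rewards, beta=1.0, ignore_index=-100):
    # logits:  (batch, seq_len, vocab) -- assumed already shifted for next-token prediction
    # labels:  (batch, seq_len) target token ids; prompt/pad positions set to ignore_index
    # rewards: (batch,) scalar reward-model scores for each response
    nll = F.cross_entropy(
        logits.transpose(1, 2), labels,
        ignore_index=ignore_index, reduction="none",
    )                                              # token-level NLL, shape (batch, seq_len)
    mask = (labels != ignore_index).float()
    seq_nll = (nll * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)  # per-sequence NLL

    # Reward-driven re-weighting: higher-reward responses contribute more,
    # mimicking the exp(r/beta) tilt of the optimal RLHF policy (normalized over the batch).
    weights = torch.softmax(rewards / beta, dim=0)
    return (weights * seq_nll).sum()

In a training loop this would simply replace the plain cross-entropy SFT loss, with data loading, optimizer, and model untouched, consistent with the "minor adjustment to the SFT loss" described in the abstract.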