APLOT: Robust Reward Modeling via Adaptive Preference Learning with Optimal Transport

Kavli Affiliate: Zhuo Li

| First 5 Authors: Zhuo Li

| Summary:

The reward model (RM) plays a crucial role in aligning Large Language Models
(LLMs) with human preferences through Reinforcement Learning, where the
Bradley-Terry (BT) objective has been recognized as a simple yet powerful
choice for pairwise preference learning. However, BT-based RMs often struggle
to distinguish between similar responses, leading to insufficient separation
between preferred and non-preferred outputs. Consequently, they tend to overfit
easy samples and generalize poorly to Out-Of-Distribution (OOD) samples,
resulting in suboptimal performance. To
address these challenges, this paper introduces an effective enhancement to
BT-based RMs through an adaptive margin mechanism. Specifically, the margin
dynamically shifts the RM's focus toward more challenging samples, based on
both semantic similarity and model-predicted reward differences; this is
formulated from a distributional perspective and solved with Optimal Transport
(OT). By incorporating these factors into a principled OT cost matrix design,
our adaptive margin enables the RM to better capture distributional differences
between chosen and rejected responses, yielding significant improvements in
performance, convergence speed, and generalization capabilities. Experimental
results across multiple benchmarks demonstrate that our method outperforms
several existing RM techniques, showcasing enhanced performance in both
In-Distribution (ID) and OOD settings. Moreover, RLHF experiments confirm the
method's practical effectiveness in better aligning LLMs with human preferences.
Our code is available at https://github.com/BIRlz/APLOT.
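The summary mentions two technical pieces: a Bradley-Terry pairwise objective and an adaptive margin obtained by solving an entropic Optimal Transport problem over a cost matrix built from semantic similarity and predicted reward differences. The sketch below illustrates one plausible way to wire these together; the function names (sinkhorn, adaptive_margins, adaptive_margin_bt_loss), the cosine-similarity cost, the diagonal-mass hardness weight, and the hyperparameters alpha, epsilon, and scale are illustrative assumptions rather than the authors' released implementation (see the linked repository for that).

import math

import torch
import torch.nn.functional as F


def sinkhorn(cost, epsilon=0.1, n_iters=100):
    # Log-domain Sinkhorn iterations for entropic OT between uniform marginals.
    # Returns the transport plan P with the same shape as `cost`.
    n, m = cost.shape
    log_a = torch.full((n,), -math.log(n), dtype=cost.dtype)
    log_b = torch.full((m,), -math.log(m), dtype=cost.dtype)
    g = torch.zeros(m, dtype=cost.dtype)
    for _ in range(n_iters):
        f = epsilon * (log_a - torch.logsumexp((g[None, :] - cost) / epsilon, dim=1))
        g = epsilon * (log_b - torch.logsumexp((f[:, None] - cost) / epsilon, dim=0))
    return torch.exp((f[:, None] + g[None, :] - cost) / epsilon)


def adaptive_margins(chosen_emb, rejected_emb, chosen_r, rejected_r,
                     alpha=0.5, scale=1.0, epsilon=0.1):
    # Per-pair margins from an OT coupling over a hardness-aware cost matrix
    # (hypothetical construction): pairs whose responses are semantically
    # similar and whose predicted rewards are close get a low transport cost,
    # attract more OT mass, and therefore receive a larger margin.
    sim = F.cosine_similarity(chosen_emb[:, None, :], rejected_emb[None, :, :], dim=-1)
    reward_gap = (chosen_r[:, None] - rejected_r[None, :]).abs()
    cost = alpha * (1.0 - sim) + (1.0 - alpha) * reward_gap
    plan = sinkhorn(cost, epsilon=epsilon)
    # Diagonal entries correspond to the actual (chosen_i, rejected_i) pairs.
    hardness = torch.diagonal(plan)
    return (scale * hardness / (hardness.max() + 1e-8)).detach()


def adaptive_margin_bt_loss(chosen_r, rejected_r, margins):
    # Bradley-Terry loss with an additive per-pair margin:
    # -log sigmoid(r_chosen - r_rejected - margin).
    return -F.logsigmoid(chosen_r - rejected_r - margins).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    B, D = 8, 16  # batch of preference pairs, embedding dimension
    chosen_emb, rejected_emb = torch.randn(B, D), torch.randn(B, D)
    chosen_r = torch.randn(B, requires_grad=True)
    rejected_r = torch.randn(B, requires_grad=True)
    margins = adaptive_margins(chosen_emb, rejected_emb,
                               chosen_r.detach(), rejected_r.detach())
    loss = adaptive_margin_bt_loss(chosen_r, rejected_r, margins)
    loss.backward()
    print(loss.item(), margins)

In this sketch the OT coupling redistributes a fixed budget of mass toward hard pairs, so the resulting margins are relative within the batch rather than absolute thresholds; how the paper actually weights similarity against reward gaps is specified in its cost matrix design.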

| Search Query: ArXiv Query: search_query=au:"Zhuo Li"&id_list=&start=0&max_results=3
