Kavli Affiliate: Cheng Peng
| First 5 Authors: Shuyao Xu, Cheng Peng, Jiangxuan Long, Weidi Xu, Wei Chu
| Summary:
Recent advances in model distillation demonstrate that data from advanced
reasoning models (e.g., DeepSeek-R1, OpenAI’s o1) can effectively transfer
complex reasoning abilities to smaller, efficient student models. However,
standard practices employ rejection sampling, discarding incorrect reasoning
examples that are valuable yet often underutilized data. This paper addresses the
critical question: How can both positive and negative distilled reasoning
traces be effectively leveraged to maximize LLM reasoning performance in an
offline setting? To this end, we propose Reinforcement Distillation (REDI), a
two-stage framework. Stage 1 learns from positive traces via Supervised
Fine-Tuning (SFT). Stage 2 further refines the model using both positive and
negative traces through our proposed REDI objective. This novel objective is a
simple, reference-free loss function that outperforms established methods like
DPO and SimPO in this distillation context. Our empirical evaluations
demonstrate that REDI outperforms the baselines of Rejection Sampling SFT and
SFT combined with DPO/SimPO on mathematical reasoning tasks. Notably, the
Qwen-REDI-1.5B model, post-trained on just 131k positive and negative examples
from the openly available Open-R1 dataset, achieves an 83.1% score on MATH-500 (pass@1).
Its performance matches or surpasses that of DeepSeek-R1-Distill-Qwen-1.5B (a
model post-trained on 800k proprietary examples) across various mathematical
reasoning benchmarks, establishing a new state-of-the-art for 1.5B models
post-trained offline with openly available data.
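The summary describes Stage 2 as a reference-free loss that learns from paired positive and negative distilled traces, but it does not state the objective's exact form. The indented snippet below is therefore only an illustrative sketch of that general idea, written as a SimPO-style, length-normalized pairwise contrast; the function name reference_free_pair_loss and its arguments are hypothetical, and this is not the published REDI objective.

    import torch
    import torch.nn.functional as F

    def reference_free_pair_loss(pos_logps, pos_lens, neg_logps, neg_lens,
                                 beta=2.0, margin=0.0):
        """Illustrative reference-free pairwise loss over distilled traces.

        pos_logps / neg_logps: summed token log-probabilities the student model
        assigns to the positive / negative reasoning trace for the same prompt
        (shape: [batch]).
        pos_lens / neg_lens: trace lengths in tokens, for length normalization.
        NOTE: this is a SimPO-style stand-in, not the REDI objective itself,
        whose exact form is not given in the summary above.
        """
        pos_avg = pos_logps / pos_lens  # length-normalized log-likelihoods
        neg_avg = neg_logps / neg_lens
        # Encourage the positive trace to score above the negative one,
        # with no reference model appearing in the loss.
        return -F.logsigmoid(beta * (pos_avg - neg_avg) - margin).mean()

    # Example with dummy tensors:
    # loss = reference_free_pair_loss(torch.tensor([-120.0]), torch.tensor([300.0]),
    #                                 torch.tensor([-90.0]),  torch.tensor([250.0]))

The sketch keeps the reference-free property highlighted in the summary: only the student's own log-likelihoods on positive and negative traces enter the loss, with no frozen reference policy.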
| Search Query: ArXiv Query: search_query=au:"Cheng Peng"&id_list=&start=0&max_results=3