SeqPO-SiMT: Sequential Policy Optimization for Simultaneous Machine Translation

Kavli Affiliate: Ting Xu

| First 5 Authors: Ting Xu, Zhichao Huang, Jiankai Sun, Shanbo Cheng, Wai Lam

| Summary:

We present Sequential Policy Optimization for Simultaneous Machine
Translation (SeqPO-SiMT), a new policy optimization framework that formulates
the simultaneous machine translation (SiMT) task as a sequential
decision-making problem, with a tailored reward that improves translation
quality while reducing latency. In contrast to popular Reinforcement Learning
from Human Feedback (RLHF) methods, such as PPO and DPO, which are typically
applied to single-step tasks, SeqPO-SiMT effectively tackles the multi-step
SiMT task. This intuitive framework lets SiMT LLMs simulate and refine the
translation process under that reward. We conduct experiments on six datasets
from diverse domains for En-to-Zh and Zh-to-En SiMT, demonstrating that
SeqPO-SiMT consistently achieves significantly higher translation quality at
lower latency. In particular, SeqPO-SiMT outperforms the supervised
fine-tuning (SFT) model by 1.13 COMET points while reducing Average Lagging by
6.17 on the NEWSTEST2021 En-to-Zh dataset. Although SiMT operates with far
less context than offline translation, the SiMT results of SeqPO-SiMT on a 7B
LLM surprisingly rival the offline translation of high-performing LLMs,
including Qwen-2.5-7B-Instruct and LLaMA-3-8B-Instruct.
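
To make the sequential framing concrete, below is a minimal Python sketch of one simulated READ/WRITE episode with an episode-level reward that trades translation quality against latency. This is an illustration rather than the paper's implementation: the `policy` interface, the `"<READ>"`/`"<EOS>"` action tokens, the `quality_score` stub, and the `latency_weight` of 0.1 are all assumptions; only the Average Lagging computation follows that metric's standard definition.

```python
from typing import Callable, List, Tuple


def average_lagging(g: List[int], src_len: int, tgt_len: int) -> float:
    """Average Lagging (AL): mean of g(t) - (t - 1) / gamma over the first
    tau target tokens, where g(t) is the number of source tokens read
    before emitting target token t and gamma = tgt_len / src_len."""
    if not g:
        return 0.0
    gamma = tgt_len / src_len
    # tau: index of the first target token emitted after the full source is read.
    tau = next((t for t, gt in enumerate(g, start=1) if gt >= src_len), len(g))
    return sum(g[t - 1] - (t - 1) / gamma for t in range(1, tau + 1)) / tau


def rollout(
    policy: Callable[[List[str], List[str]], str],
    source: List[str],
    quality_score: Callable[[List[str]], float],  # placeholder quality metric
    latency_weight: float = 0.1,                  # illustrative trade-off weight
) -> Tuple[List[str], float]:
    """Run one READ/WRITE episode and return (hypothesis, episode reward).

    At each step the policy sees the source prefix read so far plus the
    partial hypothesis, and returns either "<READ>" (consume one source
    token), "<EOS>" (stop), or a target token to write.
    """
    read, hyp, lags = 0, [], []
    for _ in range(4 * len(source) + 8):  # hard cap keeps the episode finite
        action = policy(source[:read], hyp)
        if action == "<READ>":
            read = min(read + 1, len(source))
        elif action == "<EOS>":
            break
        else:
            hyp.append(action)  # WRITE one target token
            lags.append(read)   # g(t): source tokens read before this token
    # Episode-level reward: translation quality minus a latency penalty.
    al = average_lagging(lags, len(source), len(hyp))
    return hyp, quality_score(hyp) - latency_weight * al
```

Because the reward is assigned to the whole episode rather than to a single step, the sketch reflects the multi-step setting that the summary contrasts with single-step PPO/DPO; in practice the quality term would come from a reference-based or learned metric such as COMET rather than a stub.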

| Search Query: ArXiv Query: search_query=au:"Ting Xu"&id_list=&start=0&max_results=3
