Reinforcement Tuning for Detecting Stances and Debunking Rumors Jointly with Large Language Models

Kavli Affiliate: Wei Gao

| First 5 Authors: Ruichao Yang, Wei Gao, Jing Ma, Hongzhan Lin, Bo Wang

| Summary:

Learning multi-task models that jointly detect stance and verify rumors is
challenging because it requires stance labels at the post level and veracity
labels at the claim level, both of which are difficult to obtain. To address
this issue, we leverage large language models (LLMs) as foundation annotators
for the joint stance detection (SD) and rumor verification (RV) tasks, dubbed
JSDRV. We introduce a novel reinforcement tuning framework to enhance the
joint predictive capabilities of the LLM-based SD and RV components.
Specifically, we devise a policy for selecting LLM-annotated data at the two
levels, employing a hybrid reward mechanism to choose high-quality labels for
effective LLM fine-tuning on both tasks. Results demonstrate that JSDRV
improves the capabilities of LLMs on the joint tasks, not only outperforming
state-of-the-art methods but also generalizing to non-LLMs accommodated as
task models.
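
To make the selection idea concrete, below is a minimal, hypothetical Python sketch of a hybrid-reward data selection loop of the kind the summary describes. It is not the authors' JSDRV implementation: all names (Post, Claim, hybrid_reward, select_posts, tune_threshold) are placeholders, the reward combines a toy stance/veracity consistency signal with a stand-in for task-model confidence, and a simple threshold search stands in for the actual reinforcement tuning of the selection policy.

```python
# Hypothetical sketch of hybrid-reward selection of LLM-annotated labels.
# Not the JSDRV authors' code; all names and heuristics are illustrative.
import random
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    llm_stance: str    # stance label proposed by the LLM annotator (post level)
    confidence: float  # annotator's self-reported confidence in [0, 1]


@dataclass
class Claim:
    text: str
    posts: list
    llm_veracity: str  # veracity label proposed by the LLM annotator (claim level)


def hybrid_reward(claim: Claim, selected: list, task_confidence: float) -> float:
    """Combine (i) consistency between selected post stances and the claim-level
    label with (ii) the task model's confidence on the selected subset."""
    if not selected:
        return 0.0
    support = sum(p.llm_stance == "support" for p in selected) / len(selected)
    # Toy consistency signal: a "true" claim should be backed mostly by
    # supporting posts, a "false" claim mostly by denying posts.
    consistency = support if claim.llm_veracity == "true" else 1.0 - support
    return 0.5 * consistency + 0.5 * task_confidence


def select_posts(claim: Claim, threshold: float) -> list:
    """Selection policy: keep posts whose annotator confidence clears a
    threshold (fixed here; learnable in a real policy)."""
    return [p for p in claim.posts if p.confidence >= threshold]


def tune_threshold(claims: list, candidates=(0.3, 0.5, 0.7), trials: int = 100) -> float:
    """Pick the threshold maximizing expected hybrid reward; this grid search
    stands in for the policy update of reinforcement tuning."""
    best, best_reward = candidates[0], float("-inf")
    for t in candidates:
        total = 0.0
        for _ in range(trials):
            claim = random.choice(claims)
            selected = select_posts(claim, t)
            # Placeholder for the fine-tuned task model's confidence score.
            task_conf = random.uniform(0.4, 0.9)
            total += hybrid_reward(claim, selected, task_conf)
        if total > best_reward:
            best, best_reward = t, total
    return best


if __name__ == "__main__":
    claims = [
        Claim("example claim", [Post("agree", "support", 0.8),
                                Post("doubt", "deny", 0.4)], "true"),
    ]
    print("selected threshold:", tune_threshold(claims))
```

In the paper's setting, the labels selected this way at both the post and claim levels would then be used to fine-tune the SD and RV task models, closing the loop between annotation quality and downstream performance.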

| Search Query: ArXiv Query: search_query=au:"Wei Gao"&id_list=&start=0&max_results=3
