Multi2: Multi-Agent Test-Time Scalable Framework for Multi-Document Processing

Kavli Affiliate: Xiang Zhang

| First 5 Authors: Juntai Cao, Xiang Zhang, Raymond Li, Chuyuan Li, Shafiq Joty

| Summary:

Recent advances in test-time scaling have shown promising results in
improving Large Language Model (LLM) performance through strategic
computation allocation during inference. While this approach has demonstrated
strong performance improvements on logical and mathematical reasoning tasks,
its application to natural language generation (NLG), especially summarization,
has yet to be explored. Multi-Document Summarization (MDS) is a challenging
task that focuses on extracting and synthesizing useful information from
multiple lengthy documents. Unlike reasoning tasks, MDS requires a more nuanced
approach to prompt design and ensembling, as there is no single "best" prompt
that satisfies diverse summarization requirements. To address this, we propose
a novel framework that leverages inference-time scaling for this task.
Specifically, we take a prompt-ensemble approach: we use varied prompts to
first generate candidate summaries and then combine them with an aggregator to
produce a refined summary. We also introduce two new evaluation metrics, the
Consistency-Aware Preference (CAP) score and the LLM Atom-Content-Unit (ACU)
score, to enhance the LLM's contextual understanding while mitigating its
positional bias. Extensive experiments demonstrate the effectiveness of our
approach in improving summary quality while identifying and analyzing the
scaling boundaries of summarization tasks.
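
The two-stage pipeline the abstract describes (diverse prompts produce candidate summaries, then an aggregator pass fuses them) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `complete` callable stands in for any LLM completion API, and all prompt wording here is assumed for the example.

```python
from typing import Callable, List

def ensemble_summarize(
    documents: List[str],
    prompts: List[str],
    complete: Callable[[str], str],
) -> str:
    """Prompt-ensemble MDS sketch: one candidate summary per prompt,
    then a single aggregation pass over all candidates."""
    joined_docs = "\n\n".join(documents)

    # Stage 1: each prompt in the ensemble yields its own candidate summary
    # of the full document set.
    candidates = [
        complete(f"{prompt}\n\nDocuments:\n{joined_docs}")
        for prompt in prompts
    ]

    # Stage 2: an aggregator prompt fuses the candidates into one refined
    # summary, with the source documents available for grounding.
    numbered = "\n".join(
        f"Candidate {i + 1}: {c}" for i, c in enumerate(candidates)
    )
    aggregator_prompt = (
        "Merge the candidate summaries below into a single refined summary, "
        "keeping only information supported by the source documents.\n\n"
        f"{numbered}\n\nDocuments:\n{joined_docs}"
    )
    return complete(aggregator_prompt)
```

Under this reading, test-time scaling corresponds to growing the prompt ensemble (and thus the candidate pool) at inference, while the aggregator keeps the final output a single summary.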

| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3