Agentar-DeepFinance-300K: A Large-Scale Financial Dataset via Systematic Chain-of-Thought Synthesis Optimization

Kavli Affiliate: Lihong Wang

| First 5 Authors: Xiaoke Zhao

| Summary:

Recent advancements in large language models (LLMs) have demonstrated
remarkable general reasoning capabilities, holding significant potential for
applications in the financial domain, a field that requires robust and reliable
reasoning. Distilling high-quality chain-of-thought (CoT) rationales from
advanced general reasoning models has been shown to offer a promising and
efficient path toward financial reasoning models. However, existing CoT
synthesis methods suffer from shallow CoT sampling, leaving unexplored the
question of how to construct a well-designed knowledge space for financial
reasoning. In this paper, we present
Agentar-DeepFinance-300K, a large-scale financial reasoning dataset
characterized by its systematic CoT synthesis optimization. We first introduce
a comprehensive CoT synthesis pipeline featuring Multi-perspective Knowledge
Extraction (MKE) and Self-Corrective Rewriting (SCR) to generate exhaustive and
deep financial reasoning trajectories. Furthermore, a systematic investigation,
termed CoT Cube, is conducted to analyze critical factors that influence CoT
effectiveness, such as necessity, length, and synthesizer, yielding valuable
insights for high-quality financial CoT construction. Experiments demonstrate
that models trained on our Agentar-DeepFinance-300K achieve significant
improvements on financial benchmarks. We publicly release
Agentar-DeepFinance-300K, hoping to advance research on financial
reasoning models.

| Search Query: ArXiv Query: search_query=au:"Lihong Wang"&id_list=&start=0&max_results=3