Kavli Affiliate: Wei Gao
| Summary:
Chain-of-Thought (CoT) prompting has improved LLM reasoning, but models often generate explanations that appear coherent while containing unfaithful intermediate steps. Existing self-evaluation approaches are prone to inherent biases: the model may confidently endorse coherence even when the step-to-step implication is not valid, leading to unreliable faithfulness evaluation. We propose FACT-E, a causality-inspired framework for evaluating CoT quality. FACT-E uses controlled perturbations as an instrumental signal to separate genuine step-to-step dependence from bias-driven artifacts, producing more reliable faithfulness estimates (intra-chain faithfulness). To select trustworthy trajectories, FACT-E jointly considers intra-chain faithfulness and CoT-to-answer consistency, ensuring that selected chains are both faithful internally and supportive of the correct final answer. Experiments on GSM8K, MATH, and CommonsenseQA show that FACT-E improves reasoning-trajectory selection and yields stronger in-context learning exemplars. FACT-E also reliably detects flawed reasoning under noisy conditions, providing a robust metric for trustworthy LLM reasoning.
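The selection procedure the abstract describes (combining perturbation-based intra-chain faithfulness with CoT-to-answer consistency) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the averaged-drop faithfulness estimate, and the linear weighting `alpha` are all assumptions.

```python
# Hypothetical sketch of FACT-E-style trajectory selection.
# All names and scoring choices here are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Trajectory:
    steps: List[str]   # intermediate CoT steps
    answer: str        # final answer the chain arrives at

def intra_chain_faithfulness(base_scores: List[float],
                             perturbed_scores: List[float]) -> float:
    """Estimate step-to-step dependence via controlled perturbations:
    if a step genuinely depends on its premise, its support score should
    drop when the premise is perturbed; bias-driven endorsement stays flat.
    Here we use the average drop as a simple (assumed) estimator."""
    drops = [b - p for b, p in zip(base_scores, perturbed_scores)]
    return sum(drops) / len(drops)

def select_trajectory(trajectories: List[Trajectory],
                      faith_scores: List[float],
                      consistency_scores: List[float],
                      alpha: float = 0.5) -> Trajectory:
    """Pick the chain that jointly maximizes intra-chain faithfulness
    and CoT-to-answer consistency (linear weighting is an assumption)."""
    combined = [alpha * f + (1 - alpha) * c
                for f, c in zip(faith_scores, consistency_scores)]
    best = max(range(len(trajectories)), key=lambda i: combined[i])
    return trajectories[best]
```

A chain scoring high on only one axis (e.g. internally faithful but inconsistent with its final answer) is penalized by the joint score, matching the abstract's requirement that selected chains be both faithful internally and supportive of the correct answer.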
| Search Query: arXiv Query: search_query=au:"Gao Wei"&id_list=&start=0&max_results=10
RECENT NON-PEER REVIEWED REPORTS FROM KAVLI INSTITUTE FACULTY AND AFFILIATES