Kavli Affiliate: Cheng Peng | First 5 Authors: Cheng Peng Huang, Hao-Yuan Chen | Summary: Large language models (LLMs) demonstrate strong capabilities in natural language processing but remain prone to hallucinations, generating factually incorrect or fabricated content. This issue undermines their reliability, particularly in high-stakes domains such as healthcare and legal advisory. To […]
Delta — Contrastive Decoding Mitigates Text Hallucinations in Large Language Models
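
The listing names contrastive decoding as the mitigation technique but the truncated abstract does not spell out the mechanism. Below is a minimal sketch of a generic contrastive decoding step in Python, assuming two logit sources from the same model (the full prompt versus a perturbed or masked variant, as the name "Delta" suggests). The function names, the mixing weight `alpha`, and the perturbation strategy are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def contrastive_decode_step(
    logits_full: np.ndarray,       # logits from the model given the full prompt
    logits_perturbed: np.ndarray,  # logits given a masked/perturbed prompt (assumption)
    alpha: float = 0.5,            # contrast strength; illustrative value
) -> int:
    """One greedy decoding step under a generic contrastive rule:
    amplify tokens the full prompt supports relative to the perturbed prompt.

        combined = (1 + alpha) * logits_full - alpha * logits_perturbed
    """
    combined = (1.0 + alpha) * logits_full - alpha * logits_perturbed
    return int(np.argmax(combined))

# Toy usage with random logits standing in for real model calls (hypothetical):
rng = np.random.default_rng(0)
vocab_size = 8
logits_full = rng.normal(size=vocab_size)
logits_perturbed = rng.normal(size=vocab_size)
token_id = contrastive_decode_step(logits_full, logits_perturbed, alpha=0.5)
print("chosen token id:", token_id)
```

The intuition behind this family of rules is that hallucination-prone predictions tend to survive even when grounding context is degraded, so subtracting the perturbed-prompt distribution suppresses them while context-supported tokens gain relative weight.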