Kavli Affiliate: Jia Liu | First Author: Shuliang Liu | Summary: The widespread deployment of large language models (LLMs) across critical domains has amplified the societal risks posed by algorithmically generated misinformation. Unlike traditional false content, LLM-generated misinformation can be self-reinforcing, highly plausible, and capable of rapid propagation across multiple […]
Continue reading: A Survey on Proactive Defense Strategies Against Misinformation in Large Language Models