Kavli Affiliate: Yi Zhou | First 5 Authors: Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou, Ling Cai, Nathalie Baracaldo | Summary: The growing use of large language models (LLMs) trained by third parties raises serious concerns about the security vulnerabilities of LLMs. It has been demonstrated that malicious actors can covertly exploit these vulnerabilities in LLMs […]
Title: Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks