Towards Explainable Harmful Meme Detection through Multimodal Debate between Large Language Models

Kavli Affiliate: Wei Gao

| First 5 Authors: Hongzhan Lin, Ziyang Luo, Wei Gao, Jing Ma, Bo Wang

| Summary:

Social media is flooded with Internet memes, making it essential to clearly
understand and effectively identify harmful ones. This task presents a
significant challenge due to the implicit meaning embedded in memes, which is
not explicitly conveyed through the surface text and image. However, existing
harmful meme detection methods do not present readable explanations that unveil
such implicit meaning to support their detection decisions. In this paper, we
propose an explainable approach to detect harmful memes, achieved through
reasoning over conflicting rationales from both harmless and harmful positions.
Specifically, inspired by the powerful capacity of Large Language Models (LLMs)
for text generation and reasoning, we first elicit a multimodal debate between
LLMs to generate explanations derived from the contradictory arguments.
Then we propose to fine-tune a small language model as the debate judge for
harmfulness inference, to facilitate multimodal fusion between the harmfulness
rationales and the intrinsic multimodal information within memes. In this way,
our model is empowered to perform dialectical reasoning over intricate and
implicit harm-indicative patterns, utilizing multimodal explanations
originating from both harmless and harmful arguments. Extensive experiments on
three public meme datasets demonstrate that our harmful meme detection approach
achieves much better performance than state-of-the-art methods and exhibits a
superior capacity for explaining meme harmfulness in support of its predictions.
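
To make the two-stage pipeline described in the summary more concrete, here is a minimal sketch of a "debate then judge" flow. Everything in it — the prompts, the `Meme` fields, and the function names `query_llm`, `debate`, and `build_judge_input` — is a hypothetical placeholder assumed for illustration, not the authors' released implementation.

```python
# Illustrative sketch only: stage 1 elicits conflicting rationales from
# harmless and harmful positions; stage 2 fuses them with the meme's own
# content for a (separately fine-tuned) debate judge to classify.
from dataclasses import dataclass

HARMLESS_PROMPT = ("Argue that the following meme is harmless.\n"
                   "Meme text: {text}\nImage description: {caption}")
HARMFUL_PROMPT = ("Argue that the following meme is harmful.\n"
                  "Meme text: {text}\nImage description: {caption}")


@dataclass
class Meme:
    text: str      # surface text extracted from the meme (e.g. via OCR)
    caption: str   # image description (e.g. from a captioning model)


def query_llm(prompt: str) -> str:
    """Stand-in for a call to any instruction-tuned LLM; replace with a real client."""
    return f"[LLM rationale for prompt: {prompt[:40]}...]"


def debate(meme: Meme) -> tuple[str, str]:
    """Stage 1: generate contradictory rationales (harmless vs. harmful)."""
    harmless = query_llm(HARMLESS_PROMPT.format(text=meme.text, caption=meme.caption))
    harmful = query_llm(HARMFUL_PROMPT.format(text=meme.text, caption=meme.caption))
    return harmless, harmful


def build_judge_input(meme: Meme, harmless: str, harmful: str) -> str:
    """Stage 2 input: combine both rationales with the meme's own content so a
    small fine-tuned language model (the debate judge) can infer harmfulness."""
    return (f"Meme text: {meme.text}\nImage: {meme.caption}\n"
            f"Harmless argument: {harmless}\nHarmful argument: {harmful}")


if __name__ == "__main__":
    meme = Meme(text="example meme text", caption="a person holding a sign")
    harmless, harmful = debate(meme)
    print(build_judge_input(meme, harmless, harmful))
```

In the paper the judge is a fine-tuned small language model that also fuses the meme's visual features with the rationales; the plain-text concatenation above merely stands in for that multimodal fusion step.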

| Search Query: ArXiv Query: search_query=au:"Wei Gao"&id_list=&start=0&max_results=3
