Kavli Affiliate: Cheng Peng
| First 5 Authors: Cheng Peng, Ke Chen, Lidan Shou, Gang Chen
| Summary:
Multi-modal multi-label emotion recognition (MMER) aims to identify relevant
emotions from multiple modalities. The core challenge of MMER is to
effectively capture discriminative features for multiple labels from
heterogeneous data. Recent studies have mainly explored various fusion
strategies that integrate multi-modal information into a unified
representation shared across all labels. However, such a learning scheme not
only overlooks the specificity of each modality but also fails to capture
individual discriminative features for different labels. Moreover,
dependencies between labels and modalities cannot be effectively modeled.
To address these issues, this paper presents ContrAstive feature
Reconstruction and AggregaTion (CARAT) for the MMER task. Specifically, we
devise a reconstruction-based fusion mechanism to better model fine-grained
modality-to-label dependencies by contrastively learning modal-separated and
label-specific features.
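
As a rough illustration of this component, below is a minimal, hypothetical
PyTorch sketch: per-label heads yield modal-separated, label-specific
embeddings, a toy cross-modal reconstruction term stands in for the
reconstruction-based fusion, and a standard SupCon-style loss stands in for
the contrastive objective. All names, shapes, and loss details here are
illustrative assumptions, not CARAT's actual implementation (see the linked
repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelSpecificEncoder(nn.Module):
    """One linear head per label, applied to one modality's features."""
    def __init__(self, in_dim: int, emb_dim: int, num_labels: int):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(in_dim, emb_dim) for _ in range(num_labels)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> (batch, num_labels, emb_dim)
        return torch.stack([head(x) for head in self.heads], dim=1)

def reconstruction_loss(z_text, z_audio, dec_t, dec_a):
    """Toy stand-in for reconstruction-based fusion: each modality's
    label-specific features should be recoverable from the other's."""
    return F.mse_loss(dec_t(z_audio), z_text) + F.mse_loss(dec_a(z_text), z_audio)

def supcon_loss(emb: torch.Tensor, label_ids: torch.Tensor, tau: float = 0.1):
    """SupCon-style loss over flattened (sample, label) embeddings:
    features in the same label slot are positives for one another,
    across samples and modalities."""
    emb = F.normalize(emb, dim=-1)
    n = emb.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=emb.device)
    pos = label_ids.unsqueeze(0).eq(label_ids.unsqueeze(1)) & ~self_mask
    sim = (emb @ emb.t() / tau).masked_fill(self_mask, float("-inf"))
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # Average log-probability over each row's positives.
    return -((log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)).mean()

# Toy usage: two modalities (text, audio), 4 labels, batch of 8.
text_enc = LabelSpecificEncoder(300, 64, 4)
audio_enc = LabelSpecificEncoder(74, 64, 4)
dec_t, dec_a = nn.Linear(64, 64), nn.Linear(64, 64)
z_t, z_a = text_enc(torch.randn(8, 300)), audio_enc(torch.randn(8, 74))
emb = torch.cat([z_t, z_a], dim=0).reshape(-1, 64)  # (2*8*4, 64)
ids = torch.arange(4).repeat(16)                    # label id per row
loss = reconstruction_loss(z_t, z_a, dec_t, dec_a) + supcon_loss(emb, ids)
```
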
To further exploit the complementarity among modalities, we introduce a
shuffle-based aggregation strategy to enrich co-occurrence collaboration among
labels.
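
Likewise, a minimal, hypothetical sketch of the shuffle idea: each label slot
is permuted independently across the batch, so synthetic samples mix
label-specific features (and the matching targets) from different real
samples, enriching the label co-occurrence patterns seen during training.
Again, every name and detail below is an illustrative assumption rather than
CARAT's actual aggregation.

```python
import torch
import torch.nn.functional as F

def shuffle_label_slots(feats: torch.Tensor, labels: torch.Tensor):
    """feats: (batch, num_labels, emb_dim) label-specific features;
    labels: (batch, num_labels) multi-hot targets. Each label slot is
    permuted independently across the batch, creating synthetic samples
    with new label co-occurrence combinations."""
    shuf_f, shuf_y = feats.clone(), labels.clone()
    for l in range(feats.size(1)):
        perm = torch.randperm(feats.size(0), device=feats.device)
        shuf_f[:, l] = feats[perm, l]
        shuf_y[:, l] = labels[perm, l]  # keep targets consistent per slot
    return shuf_f, shuf_y

# Toy usage: train a per-slot scorer on originals plus shuffled samples.
feats = torch.randn(8, 4, 64)
labels = torch.randint(0, 2, (8, 4)).float()
shuf_f, shuf_y = shuffle_label_slots(feats, labels)
scorer = torch.nn.Linear(64, 1)          # shared head, one score per slot
logits = scorer(shuf_f).squeeze(-1)      # (batch, num_labels)
loss = F.binary_cross_entropy_with_logits(logits, shuf_y)
```
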
Experiments on two benchmark datasets, CMU-MOSEI and M3ED, demonstrate the
effectiveness of CARAT over state-of-the-art methods. Code is available at
https://github.com/chengzju/CARAT.
| Search Query: ArXiv Query: search_query=au:"Cheng Peng"&id_list=&start=0&max_results=3