Multimodal Channel-Mixing: Channel and Spatial Masked AutoEncoder on Facial Action Unit Detection

Kavli Affiliate: Xiang Zhang

| First 5 Authors: Xiang Zhang, Huiyuan Yang, Taoyue Wang, Xiaotian Li, Lijun Yin

| Summary:

Recent studies have focused on utilizing multi-modal data to develop robust
models for facial Action Unit (AU) detection. However, the heterogeneity of
multi-modal data poses challenges in learning effective representations. One
such challenge is extracting relevant features from multiple modalities using a
single feature extractor. Moreover, previous studies have not fully explored
the potential of multi-modal fusion strategies. In contrast to the extensive
work on late fusion, early fusion for exploring channel information has
received limited investigation. This paper presents a novel multi-modal
reconstruction network, named Multimodal Channel-Mixing (MCM), as a pre-trained
model to learn robust representations that facilitate multi-modal fusion. The
approach follows an early fusion setup, integrating a Channel-Mixing module,
where two of the five input channels are randomly dropped. The dropped channels
are then reconstructed from the remaining channels with a masked autoencoder.
This module not only reduces channel redundancy but also promotes multi-modal
learning and reconstruction, resulting in robust feature learning.
The encoder is fine-tuned on a downstream task of automatic facial action unit
detection. Pre-training experiments were conducted on BP4D+, followed by
fine-tuning on BP4D and DISFA to assess the effectiveness and robustness of the
proposed framework. The results demonstrate that our method matches or
surpasses the performance of state-of-the-art baseline methods.
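
To make the channel-dropping idea concrete, here is a minimal PyTorch sketch, not the authors' released implementation: it masks two of five input channels per sample and trains a toy autoencoder to reconstruct them, with the loss restricted to the dropped channels. The five-channel stacking (e.g., RGB plus thermal and depth maps), the tiny network, and all names are illustrative assumptions.

```python
# A minimal sketch of the channel-dropping pretext task described above;
# the five-channel layout, the toy encoder/decoder, and all names are
# illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn


def drop_channels(x: torch.Tensor, num_drop: int = 2):
    """Randomly zero out `num_drop` of the C input channels per sample.

    Returns the masked tensor and a (B, C) boolean mask where True marks
    a dropped channel.
    """
    b, c, _, _ = x.shape
    perm = torch.rand(b, c, device=x.device).argsort(dim=1)   # random channel order
    dropped = torch.zeros(b, c, dtype=torch.bool, device=x.device)
    rows = torch.arange(b, device=x.device).unsqueeze(1)
    dropped[rows, perm[:, :num_drop]] = True                  # first `num_drop` are hidden
    x_masked = x.masked_fill(dropped[:, :, None, None], 0.0)
    return x_masked, dropped


class TinyChannelMixAE(nn.Module):
    """Toy encoder-decoder that predicts all channels from the masked input."""

    def __init__(self, channels: int = 5, width: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(width, channels, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = TinyChannelMixAE()
    x = torch.randn(4, 5, 64, 64)               # batch of 5-channel face crops
    x_masked, dropped = drop_channels(x)        # hide 2 of the 5 channels per sample
    recon = model(x_masked)
    # Reconstruction loss is computed only on the channels that were dropped.
    mask = dropped[:, :, None, None].expand_as(x).float()
    loss = ((recon - x) ** 2 * mask).sum() / mask.sum()
    print(f"masked-channel reconstruction loss: {loss.item():.4f}")
```

This sketch covers only the masking-and-reconstruction pretext step; in the paper, the pre-trained encoder is subsequently fine-tuned for facial action unit detection.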

| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3
