Kavli Affiliate: Xiang Zhang
| First 5 Authors: Xiang Zhang, Taoyue Wang, Xiaotian Li, Huiyuan Yang, Lijun Yin
| Summary:
Contrastive learning has shown promising potential for learning robust
representations by utilizing unlabeled data. However, constructing effective
positive-negative pairs for contrastive learning on facial behavior datasets
remains challenging. This is because such pairs inevitably encode
subject-identity information, and randomly constructed pairs may push similar
facial images apart due to the limited number of subjects in facial behavior
datasets. To address this issue, we propose to utilize activity descriptions:
coarse-grained annotations available in some datasets that convey high-level
semantic information about the image sequences yet are often neglected in
previous studies. More specifically, we introduce a two-stage Contrastive
Learning with Text-Embedded framework for Facial behavior understanding
(CLEF). The first stage is a weakly-supervised contrastive
learning method that learns representations from positive-negative pairs
constructed using coarse-grained activity information. The second stage trains
the model to recognize facial expressions or facial action units (AUs) by
maximizing the similarity between each image and its corresponding text label
name.
The proposed CLEF achieves state-of-the-art performance on three in-the-lab
datasets for AU recognition and three in-the-wild datasets for facial
expression recognition.
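
Below is a minimal, hypothetical sketch (in PyTorch; not the authors' released
code) of the two training signals summarized above: a stage-1 weakly-supervised
contrastive loss whose positives are samples sharing the same coarse-grained
activity description, and a stage-2 image-to-text-label alignment loss. The
function names, tensor shapes, and temperature values are illustrative
assumptions.

import torch
import torch.nn.functional as F

def activity_contrastive_loss(feats, activity_ids, temperature=0.1):
    # Stage 1 (weakly supervised, a sketch): samples that share the same
    # coarse-grained activity description are positives; all other samples
    # in the batch act as negatives.
    # feats: (N, D) L2-normalized embeddings; activity_ids: (N,) integer codes.
    sim = feats @ feats.t() / temperature
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    pos_mask = (activity_ids.unsqueeze(0) == activity_ids.unsqueeze(1)) & ~self_mask

    # Denominator sums over every sample except the anchor itself.
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    # Average the log-probability over each anchor's positives.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()

def image_text_label_loss(img_feats, text_feats, targets, temperature=0.07):
    # Stage 2 (a sketch): pull each image toward the text embedding of its
    # label name (single-label case, e.g. expression classes); multi-label AU
    # recognition would instead use a per-class binary objective.
    # img_feats: (N, D); text_feats: (C, D) embeddings of the C label-name
    # prompts; targets: (N,) class indices.
    logits = img_feats @ text_feats.t() / temperature
    return F.cross_entropy(logits, targets)

In such a setup, batches would typically be sampled so that each anchor has at
least one other sample from the same activity; anchors without any positive are
simply skipped by the stage-1 loss above.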
| Search Query: [#feed_custom_title]