FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Kai Wang, Bo Zhao, Xiangyu Peng, Zheng Zhu, Jiankang Deng

| Summary:

Face recognition, as one of the most successful applications in artificial
intelligence, has been widely used in security, administration, advertising,
and healthcare. However, the privacy issues of public face datasets have
attracted increasing attention in recent years. Previous works simply mask most
areas of faces or synthesize samples with generative models to construct
privacy-preserving face datasets, overlooking the trade-off between privacy
protection and data utility. In this paper, we propose FaceMAE, a novel
framework in which face privacy and recognition performance are considered
simultaneously. First, randomly masked face images are used to train the
reconstruction module in FaceMAE. We tailor an instance relation matching
(IRM) module to minimize the distribution gap between real faces and their
FaceMAE reconstructions. During deployment, the trained FaceMAE reconstructs
images from masked faces of unseen identities without any extra training. The
risk of privacy leakage is measured via face retrieval between the
reconstructed and original datasets. Experiments show that the identities of
reconstructed images are difficult to retrieve. We also conduct extensive
privacy-preserving face recognition experiments on several public face
datasets (i.e., CASIA-WebFace and WebFace260M). Compared to previous
state-of-the-art methods, FaceMAE consistently reduces the error rate by at
least 50% on LFW, CFP-FP, and AgeDB.
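
The pipeline sketched in the summary, masking random face patches, reconstructing them, and aligning the relational structure of real and reconstructed features, can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the patch size, mask ratio, toy encoder/decoder, and the cosine-similarity form of the IRM-style objective are all illustrative assumptions.

import torch
import torch.nn.functional as F

def random_mask_patches(images, patch=16, mask_ratio=0.75):
    # Zero out a random subset of non-overlapping patches in each image.
    b, c, h, w = images.shape
    ph, pw = h // patch, w // patch
    keep = (torch.rand(b, ph * pw, device=images.device) > mask_ratio).float()
    mask = F.interpolate(keep.view(b, 1, ph, pw), scale_factor=patch, mode="nearest")
    return images * mask, mask

def instance_relation_loss(feat_real, feat_rec):
    # Match the pairwise cosine-similarity structure of real and reconstructed
    # features within a batch (an IRM-style relational objective; assumed form).
    def relations(f):
        f = F.normalize(f, dim=1)
        return f @ f.t()
    return F.mse_loss(relations(feat_rec), relations(feat_real))

if __name__ == "__main__":
    imgs = torch.rand(8, 3, 112, 112)              # typical aligned face crops
    masked, _ = random_mask_patches(imgs)

    # Stand-ins for the reconstruction module and a face feature encoder.
    reconstructor = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(16, 3, 3, padding=1),
    )
    encoder = torch.nn.Sequential(
        torch.nn.Conv2d(3, 32, 7, stride=4), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    )

    rec = reconstructor(masked)
    loss = F.mse_loss(rec, imgs) + instance_relation_loss(encoder(imgs), encoder(rec))
    loss.backward()
    print(f"total loss: {loss.item():.4f}")

The privacy-leakage measurement described in the summary could be approximated in the same spirit by nearest-neighbour retrieval over encoder features of the reconstructed and original datasets, counting how often the correct identity is returned.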

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=10
