Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification

Kavli Affiliate: Jiansheng Chen

| First 5 Authors: Weitao Wan, Jiansheng Chen, Cheng Yu, Tong Wu, Yuanyi Zhong

| Summary:

The softmax cross-entropy loss function has been widely used to train deep
models for various tasks. In this work, we propose a Gaussian mixture (GM) loss
function for training deep neural networks on visual classification. Unlike the softmax
cross-entropy loss, our method explicitly shapes the deep feature space towards
a Gaussian Mixture distribution. With a classification margin and a likelihood
regularization, the GM loss facilitates both high classification performance
and accurate modeling of the feature distribution. The GM loss can be readily
used to distinguish abnormal inputs, such as adversarial examples, based on
the discrepancy between the feature distributions of the inputs and the training
set. Furthermore, theoretical analysis shows that a symmetric feature space can
be achieved by using the GM loss, which enables the models to perform robustly
against adversarial attacks. The proposed model can be implemented easily and
efficiently without using extra trainable parameters. Extensive evaluations
demonstrate that the proposed method performs favorably not only on image
classification but also on robust detection of adversarial examples generated
by strong attacks under different threat models.
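The idea in the summary can be sketched in a few lines: features are scored against per-class Gaussian components, a margin inflates the distance to the true class inside the posterior, and a likelihood term pulls features toward their own class mean. The snippet below is a minimal NumPy sketch under simplifying assumptions (equal class priors, identity covariances); the function names, `margin` and `lam` values, and the nearest-mean anomaly score are illustrative, not the paper's exact formulation.

```python
import numpy as np

def gm_loss(feat, means, labels, margin=0.3, lam=0.1):
    """Sketch of a Gaussian-mixture loss: a classification term from the
    class posterior plus a likelihood regularization term.
    Assumes equal class priors and identity covariances (a simplification)."""
    n = feat.shape[0]
    # Squared distance of each feature to each class mean, shape (n, K)
    d = 0.5 * ((feat[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    # Classification margin: inflate the distance to the true class,
    # which pushes features closer to their own mean during training
    d_m = d.copy()
    d_m[np.arange(n), labels] *= (1.0 + margin)
    logits = -d_m  # posterior logits under equal priors
    # Numerically stable log-softmax for the cross-entropy term
    mx = logits.max(axis=1, keepdims=True)
    log_post = logits - mx - np.log(np.exp(logits - mx).sum(axis=1, keepdims=True))
    cls_term = -log_post[np.arange(n), labels].mean()
    # Likelihood regularization: negative log-likelihood of each feature
    # under its own class component (up to an additive constant)
    lkd_term = d[np.arange(n), labels].mean()
    return cls_term + lam * lkd_term

def anomaly_score(feat, means):
    """Score an input by its distance to the nearest class mean; large
    scores suggest the feature falls outside the training distribution."""
    d = 0.5 * ((feat[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1)
```

Because every component has the same covariance, the posterior reduces to a softmax over negative squared distances, so no extra trainable parameters beyond the class means are needed, matching the summary's claim; `anomaly_score` illustrates how out-of-distribution inputs (e.g. adversarial examples) can be flagged by their low likelihood under the fitted mixture.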

| Search Query: ArXiv Query: search_query=au:"Jiansheng Chen"&id_list=&start=0&max_results=10
