Rethinking Feature Distribution for Loss Functions in Image Classification

Kavli Affiliate: Jiansheng Chen

| First 5 Authors: Weitao Wan, Yuanyi Zhong, Tianpeng Li, Jiansheng Chen

| Summary:

We propose a large-margin Gaussian Mixture (L-GM) loss for deep neural
networks in classification tasks. Unlike the softmax cross-entropy loss, our
proposal is built on the assumption that the deep features of the training set
follow a Gaussian Mixture distribution. By incorporating a classification
margin and a likelihood regularization, the L-GM loss facilitates both high
classification performance and accurate modeling of the training feature
distribution. As such, the L-GM loss is superior to the softmax loss and its
major variants in that, beyond classification, it can readily be used to
distinguish abnormal inputs, such as adversarial examples, based on the
likelihood of their features under the training feature distribution.
Extensive experiments on various recognition benchmarks, including MNIST,
CIFAR, ImageNet and LFW, as well as on adversarial examples, demonstrate the
effectiveness of our proposal.
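
The following is a minimal PyTorch sketch of an L-GM-style loss under simplifying assumptions (identity covariances and equal class priors). The class name LGMLoss and the hyperparameters alpha (margin factor) and lmbda (likelihood regularization weight) are illustrative and not taken from the authors' released code.

```python
# Minimal sketch of a large-margin Gaussian Mixture (L-GM) style loss.
# Assumptions: identity covariances and equal class priors; names and
# default hyperparameters are illustrative, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LGMLoss(nn.Module):
    def __init__(self, num_classes, feat_dim, alpha=0.1, lmbda=0.01):
        super().__init__()
        self.alpha = alpha      # margin factor applied to the true-class distance
        self.lmbda = lmbda      # weight of the likelihood regularization term
        # Learnable per-class Gaussian means.
        self.means = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.1)

    def forward(self, feats, labels):
        # Squared distance (Mahalanobis with identity covariance) between each
        # feature and every class mean, shape (batch, num_classes).
        diff = feats.unsqueeze(1) - self.means.unsqueeze(0)
        dist = 0.5 * diff.pow(2).sum(dim=2)

        # Enlarge the distance to the ground-truth class by (1 + alpha),
        # which realizes the classification margin.
        one_hot = F.one_hot(labels, num_classes=dist.size(1)).float()
        dist_margin = dist * (1.0 + self.alpha * one_hot)

        # With equal priors, the class posterior is a softmax over negative
        # distances, so the classification term is a standard cross-entropy.
        cls_loss = F.cross_entropy(-dist_margin, labels)

        # Likelihood regularization: negative log-likelihood of each feature
        # under its own class Gaussian (constant terms dropped).
        lkd_loss = (dist * one_hot).sum(dim=1).mean()

        return cls_loss + self.lmbda * lkd_loss
```

At test time, the same per-class distances give an (unnormalized) log-likelihood of a feature under the fitted mixture, so a simple threshold on the best-class likelihood could serve to flag abnormal inputs such as adversarial examples, in the spirit of the summary above.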

| Search Query: ArXiv Query: search_query=au:"Jiansheng Chen"&id_list=&start=0&max_results=10
