Kavli Affiliate: Feng Wang
| First 5 Authors: Feng Wang, Tao Kong, Rufeng Zhang, Huaping Liu, Hang Li
| Summary:
We present TWIST, a simple and theoretically explainable self-supervised
representation learning method that classifies large-scale unlabeled datasets
in an end-to-end manner. We employ a Siamese network terminated by a softmax
operation to produce twin class distributions of two augmented images. Without
supervision, we enforce the class distributions of different augmentations to
be consistent. However, simply minimizing the divergence between augmentations
will cause collapsed solutions, i.e., the same class probability distribution
is output for all images, leaving no information about the input image.
To solve this problem, we propose to maximize the mutual information
between the input and the class predictions. Specifically, we minimize the
entropy of each sample's distribution to make its class prediction assertive,
and maximize the entropy of the mean distribution to make
the predictions of different samples diverse. In this way, TWIST can naturally
avoid collapsed solutions without specific designs such as an asymmetric
network, a stop-gradient operation, or a momentum encoder. As a result, TWIST
outperforms state-of-the-art methods on a wide range of tasks. In particular,
TWIST performs surprisingly well on semi-supervised learning, achieving 61.2%
top-1 accuracy with 1% of the ImageNet labels using a ResNet-50 backbone,
surpassing the previous best result by an absolute 6.2%. Code and
pre-trained models are available at: https://github.com/bytedance/TWIST
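
As a rough illustration of the objective described in the summary, here is a
minimal PyTorch sketch of a TWIST-style loss: a consistency term between the
twin distributions, a sharpness term (per-sample entropy, minimized), and a
diversity term (entropy of the batch-mean distribution, maximized). The
function name, loss weights, and epsilon are illustrative assumptions, not the
authors' exact implementation; see the linked repository for the official code.

```python
import torch

EPS = 1e-8  # small constant for numerical stability of log

def entropy(p):
    """Shannon entropy of each row of a (batch, classes) probability matrix."""
    return -(p * torch.log(p + EPS)).sum(dim=1)

def twist_loss(p1, p2, w_sharp=1.0, w_div=1.0):
    """Sketch of a TWIST-style objective for twin class distributions.

    p1, p2: (batch, classes) softmax outputs for two augmentations of the
    same batch. w_sharp and w_div are illustrative weights, not values
    taken from the paper.
    """
    # Consistency: symmetric KL divergence between the twin distributions.
    kl12 = (p1 * (torch.log(p1 + EPS) - torch.log(p2 + EPS))).sum(dim=1)
    kl21 = (p2 * (torch.log(p2 + EPS) - torch.log(p1 + EPS))).sum(dim=1)
    consistency = 0.5 * (kl12 + kl21).mean()

    # Sharpness: minimize each sample's prediction entropy so that
    # individual class assignments become assertive.
    sharpness = 0.5 * (entropy(p1).mean() + entropy(p2).mean())

    # Diversity: maximize the entropy of the batch-mean distribution so
    # that different samples spread over different classes.
    diversity = 0.5 * (entropy(p1.mean(dim=0, keepdim=True)).mean()
                       + entropy(p2.mean(dim=0, keepdim=True)).mean())

    # Low per-sample entropy plus high mean entropy together maximize the
    # mutual information between inputs and class predictions.
    return consistency + w_sharp * sharpness - w_div * diversity

# Toy usage: random logits for a batch of 8 images over 16 classes.
p1 = torch.softmax(torch.randn(8, 16), dim=1)
p2 = torch.softmax(torch.randn(8, 16), dim=1)
print(twist_loss(p1, p2))
```

Minimizing this combined loss pulls the twin predictions together, while the
two entropy terms rule out the trivial constant solution, matching the
mutual-information argument sketched in the summary.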
| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=10