Multi-scale Knowledge Distillation for Unsupervised Person Re-Identification

Kavli Affiliate: Xiang Zhang

| First 5 Authors: Long Lan, Xiao Teng, Haoang Chi, Xiang Zhang,

| Summary:

Unsupervised person re-identification is a challenging and promising task in
computer vision. Recent unsupervised person re-identification methods have
achieved great improvements by training with pseudo labels. However,
appearance noise and label noise are rarely studied explicitly in the
unsupervised setting. To relieve the effect of appearance noise on the global
features, we additionally take into account features from two local views and
produce multi-scale features. We explore knowledge distillation to filter
label noise. Specifically, we first train a teacher model from noisy pseudo
labels in an iterative way, and then use the teacher model to guide the
learning of our student model. In our setting, the student model converges
quickly under the supervision of the teacher model and thus avoids the
interference from noisy labels that the teacher model suffered from. After
carefully handling the noise in feature learning, our multi-scale knowledge
distillation proves to be very effective for unsupervised re-identification.
Extensive experiments on three popular person re-identification datasets
demonstrate the superiority of our method. In particular, our approach
achieves state-of-the-art accuracy of 85.7% mAP and 94.3% Rank-1 on the
challenging Market-1501 benchmark with ResNet-50 under the fully unsupervised
setting.
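
To make the approach concrete, here is a minimal PyTorch sketch (not the
authors' released code) of the two ideas from the summary: a ResNet-50 that
yields one global and two horizontally pooled local features, and a
cosine-based distillation loss that aligns the student with a frozen teacher
at each scale. The class and function names, the stripe pooling, and the
exact loss form are all illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch/torchvision; names are illustrative,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class MultiScaleReID(nn.Module):
    """ResNet-50 backbone producing one global and two local features."""

    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        # Drop the classifier head; keep the convolutional stages.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        # Two horizontal stripes approximate upper/lower-body local views.
        self.local_pool = nn.AdaptiveAvgPool2d((2, 1))

    def forward(self, x):
        fmap = self.backbone(x)                  # (B, 2048, H, W)
        g = self.global_pool(fmap).flatten(1)    # global feature (B, 2048)
        stripes = self.local_pool(fmap)          # (B, 2048, 2, 1)
        upper = stripes[:, :, 0, 0]              # upper local view
        lower = stripes[:, :, 1, 0]              # lower local view
        return [g, upper, lower]                 # multi-scale features


def distillation_loss(student_feats, teacher_feats):
    """Cosine-alignment loss between student and frozen teacher, per scale."""
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        s = F.normalize(s, dim=1)
        t = F.normalize(t.detach(), dim=1)       # teacher gives targets only
        loss = loss + (1.0 - (s * t).sum(dim=1)).mean()
    return loss / len(student_feats)


# Usage: the teacher has already been trained on noisy pseudo labels;
# the student then learns under its supervision.
teacher = MultiScaleReID().eval()
student = MultiScaleReID()
images = torch.randn(8, 3, 256, 128)             # typical re-ID input size
with torch.no_grad():
    teacher_feats = teacher(images)
loss = distillation_loss(student(images), teacher_feats)
loss.backward()
```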

| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=10
