Kavli Affiliate: Yi Zhou
| First 5 Authors: Lei Huang, Yi Zhou, Tian Wang, Jie Luo, Xianglong Liu
| Summary:
Batch normalization (BN) is a milestone technique in deep learning. It
normalizes activations using mini-batch statistics during training but
estimated population statistics during inference. This paper focuses on
investigating the estimation of population statistics. We define the estimation
shift magnitude of BN to quantitatively measure the difference between its
estimated population statistics and the expected ones. Our primary observation
is that estimation shift can accumulate as BN layers are stacked in a network,
which has detrimental effects on test performance. We further find that a
batch-free normalization (BFN) can block such an accumulation of estimation
shift. These observations motivate our design of XBNBlock, which replaces one
BN with a BFN in the bottleneck block of residual-style networks. Experiments
on the ImageNet and COCO benchmarks show that XBNBlock consistently improves
the performance of different architectures, including ResNet and ResNeXt, by a
significant margin, and appears to be more robust to distribution shift.
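The mismatch the summary describes can be sketched numerically: BN tracks population statistics with an exponential moving average over training batches, so when the inference-time distribution differs from the training one, the estimated statistics deviate from the expected ones. The sketch below is an illustration, not the paper's implementation; the momentum value, batch size, and the scalar (single-feature) setup are assumptions, and the paper defines the shift magnitude per BN layer in a full network.

```python
import numpy as np

rng = np.random.default_rng(0)

# BN-style running statistics updated by exponential moving average,
# as in common BN implementations (momentum value is an assumption).
momentum = 0.1
running_mean, running_var = 0.0, 1.0

# Training batches drawn from the training distribution N(0, 1).
for _ in range(200):
    batch = rng.normal(loc=0.0, scale=1.0, size=64)
    running_mean = (1 - momentum) * running_mean + momentum * batch.mean()
    running_var = (1 - momentum) * running_var + momentum * batch.var()

# Expected population statistics at test time; shifted to N(0.5, 1)
# to mimic a distribution shift between training and inference.
test_mean, test_var = 0.5, 1.0

# Estimation shift magnitude for this single feature: the gap between
# the estimated statistics and the expected (test-time) ones.
shift_mean = abs(running_mean - test_mean)
shift_var = abs(running_var - test_var)
print(shift_mean, shift_var)  # mean estimate is far off; variance estimate stays close
```

Under this toy shift, the running mean stays near the training mean (0) and so misses the test mean (0.5), while the variance estimate remains close; stacking many BN layers lets such per-layer gaps compound, which is the accumulation effect the paper studies.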
| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=10