Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape

Kavli Affiliate: Jing Wang

| First 5 Authors: Devansh Bisla, Jing Wang, Anna Choromanska

| Summary:

In this paper, we study the sharpness of a deep learning (DL) loss landscape
around local minima in order to reveal systematic mechanisms underlying the
generalization abilities of DL models. Our analysis is performed across varying
network and optimizer hyper-parameters, and involves a rich family of different
sharpness measures. We compare these measures and show that the low-pass
filter-based measure exhibits the highest correlation with the generalization
abilities of DL models, has high robustness to both data and label noise, and
furthermore can track the double descent behavior for neural networks. We next
derive an optimization algorithm, relying on the low-pass filter (LPF), that
actively searches for flat regions in the DL optimization landscape using an
SGD-like procedure. The update of the proposed algorithm, which we call LPF-SGD,
is determined by the gradient of the convolution of the filter kernel with the
loss function and can be efficiently computed using MC sampling. We empirically
show that our algorithm achieves superior generalization performance compared
to the common DL training strategies. On the theoretical front, we prove that
LPF-SGD converges to a better optimal point with smaller generalization error
than SGD.
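To make the smoothed-gradient idea concrete, below is a minimal PyTorch-style sketch of one update step: the gradient of the kernel-convolved loss is approximated by averaging gradients of the loss at Gaussian-perturbed copies of the weights, which is the MC-sampling estimate described in the summary. The function name `lpf_sgd_step`, the fixed isotropic `sigma`, and the number of MC samples are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def lpf_sgd_step(model, loss_fn, data, target, optimizer,
                 sigma=1e-3, mc_samples=8):
    """One smoothed-gradient step in the spirit of LPF-SGD (sketch):
    MC estimate of the gradient of the Gaussian-convolved loss."""
    params = [p for p in model.parameters() if p.requires_grad]
    orig = [p.detach().clone() for p in params]

    optimizer.zero_grad()
    for _ in range(mc_samples):
        # Perturb weights with a sample from the (assumed Gaussian) filter kernel.
        with torch.no_grad():
            for p, w in zip(params, orig):
                p.copy_(w + sigma * torch.randn_like(w))
        # Gradients accumulate across MC samples; divide to average them.
        loss = loss_fn(model(data), target) / mc_samples
        loss.backward()

    # Restore the unperturbed weights, then apply the averaged gradient.
    with torch.no_grad():
        for p, w in zip(params, orig):
            p.copy_(w)
    optimizer.step()
```

In this sketch the smoothing radius is held fixed; the paper's actual schedule and kernel parameterization may differ, and the perturbation scale is typically tuned per problem.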

| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=10
