AutoDrop: Training Deep Learning Models with Automatic Learning Rate Drop

Kavli Affiliate: Jing Wang

| First 5 Authors: Yunfei Teng, Jing Wang, Anna Choromanska

| Summary:

Modern deep learning (DL) architectures are trained using variants of the SGD algorithm that are run with a $\textit{manually}$ defined learning rate schedule, i.e., the learning rate is dropped at pre-defined epochs, typically when the training loss is expected to saturate. In this paper we develop an algorithm that realizes the learning rate drop $\textit{automatically}$. The proposed method, which we refer to as AutoDrop, is motivated by the observation that the angular velocity of the model parameters, i.e., the velocity of the changes of the convergence direction, for a fixed learning rate initially increases rapidly and then progresses towards soft saturation. At saturation the optimizer slows down, thus the saturation of the angular velocity is a good indicator for dropping the learning rate. After the drop, the angular velocity "resets" and follows the previously described pattern – it increases again until saturation. We show that our method improves over SOTA training approaches: it accelerates the training of DL models and leads to better generalization. We also show that our method does not require any extra hyperparameter tuning. AutoDrop is furthermore extremely simple to implement and computationally cheap. Finally, we develop a theoretical framework for analyzing our algorithm and provide convergence guarantees.
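For illustration, the sketch below shows one way the idea in the summary could be realized in PyTorch: once per epoch, measure the angle between successive parameter-update directions (a proxy for the angular velocity of the model parameters) and multiply the learning rate by a drop factor when that angle stops increasing. The class name, the patience-based saturation test, and hyperparameters such as `drop_factor` are assumptions made for this sketch and are not taken from the authors' reference implementation.

```python
# Minimal sketch of an angular-velocity-based learning-rate drop.
# NOTE: the saturation criterion (patience counter with tolerance) and all
# names/defaults here are illustrative assumptions, not the paper's exact method.
import torch


def flat_params(model):
    """Concatenate all model parameters into a single 1-D tensor."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])


class AngularVelocityLRDrop:
    """Drop the learning rate when the angle between successive
    parameter-update directions (per epoch) stops increasing."""

    def __init__(self, optimizer, model, drop_factor=0.1, patience=3, tol=1e-3):
        self.optimizer = optimizer
        self.model = model
        self.drop_factor = drop_factor   # multiply lr by this on a drop
        self.patience = patience         # epochs without increase before dropping
        self.tol = tol                   # minimal increase counted as progress
        self.prev_params = flat_params(model).clone()
        self.prev_direction = None
        self.best_velocity = -float("inf")
        self.stall = 0

    def step(self):
        """Call once per epoch, after the training loop for that epoch."""
        params = flat_params(self.model)
        direction = params - self.prev_params
        self.prev_params = params.clone()
        if self.prev_direction is not None and direction.norm() > 0:
            cos = torch.nn.functional.cosine_similarity(
                direction, self.prev_direction, dim=0
            ).clamp(-1.0, 1.0)
            velocity = torch.arccos(cos).item()  # angle per epoch, in radians
            if velocity > self.best_velocity + self.tol:
                self.best_velocity = velocity
                self.stall = 0
            else:
                self.stall += 1
            if self.stall >= self.patience:
                for group in self.optimizer.param_groups:
                    group["lr"] *= self.drop_factor
                # after the drop, the angular velocity "resets", so restart tracking
                self.best_velocity = -float("inf")
                self.stall = 0
        self.prev_direction = direction
```

The patience-based check is only one plausible way to detect soft saturation of the angular velocity; the paper's actual criterion and its convergence analysis may differ.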

| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=10

