Conference Paper: Closing the generalization gap of adaptive gradient methods in training deep neural networks

Title: Closing the generalization gap of adaptive gradient methods in training deep neural networks
Authors: Chen, Jinghui; Zhou, Dongruo; Tang, Yiqi; Yang, Ziyan; Cao, Yuan; Gu, Quanquan
Issue Date: 2020
Citation: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020, p. 3267-3275
Abstract: Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks, despite their fast convergence. How to close this generalization gap of adaptive gradient methods remains an open problem. In this work, we show that adaptive gradient methods such as Adam and Amsgrad are sometimes “over adapted”. We design a new algorithm, called the partially adaptive momentum estimation method, which unifies Adam/Amsgrad with SGD by introducing a partial adaptive parameter p, to achieve the best of both worlds. We also prove the convergence rate of the proposed algorithm to a stationary point in the stochastic nonconvex optimization setting. Experiments on standard benchmarks show that the proposed algorithm maintains a convergence rate as fast as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results suggest that practitioners can pick up adaptive gradient methods once again for faster training of deep neural networks. (A brief illustrative sketch of such a partially adaptive update is given after the record fields below.)
Persistent Identifier: http://hdl.handle.net/10722/303708
ISSN: 1045-0823
2020 SCImago Journal Rankings: 0.649
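
The following is a minimal, illustrative sketch of what a partially adaptive update of the kind described in the abstract could look like. It is not the authors' implementation: the function name partially_adaptive_step, the default hyperparameters, and the exact update rule (dividing the momentum term by a second-moment estimate raised to a partial power p, with an Amsgrad-style running maximum) are assumptions made here for illustration. Under these assumptions, p = 1/2 behaves like a fully adaptive Amsgrad-style step, while p = 0 reduces to an SGD-with-momentum-style step.

import numpy as np

def partially_adaptive_step(theta, grad, state, lr=0.1, beta1=0.9,
                            beta2=0.999, p=0.125, eps=1e-8):
    """One illustrative partially adaptive update (a sketch, not the authors' code).

    p = 0.5 gives a fully adaptive, Amsgrad-like step; p = 0.0 gives a plain
    SGD-with-momentum-like step; values in between interpolate the two.
    """
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad            # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate
    v_hat = np.maximum(v_hat, v)                  # Amsgrad-style running maximum
    theta = theta - lr * m / (v_hat + eps) ** p   # partial power p replaces the usual sqrt
    return theta, (m, v, v_hat)

# Toy usage: minimize f(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
theta = np.array([1.0, -2.0, 3.0])
state = (np.zeros_like(theta), np.zeros_like(theta), np.zeros_like(theta))
for _ in range(200):
    grad = theta                                  # gradient of the toy quadratic
    theta, state = partially_adaptive_step(theta, grad, state)
print(theta)                                      # should be close to the zero vector

In this sketch the only change relative to a textbook Amsgrad step is the exponent on the denominator, which is one natural way to read the abstract's claim that a single parameter p lets the method interpolate between Adam/Amsgrad and SGD with momentum.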

 

DC Field: Value
dc.contributor.author: Chen, Jinghui
dc.contributor.author: Zhou, Dongruo
dc.contributor.author: Tang, Yiqi
dc.contributor.author: Yang, Ziyan
dc.contributor.author: Cao, Yuan
dc.contributor.author: Gu, Quanquan
dc.date.accessioned: 2021-09-15T08:25:51Z
dc.date.available: 2021-09-15T08:25:51Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020, p. 3267-3275
dc.identifier.issn: 1045-0823
dc.identifier.uri: http://hdl.handle.net/10722/303708
dc.description.abstract: Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks, despite their fast convergence. How to close this generalization gap of adaptive gradient methods remains an open problem. In this work, we show that adaptive gradient methods such as Adam and Amsgrad are sometimes “over adapted”. We design a new algorithm, called the partially adaptive momentum estimation method, which unifies Adam/Amsgrad with SGD by introducing a partial adaptive parameter p, to achieve the best of both worlds. We also prove the convergence rate of the proposed algorithm to a stationary point in the stochastic nonconvex optimization setting. Experiments on standard benchmarks show that the proposed algorithm maintains a convergence rate as fast as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results suggest that practitioners can pick up adaptive gradient methods once again for faster training of deep neural networks.
dc.language: eng
dc.relation.ispartof: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
dc.title: Closing the generalization gap of adaptive gradient methods in training deep neural networks
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.doi: 10.24963/ijcai.2020/452
dc.identifier.scopus: eid_2-s2.0-85095192038
dc.identifier.spage: 3267
dc.identifier.epage: 3275
