
Conference Paper: Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization

Title: Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization
Authors: Zou, Difan; Cao, Yuan; Li, Yuanzhi; Gu, Quanquan
Issue Date: 5-May-2023
Abstract

Adaptive gradient methods such as Adam have gained increasing popularity in deep learning optimization. However, it has been observed that, compared with (stochastic) gradient descent, Adam can converge to a different solution with a significantly worse test error in many deep learning applications such as image classification, even with fine-tuned regularization. In this paper, we provide a theoretical explanation for this phenomenon: we show that in the nonconvex setting of learning over-parameterized two-layer convolutional neural networks starting from the same random initialization, for a class of data distributions (inspired by image data), Adam and gradient descent (GD) can converge to different global solutions of the training objective with provably different generalization errors, even with weight decay regularization. In contrast, we show that if the training objective is convex and weight decay regularization is employed, any optimization algorithm, including Adam and GD, will converge to the same solution if training is successful. This suggests that the inferior generalization performance of Adam is fundamentally tied to the nonconvex landscape of deep learning optimization.
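The contrast the abstract draws is between Adam's coordinate-wise normalized updates and GD's raw gradient steps. As a minimal sketch of the two update rules with L2 weight decay (not the paper's two-layer CNN setting; all names, hyperparameters, and toy values here are illustrative):

```python
import numpy as np

def gd_step(w, grad, lr=0.1, wd=0.01):
    """One gradient descent step on a loss with L2 (weight decay) regularization."""
    return w - lr * (grad + wd * w)

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    """One Adam step on the same regularized loss; t is the 1-based step count."""
    g = grad + wd * w                      # regularized gradient, as in GD
    m = b1 * m + (1 - b1) * g              # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * g * g          # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# With a large gradient, GD's step scales with the gradient magnitude,
# while Adam's coordinate-wise normalization keeps the step size near lr.
w0 = np.array([1.0])
g = np.array([100.0])
w_gd = gd_step(w0, g)                                            # 1 - 0.1 * 100.01 = -9.001
w_adam, m, v = adam_step(w0, g, np.zeros(1), np.zeros(1), t=1)   # ≈ 1 - 0.1 = 0.9
```

At the first step, Adam's bias-corrected update reduces to roughly lr · sign(gradient) per coordinate, which illustrates why, in the nonconvex regime the paper studies, Adam can be steered toward a different solution than GD even from the same initialization.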


Persistent Identifier: http://hdl.handle.net/10722/338364

 

DC Field: Value
dc.contributor.author: Zou, Difan
dc.contributor.author: Cao, Yuan
dc.contributor.author: Li, Yuanzhi
dc.contributor.author: Gu, Quanquan
dc.date.accessioned: 2024-03-11T10:28:19Z
dc.date.available: 2024-03-11T10:28:19Z
dc.date.issued: 2023-05-05
dc.identifier.uri: http://hdl.handle.net/10722/338364
dc.language: eng
dc.relation.ispartof: International Conference on Learning Representations (ICLR 2023), 01/05/2023-05/05/2023, Kigali, Rwanda
dc.title: Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization
dc.type: Conference_Paper
