Conference Paper: Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks

Title: Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
Authors: Lei, Yunwen; Jin, Rong; Ying, Yiming
Issue Date: 2022
Citation: Advances in Neural Information Processing Systems, 2022, v. 35
Abstract: While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, and for both we develop consistent excess risk bounds by balancing optimization and generalization via early stopping. Compared to existing analyses of GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key to this improvement is a better estimate of the smallest eigenvalues of the Hessian matrices of the empirical risks and the loss function along the trajectories of GD and SGD, obtained via a refined estimate of their iterates.
Persistent Identifier: http://hdl.handle.net/10722/329927
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
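The abstract above centers on training a shallow (one-hidden-layer) network with GD or SGD and stopping early to balance optimization error against generalization. The following is a minimal, illustrative sketch of that setup, not the paper's construction: the synthetic data, the width m, the step size eta, and the patience-based stopping rule are all placeholder assumptions introduced here for illustration.

    # Sketch: one-hidden-layer ReLU network trained by full-batch gradient descent
    # with early stopping on a held-out set. Illustrative placeholders throughout;
    # not the constants or the stopping time analyzed in the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic regression data (any bounded data would do for this sketch).
    n, d, m = 200, 10, 512            # samples, input dim, hidden width (overparameterized)
    X = rng.normal(size=(n, d)) / np.sqrt(d)
    y = np.sin(X @ rng.normal(size=d)) + 0.1 * rng.normal(size=n)
    X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

    # Shallow network f(x) = (1/sqrt(m)) * a^T relu(W x); only W is trained here.
    W = rng.normal(size=(m, d))
    a = rng.choice([-1.0, 1.0], size=m)

    def predict(W, X):
        return (np.maximum(X @ W.T, 0.0) @ a) / np.sqrt(m)

    def mse(W, X, y):
        return 0.5 * np.mean((predict(W, X) - y) ** 2)

    def grad_W(W, X, y):
        """Gradient of the least-squares empirical risk with respect to W."""
        H = X @ W.T                                       # pre-activations, shape (n, m)
        err = (np.maximum(H, 0.0) @ a) / np.sqrt(m) - y   # residuals, shape (n,)
        # relu'(H) is the indicator H > 0; chain rule gives this per-sample factor.
        G = ((err[:, None] * (H > 0)) * a) / np.sqrt(m)   # shape (n, m)
        return G.T @ X / X.shape[0]

    eta, max_iters, patience = 0.2, 2000, 50
    best_va, best_W, since_best = np.inf, W.copy(), 0

    for t in range(max_iters):
        # Full-batch GD step; an SGD variant would compute grad_W on a random minibatch.
        W = W - eta * grad_W(W, X_tr, y_tr)
        va = mse(W, X_va, y_va)
        if va < best_va - 1e-6:
            best_va, best_W, since_best = va, W.copy(), 0
        else:
            since_best += 1
        if since_best >= patience:      # early stopping: halt once validation error
            break                       # stops improving, rather than running to convergence

    W = best_W                          # keep the best early-stopped iterate
    print(f"stopped at iteration {t}, validation MSE {best_va:.4f}")

Swapping the full-batch gradient for one computed on a sampled minibatch gives the SGD variant discussed in the abstract; the iteration count at which training is halted plays the role of the early-stopping time that such stability-based excess risk bounds tune to trade optimization error against generalization error.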
DC Field: Value
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Jin, Rong
dc.contributor.author: Ying, Yiming
dc.date.accessioned: 2023-08-09T03:36:30Z
dc.date.available: 2023-08-09T03:36:30Z
dc.date.issued: 2022
dc.identifier.citation: Advances in Neural Information Processing Systems, 2022, v. 35
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/329927
dc.description.abstract: While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, and for both we develop consistent excess risk bounds by balancing optimization and generalization via early stopping. Compared to existing analyses of GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key to this improvement is a better estimate of the smallest eigenvalues of the Hessian matrices of the empirical risks and the loss function along the trajectories of GD and SGD, obtained via a refined estimate of their iterates.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85148766126
dc.identifier.volume: 35
