Conference Paper: Learning and generalization in overparameterized neural networks, going beyond two layers

Title: Learning and generalization in overparameterized neural networks, going beyond two layers
Authors: Allen-Zhu, Zeyuan; Li, Yuanzhi; Liang, Yingyu
Issue Date: 2019
Citation: Advances in Neural Information Processing Systems, 2019, v. 32
Abstract: The fundamental learning theory behind neural networks remains largely open. What classes of functions can neural networks actually learn? Why doesn't the trained network overfit when it is overparameterized? In this work, we prove that overparameterized neural networks can learn some notable concept classes, including two and three-layer networks with fewer parameters and smooth activations. Moreover, the learning can be simply done by SGD (stochastic gradient descent) or its variants in polynomial time using polynomially many samples. The sample complexity can also be almost independent of the number of parameters in the network. On the technique side, our analysis goes beyond the so-called NTK (neural tangent kernel) linearization of neural networks in prior works. We establish a new notion of quadratic approximation of the neural network, and connect it to the SGD theory of escaping saddle points.
Persistent Identifier: http://hdl.handle.net/10722/341279
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
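The abstract above describes training an overparameterized two-layer network with a smooth activation by plain SGD. As a rough illustration of that setup only (a minimal sketch, not the paper's construction or analysis), the following NumPy example trains such a network on a simple smooth target; the width, learning rate, batch size, and target function are all illustrative assumptions.

```python
# Minimal sketch: SGD on an overparameterized two-layer network with a smooth
# (tanh) activation. Hyperparameters and the target function are illustrative
# assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

d, m, n = 10, 2000, 200                # input dim, hidden width (overparameterized), sample count
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]    # a simple smooth target to learn

# Two-layer net: f(x) = a^T tanh(W x)
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.standard_normal(m) / np.sqrt(m)

def forward(X, W, a):
    H = np.tanh(X @ W.T)               # hidden activations, shape (batch, m)
    return H @ a, H

lr, batch, steps = 0.1, 32, 2000
for step in range(steps):
    idx = rng.integers(0, n, size=batch)
    pred, H = forward(X[idx], W, a)
    err = pred - y[idx]                # residual of the squared loss
    # Gradients of 0.5 * mean squared error w.r.t. a and W
    grad_a = H.T @ err / batch
    grad_W = ((err[:, None] * a) * (1 - H**2)).T @ X[idx] / batch
    a -= lr * grad_a
    W -= lr * grad_W

train_pred, _ = forward(X, W, a)
print("train MSE:", np.mean((train_pred - y) ** 2))
```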

DC Field | Value | Language
dc.contributor.author | Allen-Zhu, Zeyuan | -
dc.contributor.author | Li, Yuanzhi | -
dc.contributor.author | Liang, Yingyu | -
dc.date.accessioned | 2024-03-13T08:41:34Z | -
dc.date.available | 2024-03-13T08:41:34Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Advances in Neural Information Processing Systems, 2019, v. 32 | -
dc.identifier.issn | 1049-5258 | -
dc.identifier.uri | http://hdl.handle.net/10722/341279 | -
dc.description.abstract | The fundamental learning theory behind neural networks remains largely open. What classes of functions can neural networks actually learn? Why doesn't the trained network overfit when it is overparameterized? In this work, we prove that overparameterized neural networks can learn some notable concept classes, including two and three-layer networks with fewer parameters and smooth activations. Moreover, the learning can be simply done by SGD (stochastic gradient descent) or its variants in polynomial time using polynomially many samples. The sample complexity can also be almost independent of the number of parameters in the network. On the technique side, our analysis goes beyond the so-called NTK (neural tangent kernel) linearization of neural networks in prior works. We establish a new notion of quadratic approximation of the neural network, and connect it to the SGD theory of escaping saddle points. | -
dc.language | eng | -
dc.relation.ispartof | Advances in Neural Information Processing Systems | -
dc.title | Learning and generalization in overparameterized neural networks, going beyond two layers | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85087338191 | -
dc.identifier.volume | 32 | -
