Conference Paper: Generalization error bounds of gradient descent for learning over-parameterized deep ReLU networks
Title | Generalization error bounds of gradient descent for learning over-parameterized deep ReLU networks |
---|---|
Authors | Cao, Yuan; Gu, Quanquan |
Issue Date | 2020 |
Citation | Proceedings of the AAAI Conference on Artificial Intelligence, 2020, v. 34, n. 4, p. 3349-3356 |
Abstract | Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very good generalization performance in the over-parameterization regime, where DNNs can easily fit a random labeling of the training data. Very recently, a line of work explains in theory that with over-parameterization and proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs. However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs. The major limitation of most existing generalization bounds is that they are based on uniform convergence and are independent of the training algorithm. In this work, we derive an algorithm-dependent generalization error bound for deep ReLU networks, and show that under certain assumptions on the data distribution, gradient descent (GD) with proper random initialization is able to train a sufficiently over-parameterized DNN to achieve arbitrarily small generalization error. Our work sheds light on explaining the good generalization performance of over-parameterized deep neural networks. |
Persistent Identifier | http://hdl.handle.net/10722/303702 |
ISI Accession Number ID | WOS:000667722803052 |
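
The setting described in the abstract — full-batch gradient descent from random initialization on a heavily over-parameterized deep ReLU network — can be illustrated with a toy experiment. The sketch below is not the paper's construction or proof setting: the data distribution (two separated Gaussian clusters), the width, depth, learning rate, step count, and PyTorch's default layer initialization (standing in for the paper's Gaussian random initialization) are all illustrative assumptions.

```python
# Minimal sketch: full-batch GD on a wide deep ReLU network for a toy
# binary classification task. All hyperparameters are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: two class-dependent Gaussian clusters (an assumption chosen for
# illustration, not the distributional assumptions used in the paper).
n_train, n_test, d = 200, 1000, 20

def sample(n):
    y = torch.randint(0, 2, (n,)).float() * 2 - 1      # labels in {-1, +1}
    x = torch.randn(n, d) + 3.0 * y.unsqueeze(1)        # class-dependent mean
    return x, y

x_train, y_train = sample(n_train)
x_test, y_test = sample(n_test)

# Over-parameterized deep ReLU network: hidden width far larger than n_train.
width, depth = 1000, 4
layers = [nn.Linear(d, width), nn.ReLU()]
for _ in range(depth - 2):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers += [nn.Linear(width, 1)]
net = nn.Sequential(*layers)

# Full-batch gradient descent on the logistic loss for +/-1 labels.
opt = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.SoftMarginLoss()

for step in range(300):
    opt.zero_grad()
    loss = loss_fn(net(x_train).squeeze(1), y_train)
    loss.backward()
    opt.step()

with torch.no_grad():
    train_err = (net(x_train).squeeze(1).sign() != y_train).float().mean()
    test_err = (net(x_test).squeeze(1).sign() != y_test).float().mean()
print(f"train error {train_err.item():.3f}, test error {test_err.item():.3f}")
```

On this easy distribution the wide network typically drives the training error to zero while the test error also stays small, mirroring the qualitative behavior the paper analyzes; none of the numbers above correspond to the paper's theoretical requirements on width, sample size, or step size.
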
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cao, Yuan | - |
dc.contributor.author | Gu, Quanquan | - |
dc.date.accessioned | 2021-09-15T08:25:51Z | - |
dc.date.available | 2021-09-15T08:25:51Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Proceedings of the AAAI Conference on Artificial Intelligence, 2020, v. 34, n. 4, p. 3349-3356 | - |
dc.identifier.uri | http://hdl.handle.net/10722/303702 | - |
dc.description.abstract | Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very good generalization performance in the over-parameterization regime, where DNNs can easily fit a random labeling of the training data. Very recently, a line of work explains in theory that with over-parameterization and proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs. However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs. The major limitation of most existing generalization bounds is that they are based on uniform convergence and are independent of the training algorithm. In this work, we derive an algorithm-dependent generalization error bound for deep ReLU networks, and show that under certain assumptions on the data distribution, gradient descent (GD) with proper random initialization is able to train a sufficiently over-parameterized DNN to achieve arbitrarily small generalization error. Our work sheds light on explaining the good generalization performance of over-parameterized deep neural networks. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the AAAI Conference on Artificial Intelligence | - |
dc.title | Generalization error bounds of gradient descent for learning over-parameterized deep ReLU networks | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.doi | 10.1609/aaai.v34i04.5736 | - |
dc.identifier.scopus | eid_2-s2.0-85093413639 | - |
dc.identifier.volume | 34 | - |
dc.identifier.issue | 4 | - |
dc.identifier.spage | 3349 | - |
dc.identifier.epage | 3356 | - |
dc.identifier.isi | WOS:000667722803052 | - |