Conference Paper: Algorithm-dependent generalization bounds for overparameterized deep residual networks
Title | Algorithm-dependent generalization bounds for overparameterized deep residual networks |
---|---|
Authors | Frei, Spencer; Cao, Yuan; Gu, Quanquan |
Issue Date | 2019 |
Citation | 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, 8-14 December 2019. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2020 |
Abstract | The skip-connections used in residual networks have become a standard architecture choice in deep learning due to the increased training stability and generalization performance with this architecture, although there has been limited theoretical understanding for this improvement. In this work, we analyze overparameterized deep residual networks trained by gradient descent following random initialization, and demonstrate that (i) the class of networks learned by gradient descent constitutes a small subset of the entire neural network function class, and (ii) this subclass of networks is sufficiently large to guarantee small training error. By showing (i) we are able to demonstrate that deep residual networks trained with gradient descent have a small generalization gap between training and test error, and together with (ii) this guarantees that the test error will be small. Our optimization and generalization guarantees require overparameterization that is only logarithmic in the depth of the network, while all known generalization bounds for deep non-residual networks have overparameterization requirements that are at least polynomial in the depth. This provides an explanation for why residual networks are preferable to non-residual ones. |
Persistent Identifier | http://hdl.handle.net/10722/303693 |
ISSN | 1049-5258 (2020 SCImago Journal Rankings: 1.399) |
ISI Accession Number ID | WOS:000535866906045 |
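The abstract above concerns deep residual networks, in which each layer adds a nonlinear correction to a skip connection, trained by plain gradient descent from random initialization. The following is a minimal sketch of that kind of setup; the width, depth, residual scaling factor, learning rate, squared loss, and fixed output layer are illustrative assumptions, not the paper's exact parameterization.

```python
# Minimal sketch: an overparameterized deep residual network trained by
# full-batch gradient descent from random Gaussian initialization.
# All concrete choices (width, depth, tau, learning rate, loss) are
# illustrative assumptions rather than the paper's setup.
import torch

width, depth, n = 256, 16, 32       # assumed network and sample sizes
tau = 0.1                           # assumed scaling of each residual update

# Random Gaussian initialization of the hidden-layer weights.
Ws = [(torch.randn(width, width) / width ** 0.5).requires_grad_()
      for _ in range(depth)]
a = torch.randn(width) / width ** 0.5   # output layer, held fixed here

X = torch.randn(n, width)               # synthetic inputs
y = torch.randn(n)                      # synthetic targets

def resnet(X):
    h = X
    for W in Ws:
        # Skip connection: each layer adds a small (tau-scaled) nonlinear
        # correction to the identity map.
        h = h + tau * torch.relu(h @ W.T)
    return h @ a

lr = 1e-2
for _ in range(200):                    # plain (full-batch) gradient descent
    loss = ((resnet(X) - y) ** 2).mean()
    grads = torch.autograd.grad(loss, Ws)
    with torch.no_grad():
        for W, g in zip(Ws, grads):
            W -= lr * g
```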
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Frei, Spencer | - |
dc.contributor.author | Cao, Yuan | - |
dc.contributor.author | Gu, Quanquan | - |
dc.date.accessioned | 2021-09-15T08:25:50Z | - |
dc.date.available | 2021-09-15T08:25:50Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, 8-14 December 2019. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2020 | -
dc.identifier.issn | 1049-5258 | - |
dc.identifier.uri | http://hdl.handle.net/10722/303693 | - |
dc.description.abstract | The skip-connections used in residual networks have become a standard architecture choice in deep learning due to the increased training stability and generalization performance with this architecture, although there has been limited theoretical understanding for this improvement. In this work, we analyze overparameterized deep residual networks trained by gradient descent following random initialization, and demonstrate that (i) the class of networks learned by gradient descent constitutes a small subset of the entire neural network function class, and (ii) this subclass of networks is sufficiently large to guarantee small training error. By showing (i) we are able to demonstrate that deep residual networks trained with gradient descent have a small generalization gap between training and test error, and together with (ii) this guarantees that the test error will be small. Our optimization and generalization guarantees require overparameterization that is only logarithmic in the depth of the network, while all known generalization bounds for deep non-residual networks have overparameterization requirements that are at least polynomial in the depth. This provides an explanation for why residual networks are preferable to non-residual ones. | - |
dc.language | eng | - |
dc.relation.ispartof | Advances in Neural Information Processing Systems 32 (NeurIPS 2019) | - |
dc.title | Algorithm-dependent generalization bounds for overparameterized deep residual networks | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85090170605 | - |
dc.identifier.isi | WOS:000535866906045 | - |