Conference Paper: An improved analysis of training over-parameterized deep neural networks

Title: An improved analysis of training over-parameterized deep neural networks
Authors: Zou, Difan; Gu, Quanquan
Issue Date: 2019
Citation: Advances in Neural Information Processing Systems, 2019, v. 32
Abstract: A recent line of research has shown that gradient-based algorithms with random initialization can converge to the global minima of the training loss for overparameterized (i.e., sufficiently wide) deep neural networks. However, the condition on the width of the neural network to ensure the global convergence is very stringent, which is often a high-degree polynomial in the training sample size n (e.g., O(n^24)). In this paper, we provide an improved analysis of the global convergence of (stochastic) gradient descent for training deep neural networks, which only requires a milder over-parameterization condition than previous work in terms of the training sample size and other problem-dependent parameters. The main technical contributions of our analysis include (a) a tighter gradient lower bound that leads to a faster convergence of the algorithm, and (b) a sharper characterization of the trajectory length of the algorithm. By specializing our result to two-layer (i.e., one-hidden-layer) neural networks, it also provides a milder over-parameterization condition than the best-known result in prior work.
Persistent Identifier: http://hdl.handle.net/10722/316554
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
ISI Accession Number ID: WOS:000534424302009
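The abstract above concerns (stochastic) gradient descent, started from random Gaussian initialization, driving the training loss of a sufficiently wide ("over-parameterized") network toward a global minimum. The following NumPy sketch only illustrates that setting for the two-layer (one-hidden-layer) case and is not the authors' code: the sample size n, input dimension d, hidden width m, step size, and data below are placeholder assumptions, and only the first-layer weights are trained while the output layer stays fixed, a common simplification in this literature.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes: n training samples, input dimension d,
# hidden width m chosen much larger than n (the over-parameterized regime).
n, d, m = 20, 5, 2048
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs (a common simplifying assumption)
y = rng.choice([-1.0, 1.0], size=n)             # arbitrary +/-1 labels

# Gaussian random initialization of the hidden layer; the output layer is fixed
# at +/-1/sqrt(m) and only W is trained.
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(W, Xb):
    """Two-layer ReLU network f(x) = a . ReLU(W x), evaluated on a batch."""
    H = np.maximum(Xb @ W.T, 0.0)    # hidden activations, shape (batch, m)
    return H @ a, H

def training_loss(W):
    pred, _ = forward(W, X)
    return 0.5 * np.mean((pred - y) ** 2)

lr, epochs, batch = 0.5, 200, 5
print(f"initial training loss: {training_loss(W):.4f}")
for _ in range(epochs):
    for idx in np.split(rng.permutation(n), n // batch):   # mini-batch SGD
        Xb, yb = X[idx], y[idx]
        pred, H = forward(W, Xb)
        err = (pred - yb) / len(idx)         # gradient of the batch mean squared error w.r.t. predictions
        ind = (H > 0.0).astype(X.dtype)      # ReLU derivative (indicator of active units)
        grad_W = ((err[:, None] * ind) * a[None, :]).T @ Xb   # shape (m, d)
        W -= lr * grad_W
print(f"final training loss:   {training_loss(W):.4f}")

With the width far exceeding the sample size, the printed training loss typically decreases toward zero, which is the global-convergence phenomenon the paper analyzes; the paper's contribution is a milder requirement on how large m must be for this to hold.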

 

DC Field: Value
dc.contributor.author: Zou, Difan
dc.contributor.author: Gu, Quanquan
dc.date.accessioned: 2022-09-14T11:40:44Z
dc.date.available: 2022-09-14T11:40:44Z
dc.date.issued: 2019
dc.identifier.citation: Advances in Neural Information Processing Systems, 2019, v. 32
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/316554
dc.description.abstract: A recent line of research has shown that gradient-based algorithms with random initialization can converge to the global minima of the training loss for overparameterized (i.e., sufficiently wide) deep neural networks. However, the condition on the width of the neural network to ensure the global convergence is very stringent, which is often a high-degree polynomial in the training sample size n (e.g., O(n^24)). In this paper, we provide an improved analysis of the global convergence of (stochastic) gradient descent for training deep neural networks, which only requires a milder over-parameterization condition than previous work in terms of the training sample size and other problem-dependent parameters. The main technical contributions of our analysis include (a) a tighter gradient lower bound that leads to a faster convergence of the algorithm, and (b) a sharper characterization of the trajectory length of the algorithm. By specializing our result to two-layer (i.e., one-hidden-layer) neural networks, it also provides a milder over-parameterization condition than the best-known result in prior work.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: An improved analysis of training over-parameterized deep neural networks
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.scopus: eid_2-s2.0-85090172477
dc.identifier.volume: 32
dc.identifier.isi: WOS:000534424302009

Export via OAI-PMH Interface in XML Formats


OR


Export to Other Non-XML Formats