
Conference Paper: Global convergence of Langevin dynamics based algorithms for nonconvex optimization

Title: Global convergence of Langevin dynamics based algorithms for nonconvex optimization
Authors
Issue Date: 2018
Citation: Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 3122-3133
Abstract: We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with n component functions. At the core of our analysis is a direct analysis of the ergodicity of the numerical approximations to Langevin dynamics, which leads to faster convergence rates. Specifically, we show that gradient Langevin dynamics (GLD) and stochastic gradient Langevin dynamics (SGLD) converge to the almost minimizer within Õ(nd/(λε)) and Õ(d^7/(λ^5 ε^5)) stochastic gradient evaluations, respectively, where d is the problem dimension and λ is the spectral gap of the Markov chain generated by GLD. Both results improve upon the best known gradient complexity results [45]. Furthermore, for the first time we prove the global convergence guarantee for variance-reduced stochastic gradient Langevin dynamics (SVRG-LD) to the almost minimizer within Õ(√n·d^5/(λ^4 ε^(5/2))) stochastic gradient evaluations, which outperforms the gradient complexities of GLD and SGLD in a wide regime. Our theoretical analyses shed some light on using Langevin dynamics based algorithms for nonconvex optimization with provable guarantees.
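For readers who want a concrete picture of the algorithms the abstract refers to, the following is a minimal, illustrative sketch (not code from the paper) of one GLD step and one SGLD step in NumPy. The function names, step size eta, inverse temperature beta, and the toy objective are assumptions made for illustration only.

    import numpy as np

    def gld_step(x, grad_f, eta, beta, rng):
        # Gradient Langevin dynamics (GLD) update:
        # x <- x - eta * grad f(x) + sqrt(2*eta/beta) * standard Gaussian noise
        noise = rng.standard_normal(x.shape)
        return x - eta * grad_f(x) + np.sqrt(2.0 * eta / beta) * noise

    def sgld_step(x, grad_fi, batch, eta, beta, rng):
        # Stochastic gradient Langevin dynamics (SGLD) update:
        # same as GLD, but with a mini-batch gradient estimate over the
        # finite-sum objective f(x) = (1/n) * sum_i f_i(x)
        g = np.mean([grad_fi(x, i) for i in batch], axis=0)
        noise = rng.standard_normal(x.shape)
        return x - eta * g + np.sqrt(2.0 * eta / beta) * noise

    # Toy usage on a hypothetical nonconvex objective
    # f(x) = sum_i cos(x_i) + 0.05 * ||x||^2  (gradient: -sin(x) + 0.1 * x)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10)
    grad_f = lambda z: -np.sin(z) + 0.1 * z
    for _ in range(1000):
        x = gld_step(x, grad_f, eta=1e-2, beta=10.0, rng=rng)

The injected Gaussian noise is what lets these iterates escape shallow local minima and approach the "almost minimizer" discussed above; with beta large and the noise term removed, the update reduces to plain (stochastic) gradient descent.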
Persistent Identifier: http://hdl.handle.net/10722/316515
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
ISI Accession Number ID: WOS:000461823303015

 

DC Field: Value
dc.contributor.author: Xu, Pan
dc.contributor.author: Zou, Difan
dc.contributor.author: Chen, Jinghui
dc.contributor.author: Gu, Quanquan
dc.date.accessioned: 2022-09-14T11:40:39Z
dc.date.available: 2022-09-14T11:40:39Z
dc.date.issued: 2018
dc.identifier.citation: Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 3122-3133
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/316515
dc.description.abstract: We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with n component functions. At the core of our analysis is a direct analysis of the ergodicity of the numerical approximations to Langevin dynamics, which leads to faster convergence rates. Specifically, we show that gradient Langevin dynamics (GLD) and stochastic gradient Langevin dynamics (SGLD) converge to the almost minimizer within Õ(nd/(λε)) and Õ(d^7/(λ^5 ε^5)) stochastic gradient evaluations, respectively, where d is the problem dimension and λ is the spectral gap of the Markov chain generated by GLD. Both results improve upon the best known gradient complexity results [45]. Furthermore, for the first time we prove the global convergence guarantee for variance-reduced stochastic gradient Langevin dynamics (SVRG-LD) to the almost minimizer within Õ(√n·d^5/(λ^4 ε^(5/2))) stochastic gradient evaluations, which outperforms the gradient complexities of GLD and SGLD in a wide regime. Our theoretical analyses shed some light on using Langevin dynamics based algorithms for nonconvex optimization with provable guarantees.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Global convergence of Langevin dynamics based algorithms for nonconvex optimization
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.scopus: eid_2-s2.0-85063009242
dc.identifier.volume: 2018-December
dc.identifier.spage: 3122
dc.identifier.epage: 3133
dc.identifier.isi: WOS:000461823303015
