File Download
Links for fulltext (may require subscription):
- Scopus: eid_2-s2.0-85063009242
- WOS: WOS:000461823303015
Conference Paper: Global convergence of Langevin dynamics based algorithms for nonconvex optimization
Title | Global convergence of Langevin dynamics based algorithms for nonconvex optimization |
---|---|
Authors | Xu, Pan; Zou, Difan; Chen, Jinghui; Gu, Quanquan |
Issue Date | 2018 |
Citation | Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 3122-3133 |
Abstract | We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with n component functions. At the core of our analysis is a direct analysis of the ergodicity of the numerical approximations to Langevin dynamics, which leads to faster convergence rates. Specifically, we show that gradient Langevin dynamics (GLD) and stochastic gradient Langevin dynamics (SGLD) converge to the almost minimizer within Õ(nd/(λε)) and Õ(d^7/(λ^5 ε^5)) stochastic gradient evaluations respectively, where d is the problem dimension, and λ is the spectral gap of the Markov chain generated by GLD. Both results improve upon the best known gradient complexity results [45]. Furthermore, for the first time we prove the global convergence guarantee for variance reduced stochastic gradient Langevin dynamics (SVRG-LD) to the almost minimizer within Õ(√n d^5/(λ^4 ε^{5/2})) stochastic gradient evaluations, which outperforms the gradient complexities of GLD and SGLD in a wide regime. Our theoretical analyses shed some light on using Langevin dynamics based algorithms for nonconvex optimization with provable guarantees. |
Persistent Identifier | http://hdl.handle.net/10722/316515 |
ISSN | 1049-5258; 2020 SCImago Journal Rankings: 1.399 |
ISI Accession Number ID | WOS:000461823303015 |
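The SGLD algorithm analyzed in the abstract iterates a stochastic gradient step perturbed by Gaussian noise. As a minimal sketch (not the paper's implementation; the function and parameter names here are hypothetical), one SGLD update on a finite-sum objective with n component functions might look like:

```python
import numpy as np

def sgld_step(x, grad_fi, n, batch_size, eta, beta, rng):
    """One stochastic gradient Langevin dynamics (SGLD) update.

    grad_fi(x, i) returns the gradient of the i-th component function.
    The update is x <- x - eta * g + sqrt(2 * eta / beta) * xi, where g is
    a minibatch stochastic gradient and xi ~ N(0, I). eta is the step size
    and beta the inverse temperature (illustrative names, not the paper's).
    """
    idx = rng.choice(n, size=batch_size, replace=False)
    g = np.mean([grad_fi(x, i) for i in idx], axis=0)
    noise = rng.standard_normal(x.shape)
    return x - eta * g + np.sqrt(2.0 * eta / beta) * noise
```

GLD corresponds to the special case batch_size = n (the full gradient is used every step); SVRG-LD would additionally maintain a variance-reduced gradient estimate anchored at a periodically refreshed snapshot.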
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xu, Pan | - |
dc.contributor.author | Zou, Difan | - |
dc.contributor.author | Chen, Jinghui | - |
dc.contributor.author | Gu, Quanquan | - |
dc.date.accessioned | 2022-09-14T11:40:39Z | - |
dc.date.available | 2022-09-14T11:40:39Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 3122-3133 | - |
dc.identifier.issn | 1049-5258 | - |
dc.identifier.uri | http://hdl.handle.net/10722/316515 | - |
dc.description.abstract | We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with n component functions. At the core of our analysis is a direct analysis of the ergodicity of the numerical approximations to Langevin dynamics, which leads to faster convergence rates. Specifically, we show that gradient Langevin dynamics (GLD) and stochastic gradient Langevin dynamics (SGLD) converge to the almost minimizer within Õ(nd/(λε)) and Õ(d^7/(λ^5 ε^5)) stochastic gradient evaluations respectively, where d is the problem dimension, and λ is the spectral gap of the Markov chain generated by GLD. Both results improve upon the best known gradient complexity results [45]. Furthermore, for the first time we prove the global convergence guarantee for variance reduced stochastic gradient Langevin dynamics (SVRG-LD) to the almost minimizer within Õ(√n d^5/(λ^4 ε^{5/2})) stochastic gradient evaluations, which outperforms the gradient complexities of GLD and SGLD in a wide regime. Our theoretical analyses shed some light on using Langevin dynamics based algorithms for nonconvex optimization with provable guarantees. | - |
dc.language | eng | - |
dc.relation.ispartof | Advances in Neural Information Processing Systems | - |
dc.title | Global convergence of Langevin dynamics based algorithms for nonconvex optimization | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85063009242 | - |
dc.identifier.volume | 2018-December | - |
dc.identifier.spage | 3122 | - |
dc.identifier.epage | 3133 | - |
dc.identifier.isi | WOS:000461823303015 | - |