Conference Paper: Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling
Field | Value |
---|---|
Title | Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling |
Authors | Zou, Difan; Xu, Pan; Gu, Quanquan |
Issue Date | 2021 |
Citation | 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021, 2021, p. 1152-1162 |
Abstract | We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave. At the core of our approach is a novel conductance analysis of SGLD using an auxiliary time-reversible Markov chain. Under certain conditions on the target distribution, we prove that $\tilde{O}(d^4 \epsilon^{-2})$ stochastic gradient evaluations suffice to guarantee $\epsilon$-sampling error in terms of the total variation distance, where $d$ is the problem dimension. This improves existing results on the convergence rate of SGLD [Raginsky et al., 2017, Xu et al., 2018]. We further show that, provided an additional Hessian Lipschitz condition on the log-density function, SGLD is guaranteed to achieve $\epsilon$-sampling error within $\tilde{O}(d^{15/4} \epsilon^{-3/2})$ stochastic gradient evaluations. Our proof technique provides a new way to study the convergence of Langevin-based algorithms, and sheds some light on the design of fast stochastic gradient-based sampling algorithms. |
Persistent Identifier | http://hdl.handle.net/10722/316647 |
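For readers unfamiliar with the algorithm analyzed in the abstract, the following is a minimal sketch of the standard SGLD update (after Welling and Teh, 2011), not code from the paper itself; `grad_log_p_batch`, the step size, and the iteration count are hypothetical placeholders chosen for illustration.

```python
import numpy as np

def sgld_sample(grad_log_p_batch, x0, step_size=1e-3, n_steps=10_000, rng=None):
    """Run SGLD from x0 and return all iterates as approximate samples.

    grad_log_p_batch: callable returning a stochastic (minibatch) estimate
    of the gradient of the log target density at a point x.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        g = grad_log_p_batch(x)  # stochastic gradient of the log-density
        noise = rng.standard_normal(x.shape)
        # Euler-Maruyama discretization of the Langevin diffusion, with a
        # minibatch gradient in place of the full gradient:
        # x_{k+1} = x_k + (eta/2) * g + sqrt(eta) * N(0, I)
        x = x + 0.5 * step_size * g + np.sqrt(step_size) * noise
        samples.append(x.copy())
    return np.array(samples)
```

The paper's contribution is a sharper convergence analysis of this update for non-log-concave targets, not a modification of the update rule itself; the sketch above is only meant to make the object of analysis concrete.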
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zou, Difan | - |
dc.contributor.author | Xu, Pan | - |
dc.contributor.author | Gu, Quanquan | - |
dc.date.accessioned | 2022-09-14T11:40:57Z | - |
dc.date.available | 2022-09-14T11:40:57Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021, 2021, p. 1152-1162 | - |
dc.identifier.uri | http://hdl.handle.net/10722/316647 | - |
dc.description.abstract | We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave. At the core of our approach is a novel conductance analysis of SGLD using an auxiliary time-reversible Markov chain. Under certain conditions on the target distribution, we prove that $\tilde{O}(d^4 \epsilon^{-2})$ stochastic gradient evaluations suffice to guarantee $\epsilon$-sampling error in terms of the total variation distance, where $d$ is the problem dimension. This improves existing results on the convergence rate of SGLD [Raginsky et al., 2017, Xu et al., 2018]. We further show that, provided an additional Hessian Lipschitz condition on the log-density function, SGLD is guaranteed to achieve $\epsilon$-sampling error within $\tilde{O}(d^{15/4} \epsilon^{-3/2})$ stochastic gradient evaluations. Our proof technique provides a new way to study the convergence of Langevin-based algorithms, and sheds some light on the design of fast stochastic gradient-based sampling algorithms. | -
dc.language | eng | - |
dc.relation.ispartof | 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021 | - |
dc.title | Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85124298758 | - |
dc.identifier.spage | 1152 | - |
dc.identifier.epage | 1162 | - |