Conference Paper: Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling

Title: Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling
Authors: Zou, Difan; Xu, Pan; Gu, Quanquan
Issue Date: 2021
Citation: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021, 2021, p. 1152-1162
Abstract: We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave. At the core of our approach is a novel conductance analysis of SGLD using an auxiliary time-reversible Markov chain. Under certain conditions on the target distribution, we prove that $\tilde{O}(d^4 \epsilon^{-2})$ stochastic gradient evaluations suffice to guarantee $\epsilon$-sampling error in terms of the total variation distance, where $d$ is the problem dimension. This improves existing results on the convergence rate of SGLD [Raginsky et al., 2017, Xu et al., 2018]. We further show that, provided an additional Hessian Lipschitz condition on the log-density function, SGLD is guaranteed to achieve $\epsilon$-sampling error within $\tilde{O}(d^{15/4} \epsilon^{-3/2})$ stochastic gradient evaluations. Our proof technique provides a new way to study the convergence of Langevin-based algorithms, and sheds some light on the design of fast stochastic gradient-based sampling algorithms.
Persistent Identifier: http://hdl.handle.net/10722/316647
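For context on the algorithm analyzed in the abstract above, here is a minimal sketch of the SGLD update: the iterate moves along a stochastic estimate of the gradient of the negative log-density and is perturbed by Gaussian noise scaled by the step size. The target density, step size, and gradient estimator below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sgld_sample(grad_estimate, theta0, eta, num_steps, rng=None):
    """Minimal SGLD sketch: theta <- theta - eta * g(theta) + sqrt(2 * eta) * N(0, I),
    where g(theta) is a stochastic estimate of grad f(theta) for target density exp(-f)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    samples = []
    for _ in range(num_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta - eta * grad_estimate(theta) + np.sqrt(2.0 * eta) * noise
        samples.append(theta.copy())
    return np.asarray(samples)

# Illustrative usage on a non-log-concave target: f(x) = x^4/4 - x^2/2 (a double well),
# using the exact gradient in place of a mini-batch stochastic estimate.
if __name__ == "__main__":
    grad_f = lambda x: x ** 3 - x
    draws = sgld_sample(grad_f, theta0=np.zeros(1), eta=1e-2, num_steps=5000)
    print(draws[-5:].ravel())
```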

 

DC Field: Value
dc.contributor.author: Zou, Difan
dc.contributor.author: Xu, Pan
dc.contributor.author: Gu, Quanquan
dc.date.accessioned: 2022-09-14T11:40:57Z
dc.date.available: 2022-09-14T11:40:57Z
dc.date.issued: 2021
dc.identifier.citation: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021, 2021, p. 1152-1162
dc.identifier.uri: http://hdl.handle.net/10722/316647
dc.description.abstract: We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave. At the core of our approach is a novel conductance analysis of SGLD using an auxiliary time-reversible Markov chain. Under certain conditions on the target distribution, we prove that $\tilde{O}(d^4 \epsilon^{-2})$ stochastic gradient evaluations suffice to guarantee $\epsilon$-sampling error in terms of the total variation distance, where $d$ is the problem dimension. This improves existing results on the convergence rate of SGLD [Raginsky et al., 2017, Xu et al., 2018]. We further show that, provided an additional Hessian Lipschitz condition on the log-density function, SGLD is guaranteed to achieve $\epsilon$-sampling error within $\tilde{O}(d^{15/4} \epsilon^{-3/2})$ stochastic gradient evaluations. Our proof technique provides a new way to study the convergence of Langevin-based algorithms, and sheds some light on the design of fast stochastic gradient-based sampling algorithms.
dc.language: eng
dc.relation.ispartof: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021
dc.title: Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.scopus: eid_2-s2.0-85124298758
dc.identifier.spage: 1152
dc.identifier.epage: 1162
