Conference Paper: Subsampled stochastic variance-reduced gradient Langevin dynamics

Title: Subsampled stochastic variance-reduced gradient Langevin dynamics
Authors: Zou, Difan; Xu, Pan; Gu, Quanquan
Issue Date: 2018
Citation: 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, 2018, v. 1, p. 508-518
Abstract: Stochastic variance-reduced gradient Langevin dynamics (SVRG-LD) was recently proposed to improve the performance of stochastic gradient Langevin dynamics (SGLD) by reducing the variance of the stochastic gradient. In this paper, we propose a variant of SVRG-LD, namely SVRG-LD+, which replaces the full gradient in each epoch with a subsampled one. We provide a nonasymptotic analysis of the convergence of SVRG-LD+ in 2-Wasserstein distance, and show that SVRG-LD+ enjoys a lower gradient complexity than SVRG-LD when the sample size is large or the target accuracy requirement is moderate. Our analysis directly implies a sharper convergence rate for SVRG-LD, which improves the existing convergence rate by a factor of κ^{1/6} n^{1/6}, where κ is the condition number of the log-density function and n is the sample size. Experiments on both synthetic and real-world datasets validate our theoretical results.
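The abstract states the algorithmic change concisely: SVRG-LD+ keeps the variance-reduced Langevin update of SVRG-LD but replaces the full per-epoch anchor ("snapshot") gradient with one averaged over a subsample. Below is a minimal sketch of such an update, assuming a sum-structured log-density F(x) = Σ_{i=1}^n f_i(x); the function name svrg_ld_plus, the grad_sum callback, and all step-size and batch-size defaults are illustrative assumptions, not the paper's pseudocode.

```python
import numpy as np

def svrg_ld_plus(grad_sum, n, dim, eta=1e-4, B=32, b=1024,
                 epoch_len=100, n_epochs=50, seed=0):
    # Sketch of an SVRG-LD+-style sampler (hypothetical signature).
    # grad_sum(x, idx): sum of per-example gradients grad f_i(x) over i in idx,
    # for a log-density F(x) = sum_{i=1}^n f_i(x).
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for _ in range(n_epochs):
        # SVRG-LD+ change: the per-epoch anchor gradient is averaged over
        # a subsample of size b instead of all n points (b = n would
        # recover plain SVRG-LD).
        snapshot = x.copy()
        idx = rng.choice(n, size=b, replace=False)
        g_anchor = grad_sum(snapshot, idx) / b
        for _ in range(epoch_len):
            # Semi-stochastic, variance-reduced estimate of grad F(x).
            batch = rng.choice(n, size=B, replace=False)
            g = n * ((grad_sum(x, batch) - grad_sum(snapshot, batch)) / B
                     + g_anchor)
            # Langevin step: gradient step plus Gaussian noise of
            # variance 2 * eta per coordinate.
            x = x - eta * g + np.sqrt(2.0 * eta) * rng.standard_normal(dim)
    return x
```

Setting b = n reduces the sketch to SVRG-LD; the abstract's claim is that choosing b < n lowers the overall gradient complexity when the sample size n is large or the target accuracy is moderate.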
Persistent Identifier: http://hdl.handle.net/10722/316608
ISI Accession Number ID: WOS:000493119200050

DC Field | Value | Language
dc.contributor.author | Zou, Difan | -
dc.contributor.author | Xu, Pan | -
dc.contributor.author | Gu, Quanquan | -
dc.date.accessioned | 2022-09-14T11:40:52Z | -
dc.date.available | 2022-09-14T11:40:52Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, 2018, v. 1, p. 508-518 | -
dc.identifier.uri | http://hdl.handle.net/10722/316608 | -
dc.description.abstract | Stochastic variance-reduced gradient Langevin dynamics (SVRG-LD) was recently proposed to improve the performance of stochastic gradient Langevin dynamics (SGLD) by reducing the variance of the stochastic gradient. In this paper, we propose a variant of SVRG-LD, namely SVRG-LD+, which replaces the full gradient in each epoch with a subsampled one. We provide a nonasymptotic analysis of the convergence of SVRG-LD+ in 2-Wasserstein distance, and show that SVRG-LD+ enjoys a lower gradient complexity than SVRG-LD when the sample size is large or the target accuracy requirement is moderate. Our analysis directly implies a sharper convergence rate for SVRG-LD, which improves the existing convergence rate by a factor of κ^{1/6} n^{1/6}, where κ is the condition number of the log-density function and n is the sample size. Experiments on both synthetic and real-world datasets validate our theoretical results. | -
dc.language | eng | -
dc.relation.ispartof | 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018 | -
dc.title | Subsampled stochastic variance-reduced gradient Langevin dynamics | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_OA_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85059342905 | -
dc.identifier.volume | 1 | -
dc.identifier.spage | 508 | -
dc.identifier.epage | 518 | -
dc.identifier.isi | WOS:000493119200050 | -
