Conference Paper: Sampling from non-log-concave distributions via stochastic variance-reduced gradient Langevin dynamics

Title: Sampling from non-log-concave distributions via stochastic variance-reduced gradient Langevin dynamics
Authors: Zou, Difan; Xu, Pan; Gu, Quanquan
Issue Date: 2020
Citation: AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics, 2020
Abstract: We study stochastic variance reduction-based Langevin dynamics algorithms, SVRG-LD and SAGA-LD (Dubey et al., 2016), for sampling from non-log-concave distributions. Under certain assumptions on the log density function, we establish convergence guarantees for SVRG-LD and SAGA-LD in 2-Wasserstein distance. More specifically, we show that both SVRG-LD and SAGA-LD require Õ(n + n^{3/4}/ε^2 + n^{1/2}/ε^4) · exp(Õ(d + γ)) stochastic gradient evaluations to achieve ε-accuracy in 2-Wasserstein distance, which outperforms the Õ(n/ε^4) · exp(Õ(d + γ)) gradient complexity achieved by the Langevin Monte Carlo method (Raginsky et al., 2017). Experiments on synthetic and real data back up our theory.
Persistent Identifier: http://hdl.handle.net/10722/316526
ISI Accession Number: WOS:000509687902101
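
The abstract describes sampling with stochastic variance-reduced gradient Langevin dynamics. As a rough illustration of the SVRG-LD scheme it builds on (Dubey et al., 2016), the sketch below combines the SVRG semi-stochastic gradient of f(x) = Σᵢ fᵢ(x) with a Langevin step targeting π(x) ∝ exp(−f(x)). Function names, step size, and loop lengths here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def svrg_ld(grad_fi, n, x0, eta=1e-2, n_epochs=100, m=None, batch=1, rng=None):
    """Sketch of SVRG-LD: Langevin dynamics driven by a variance-reduced
    stochastic gradient of f(x) = sum_{i=1}^n f_i(x), so that the iterates
    approximately sample from pi(x) proportional to exp(-f(x))."""
    rng = np.random.default_rng() if rng is None else rng
    m = n if m is None else m                      # inner-loop length
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_epochs):
        snap = x.copy()                            # snapshot point
        full_grad = sum(grad_fi(i, snap) for i in range(n))  # O(n) anchor gradient
        for _ in range(m):
            idx = rng.integers(0, n, size=batch)
            # semi-stochastic gradient: unbiased, with variance shrinking
            # as x approaches the snapshot point
            g = full_grad + (n / batch) * sum(
                grad_fi(i, x) - grad_fi(i, snap) for i in idx)
            # Langevin update: gradient step plus injected Gaussian noise
            x = x - eta * g + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
            samples.append(x.copy())
    return np.array(samples)
```

For example, with grad_fi = lambda i, x: x / n (so f(x) = ‖x‖²/2), the iterates approximately sample from a standard Gaussian, up to the usual discretization bias of the Langevin step.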

 

DC Field: Value
dc.contributor.author: Zou, Difan
dc.contributor.author: Xu, Pan
dc.contributor.author: Gu, Quanquan
dc.date.accessioned: 2022-09-14T11:40:40Z
dc.date.available: 2022-09-14T11:40:40Z
dc.date.issued: 2020
dc.identifier.citation: AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics, 2020
dc.identifier.uri: http://hdl.handle.net/10722/316526
dc.description.abstract: We study stochastic variance reduction-based Langevin dynamics algorithms, SVRG-LD and SAGA-LD (Dubey et al., 2016), for sampling from non-log-concave distributions. Under certain assumptions on the log density function, we establish convergence guarantees for SVRG-LD and SAGA-LD in 2-Wasserstein distance. More specifically, we show that both SVRG-LD and SAGA-LD require Õ(n + n^{3/4}/ε^2 + n^{1/2}/ε^4) · exp(Õ(d + γ)) stochastic gradient evaluations to achieve ε-accuracy in 2-Wasserstein distance, which outperforms the Õ(n/ε^4) · exp(Õ(d + γ)) gradient complexity achieved by the Langevin Monte Carlo method (Raginsky et al., 2017). Experiments on synthetic and real data back up our theory.
dc.language: eng
dc.relation.ispartof: AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics
dc.title: Sampling from non-log-concave distributions via stochastic variance-reduced gradient Langevin dynamics
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85071155279
dc.identifier.isi: WOS:000509687902101
