
Conference Paper: Stochastic variance-reduced Hamilton Monte Carlo methods

Title: Stochastic variance-reduced Hamilton Monte Carlo methods
Authors: Zou, Difan; Xu, Pan; Gu, Quanquan
Issue Date: 2018
Citation: 35th International Conference on Machine Learning, ICML 2018, 2018, v. 13, p. 9647-9656
Abstract: We propose a fast stochastic Hamilton Monte Carlo (HMC) method for sampling from a smooth and strongly log-concave distribution. At the core of our proposed method is a variance reduction technique inspired by recent advances in stochastic optimization. We show that, to achieve ϵ accuracy in 2-Wasserstein distance, our algorithm achieves Õ(n + κ^2 d^{1/2}/ϵ + κ^{4/3} d^{1/3} n^{2/3}/ϵ^{2/3}) gradient complexity (i.e., number of component gradient evaluations), which outperforms the state-of-the-art HMC and stochastic gradient HMC methods in a wide regime. We also extend our algorithm to sampling from smooth and general log-concave distributions, and prove the corresponding gradient complexity as well. Experiments on both synthetic and real data demonstrate the superior performance of our algorithm.
Persistent Identifier: http://hdl.handle.net/10722/316505
ISI Accession Number ID: WOS:000683379206017
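
To make the variance-reduction idea in the abstract concrete, here is a minimal Python sketch (not the authors' code): an SVRG-style semi-stochastic gradient plugged into a crude Euler discretization of underdamped Langevin dynamics, the continuous-time counterpart of HMC. All names and parameters (grad_f, eta, gamma, the epoch length m) are illustrative assumptions; see the paper for the actual update rule, step sizes, and constants.

import numpy as np

def svr_hmc_sketch(grad_f, n, dim, num_epochs=10, m=None,
                   eta=1e-3, gamma=1.0, rng=None):
    """Approximately sample from a density proportional to exp(-F(x)),
    where F(x) = sum_{i=1}^{n} f_i(x) and grad_f(i, x) returns the
    gradient of the i-th component f_i at x."""
    rng = np.random.default_rng() if rng is None else rng
    m = n if m is None else m                 # inner-loop (epoch) length
    x = np.zeros(dim)                         # position
    v = np.zeros(dim)                         # velocity / momentum
    samples = []
    for _ in range(num_epochs):
        x_snap = x.copy()                     # snapshot point for this epoch
        # Full gradient at the snapshot: one pass over all n components.
        full_grad = sum(grad_f(i, x_snap) for i in range(n))
        for _ in range(m):
            i = rng.integers(n)               # pick one component uniformly
            # Semi-stochastic gradient: unbiased for the full gradient,
            # with variance that shrinks as x approaches x_snap.
            g = n * (grad_f(i, x) - grad_f(i, x_snap)) + full_grad
            # Euler step of underdamped Langevin dynamics:
            # dv = -gamma*v dt - grad F dt + sqrt(2*gamma) dW,  dx = v dt.
            noise = np.sqrt(2.0 * gamma * eta) * rng.standard_normal(dim)
            v = v - eta * (gamma * v + g) + noise
            x = x + eta * v
            samples.append(x.copy())
    return np.array(samples)

As a quick sanity check under these assumptions, taking f_i(x) = 0.5 * np.dot(x - a[i], x - a[i]) for fixed points a[i] (i.e., grad_f = lambda i, x: x - a[i]) makes the target a Gaussian centered at the mean of the a[i], and the samples should concentrate there provided eta is small enough that gamma*eta < 1 and eta*n stays moderate.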


DC Field | Value | Language
dc.contributor.author | Zou, Difan | -
dc.contributor.author | Xu, Pan | -
dc.contributor.author | Gu, Quanquan | -
dc.date.accessioned | 2022-09-14T11:40:38Z | -
dc.date.available | 2022-09-14T11:40:38Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | 35th International Conference on Machine Learning, ICML 2018, 2018, v. 13, p. 9647-9656 | -
dc.identifier.uri | http://hdl.handle.net/10722/316505 | -
dc.description.abstract | We propose a fast stochastic Hamilton Monte Carlo (HMC) method for sampling from a smooth and strongly log-concave distribution. At the core of our proposed method is a variance reduction technique inspired by recent advances in stochastic optimization. We show that, to achieve ϵ accuracy in 2-Wasserstein distance, our algorithm achieves Õ(n + κ^2 d^{1/2}/ϵ + κ^{4/3} d^{1/3} n^{2/3}/ϵ^{2/3}) gradient complexity (i.e., number of component gradient evaluations), which outperforms the state-of-the-art HMC and stochastic gradient HMC methods in a wide regime. We also extend our algorithm to sampling from smooth and general log-concave distributions, and prove the corresponding gradient complexity as well. Experiments on both synthetic and real data demonstrate the superior performance of our algorithm. | -
dc.language | eng | -
dc.relation.ispartof | 35th International Conference on Machine Learning, ICML 2018 | -
dc.title | Stochastic variance-reduced Hamilton Monte Carlo methods | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_OA_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85057281190 | -
dc.identifier.volume | 13 | -
dc.identifier.spage | 9647 | -
dc.identifier.epage | 9656 | -
dc.identifier.isi | WOS:000683379206017 | -
