Links for fulltext (may require subscription):
- Publisher Website: 10.1360/SSI-2021-0127
- Scopus: eid_2-s2.0-85127464169
Citations:
- Scopus: 0
Article: Distributed Metropolis algorithm: convergence condition and optimal parallel speed-up
Title | Distributed Metropolis algorithm: convergence condition and optimal parallel speed-up |
---|---|
Authors | Feng, Weiming; Yi, Yitong |
Keywords | Coupling; Distributed sampling; Markov chain Monte Carlo; Mixing time; Spin system |
Issue Date | 2022 |
Citation | Scientia Sinica Informationis, 2022, v. 52, n. 2, p. 287-313 |
Abstract | The Metropolis algorithm is a fundamental Markov chain Monte Carlo (MCMC) sampling technique, which can be used for drawing random samples from high-dimensional probability distributions (i.e., Gibbs distributions) represented by probabilistic graphical models. The traditional Metropolis algorithm is a sequential algorithm. A classic result regarding the fast convergence of the Metropolis algorithm is that when the Dobrushin-Shlosman condition for the Metropolis algorithm is satisfied, the algorithm converges rapidly within O(n log n) steps, where n is the number of variables. This paper studies a distributed variant of the Metropolis algorithm, called the local-Metropolis algorithm. We provide an analysis of the correctness and convergence of this new algorithm, and show that the algorithm always converges to the correct Gibbs distribution; moreover, for a natural class of triangle-free probabilistic graphical models, as long as the same Dobrushin-Shlosman condition is satisfied, the local-Metropolis algorithm converges within O(log n) rounds of distributed computing. Compared to the traditional sequential Metropolis algorithm, this achieves an asymptotically optimal Ω(n) factor of speed-up. Concrete applications include distributed sampling algorithms for graph coloring, the hardcore model, and the Ising model. |
Persistent Identifier | http://hdl.handle.net/10722/355015 |
ISSN | 1674-7267 |
SCImago Journal Rankings (2023) | 0.310 |
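As a point of reference for the sequential baseline described in the abstract, the following is a minimal illustrative sketch (not the paper's local-Metropolis algorithm) of single-site Metropolis updates for the Ising model on a graph; the function name and parameters are hypothetical, chosen for this example only.

```python
import math
import random

def metropolis_ising(adj, beta, steps, seed=0):
    """Sequential single-site Metropolis sampler for the Ising model.

    adj  : adjacency list {vertex: [neighbors]}
    beta : inverse temperature of the target Gibbs distribution
    steps: number of single-site update steps
    """
    rng = random.Random(seed)
    vertices = list(adj)
    # Start from a uniformly random spin assignment in {-1, +1}^n.
    spins = {v: rng.choice((-1, 1)) for v in vertices}
    for _ in range(steps):
        v = rng.choice(vertices)  # pick one variable uniformly at random
        # Energy change if the spin at v is flipped:
        # delta_H = 2 * s_v * (sum of neighboring spins)
        delta = 2 * spins[v] * sum(spins[u] for u in adj[v])
        # Accept the flip with the Metropolis probability min(1, e^{-beta * delta_H}).
        if rng.random() < min(1.0, math.exp(-beta * delta)):
            spins[v] = -spins[v]
    return spins
```

Under the Dobrushin-Shlosman condition discussed in the abstract, O(n log n) such sequential steps suffice for rapid convergence; the paper's contribution is a distributed variant that updates variables in parallel and mixes in O(log n) rounds.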
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Feng, Weiming | - |
dc.contributor.author | Yi, Yitong | - |
dc.date.accessioned | 2025-03-21T09:10:37Z | - |
dc.date.available | 2025-03-21T09:10:37Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Scientia Sinica Informationis, 2022, v. 52, n. 2, p. 287-313 | - |
dc.identifier.issn | 1674-7267 | - |
dc.identifier.uri | http://hdl.handle.net/10722/355015 | - |
dc.description.abstract | The Metropolis algorithm is a fundamental Markov chain Monte Carlo (MCMC) sampling technique, which can be used for drawing random samples from high-dimensional probability distributions (i.e., Gibbs distributions) represented by probabilistic graphical models. The traditional Metropolis algorithm is a sequential algorithm. A classic result regarding the fast convergence of the Metropolis algorithm is that when the Dobrushin-Shlosman condition for the Metropolis algorithm is satisfied, the algorithm converges rapidly within O(n log n) steps, where n is the number of variables. This paper studies a distributed variant of the Metropolis algorithm, called the local-Metropolis algorithm. We provide an analysis of the correctness and convergence of this new algorithm, and show that the algorithm always converges to the correct Gibbs distribution; moreover, for a natural class of triangle-free probabilistic graphical models, as long as the same Dobrushin-Shlosman condition is satisfied, the local-Metropolis algorithm converges within O(log n) rounds of distributed computing. Compared to the traditional sequential Metropolis algorithm, this achieves an asymptotically optimal Ω(n) factor of speed-up. Concrete applications include distributed sampling algorithms for graph coloring, the hardcore model, and the Ising model. | - |
dc.language | eng | - |
dc.relation.ispartof | Scientia Sinica Informationis | - |
dc.subject | Coupling | - |
dc.subject | Distributed sampling | - |
dc.subject | Markov chain Monte Carlo | - |
dc.subject | Mixing time | - |
dc.subject | Spin system | - |
dc.title | Distributed Metropolis algorithm: convergence condition and optimal parallel speed-up | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1360/SSI-2021-0127 | - |
dc.identifier.scopus | eid_2-s2.0-85127464169 | - |
dc.identifier.volume | 52 | - |
dc.identifier.issue | 2 | - |
dc.identifier.spage | 287 | - |
dc.identifier.epage | 313 | - |
dc.identifier.eissn | 2095-9486 | - |