Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1145/3087801.3087815
- Scopus: eid_2-s2.0-85027891279
Conference Paper: What can be sampled locally?
Field | Value
---|---
Title | What can be sampled locally?
Authors | Feng, Weiming; Sun, Yuxin; Yin, Yitong
Keywords | Distributed sampling algorithms; Gibbs sampling; Local computation; LOCAL model; Markov chain Monte Carlo
Issue Date | 2017
Citation | Proceedings of the Annual ACM Symposium on Principles of Distributed Computing, 2017, v. Part F129314, p. 121-130
Abstract | The local computation of Linial [FOCS'87] and Naor and Stockmeyer [STOC'93] concerns the question of whether a locally definable distributed computing problem can be solved locally: more specifically, whether, for a given local CSP (constraint satisfaction problem), a CSP solution can be constructed by a distributed algorithm using local information. In this paper, we consider the problem of sampling a uniform CSP solution by distributed algorithms, and ask whether a locally definable joint distribution can be sampled from locally. More broadly, we consider sampling from Gibbs distributions induced by weighted local CSPs, especially Markov random fields (MRFs), in the LOCAL model. We give two Markov-chain-based distributed algorithms which we believe represent two fundamental approaches to sampling from Gibbs distributions via distributed algorithms. The first algorithm generically parallelizes the single-site sequential Markov chain by updating, in each step, the variables from a random independent set in parallel; it achieves an O(Δ log n) time upper bound in the LOCAL model, where Δ is the maximum degree, whenever Dobrushin's condition for the Gibbs distribution is satisfied. The second algorithm is a novel parallel Markov chain which proposes to update all variables simultaneously yet is still guaranteed to converge correctly with no bias. It surprisingly parallelizes an intrinsically sequential process, stabilizing to a joint distribution with massive local dependencies, and achieves an optimal O(log n) time upper bound, independent of the maximum degree Δ, under a stronger mixing condition. We also show a strong Ω(diam) lower bound for sampling, in particular for sampling independent sets in graphs with maximum degree Δ ≥ 6. Independent sets are trivial to construct locally, and the sampling lower bound holds even when every node is aware of the entire graph. This gives a strong separation between sampling and constructing locally checkable labelings.
Persistent Identifier | http://hdl.handle.net/10722/354969
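The first algorithm in the abstract, which parallelizes single-site Glauber dynamics by updating a random independent set of variables per round, can be sketched as a centralized simulation. This is a minimal illustration under assumptions, not the paper's algorithm as stated: the target distribution (uniform proper q-colorings, which satisfy a Dobrushin-type condition when q > 2Δ), the Luby-style selection rule, and the helper name `luby_glauber_colorings` are all choices made for the sketch.

```python
import random

def luby_glauber_colorings(adj, q, steps, seed=0):
    """Centralized sketch: parallelize single-site Glauber dynamics by
    updating a random independent set of vertices in each round.
    Example target: uniform proper q-colorings of the graph `adj`
    (dict: vertex -> set of neighbours); q > 2*Delta is the regime
    where a Dobrushin-type condition holds for colorings."""
    rng = random.Random(seed)
    # Greedy initial proper coloring (a free color exists since q > Delta).
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(q) if c not in used)
    for _ in range(steps):
        # Luby-style selection: each vertex draws a random rank; vertices
        # beating all their neighbours form a random independent set.
        rank = {v: rng.random() for v in adj}
        chosen = [v for v in adj if all(rank[v] < rank[u] for u in adj[v])]
        snapshot = dict(color)  # neighbours' colors as seen this round
        # All chosen vertices update in parallel: each resamples its color
        # uniformly from the colors unused by its neighbours. Because the
        # chosen set is independent, the coloring stays proper.
        for v in chosen:
            avail = [c for c in range(q)
                     if all(snapshot[u] != c for u in adj[v])]
            color[v] = rng.choice(avail)
    return color

# Usage: a 6-cycle with q = 5 colors; the output is always a proper coloring.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
col = luby_glauber_colorings(adj, q=5, steps=100)
assert all(col[v] != col[u] for v in adj for u in adj[v])
```

Each round is one step of the parallel chain: the random-rank selection mirrors a single phase of Luby's algorithm, and each selected vertex performs an ordinary single-site Gibbs update, so no two adjacent variables are ever resampled simultaneously.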
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Feng, Weiming | - |
dc.contributor.author | Sun, Yuxin | - |
dc.contributor.author | Yin, Yitong | - |
dc.date.accessioned | 2025-03-21T09:10:21Z | - |
dc.date.available | 2025-03-21T09:10:21Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Proceedings of the Annual ACM Symposium on Principles of Distributed Computing, 2017, v. Part F129314, p. 121-130 | - |
dc.identifier.uri | http://hdl.handle.net/10722/354969 | - |
dc.description.abstract | The local computation of Linial [FOCS'87] and Naor and Stockmeyer [STOC'93] concerns the question of whether a locally definable distributed computing problem can be solved locally: more specifically, whether, for a given local CSP (constraint satisfaction problem), a CSP solution can be constructed by a distributed algorithm using local information. In this paper, we consider the problem of sampling a uniform CSP solution by distributed algorithms, and ask whether a locally definable joint distribution can be sampled from locally. More broadly, we consider sampling from Gibbs distributions induced by weighted local CSPs, especially Markov random fields (MRFs), in the LOCAL model. We give two Markov-chain-based distributed algorithms which we believe represent two fundamental approaches to sampling from Gibbs distributions via distributed algorithms. The first algorithm generically parallelizes the single-site sequential Markov chain by updating, in each step, the variables from a random independent set in parallel; it achieves an O(Δ log n) time upper bound in the LOCAL model, where Δ is the maximum degree, whenever Dobrushin's condition for the Gibbs distribution is satisfied. The second algorithm is a novel parallel Markov chain which proposes to update all variables simultaneously yet is still guaranteed to converge correctly with no bias. It surprisingly parallelizes an intrinsically sequential process, stabilizing to a joint distribution with massive local dependencies, and achieves an optimal O(log n) time upper bound, independent of the maximum degree Δ, under a stronger mixing condition. We also show a strong Ω(diam) lower bound for sampling, in particular for sampling independent sets in graphs with maximum degree Δ ≥ 6. Independent sets are trivial to construct locally, and the sampling lower bound holds even when every node is aware of the entire graph. This gives a strong separation between sampling and constructing locally checkable labelings. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the Annual ACM Symposium on Principles of Distributed Computing | - |
dc.subject | Distributed sampling algorithms | - |
dc.subject | Gibbs sampling | - |
dc.subject | Local computation | - |
dc.subject | LOCAL model | - |
dc.subject | Markov chain Monte Carlo | - |
dc.title | What can be sampled locally? | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3087801.3087815 | - |
dc.identifier.scopus | eid_2-s2.0-85027891279 | - |
dc.identifier.volume | Part F129314 | - |
dc.identifier.spage | 121 | - |
dc.identifier.epage | 130 | - |