Conference Paper: What can be sampled locally?

Title: What can be sampled locally?
Authors: Feng, Weiming; Sun, Yuxin; Yin, Yitong
Keywords: Distributed sampling algorithms
Gibbs sampling
Local computation
LOCAL model
Markov chain Monte Carlo
Issue Date: 2017
Citation: Proceedings of the Annual ACM Symposium on Principles of Distributed Computing, 2017, v. Part F129314, p. 121-130
Abstract: The local computation of Linial [FOCS'87] and Naor and Stockmeyer [STOC'93] concerns the question of whether a locally definable distributed computing problem can be solved locally: more specifically, whether, for a given local CSP (Constraint Satisfaction Problem), a CSP solution can be constructed by a distributed algorithm using local information. In this paper, we consider the problem of sampling a uniform CSP solution by distributed algorithms, and ask whether a locally definable joint distribution can be sampled from locally. More broadly, we consider sampling from Gibbs distributions induced by weighted local CSPs, especially Markov random fields (MRFs), in the LOCAL model. We give two Markov-chain-based distributed algorithms which we believe represent two fundamental approaches for sampling from Gibbs distributions via distributed algorithms. The first algorithm generically parallelizes the single-site sequential Markov chain by updating, in each step, the variables of a random independent set in parallel, and achieves an O(Δ log n) time upper bound in the LOCAL model, where Δ is the maximum degree, when Dobrushin's condition for the Gibbs distribution is satisfied. The second algorithm is a novel parallel Markov chain which proposes to update all variables simultaneously yet still provably converges with no bias. It surprisingly parallelizes an intrinsically sequential process: stabilizing to a joint distribution with massive local dependencies, and may achieve an optimal O(log n) time upper bound, independent of the maximum degree Δ, under a stronger mixing condition. We also show a strong Ω(diam) lower bound for sampling, in particular for sampling independent sets in graphs with maximum degree Δ ≥ 6. Independent sets are trivial to construct locally, and the sampling lower bound holds even when every node is aware of the entire graph. This gives a strong separation between sampling and constructing locally checkable labelings.
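To make the abstract's first algorithm concrete, here is a minimal centralized Python sketch of its round structure, specialized (as an assumption, since the abstract covers general MRFs) to the hardcore model: sampling weighted independent sets with fugacity λ, uniform when λ = 1. In each round, an independent set of vertices is selected at random (a simple Luby-style self-selection filter), and those vertices resample their values in parallel from their conditional distributions given their neighbors. The names `luby_glauber_step` and `sample` are illustrative, not taken from the paper, and the round count is an arbitrary placeholder rather than the paper's mixing-time bound.

```python
import random

def luby_glauber_step(adj, config, lam=1.0):
    """One parallel round: variables of a random independent set update together.

    adj maps vertex -> list of neighbors (vertices 0..n-1); config maps
    vertex -> 0/1 occupancy. Specialized to the hardcore model as an
    illustrative assumption.
    """
    n = len(adj)
    # Each vertex proposes itself with probability 1/2; keep only proposers
    # with no proposing neighbor, so the updaters form an independent set.
    proposed = {v for v in range(n) if random.random() < 0.5}
    updaters = {v for v in proposed if not any(u in proposed for u in adj[v])}
    new = dict(config)
    for v in updaters:
        # Resample x_v from its conditional distribution given the old
        # neighbor values: occupied with prob lam/(1+lam) if unblocked.
        if any(config[u] == 1 for u in adj[v]):
            new[v] = 0
        else:
            new[v] = 1 if random.random() < lam / (1 + lam) else 0
    return new

def sample(adj, rounds=200, lam=1.0):
    """Run the parallel chain from the all-unoccupied configuration."""
    config = {v: 0 for v in adj}
    for _ in range(rounds):
        config = luby_glauber_step(adj, config, lam)
    return config
```

Because the updaters form an independent set and each updater only occupies itself when no neighbor was occupied, every round preserves the independent-set invariant; in a distributed implementation each vertex would run its own step using only messages from its neighbors.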
Persistent Identifier: http://hdl.handle.net/10722/354969


DC Field: Value
dc.contributor.author: Feng, Weiming
dc.contributor.author: Sun, Yuxin
dc.contributor.author: Yin, Yitong
dc.date.accessioned: 2025-03-21T09:10:21Z
dc.date.available: 2025-03-21T09:10:21Z
dc.date.issued: 2017
dc.identifier.citation: Proceedings of the Annual ACM Symposium on Principles of Distributed Computing, 2017, v. Part F129314, p. 121-130
dc.identifier.uri: http://hdl.handle.net/10722/354969
dc.description.abstract: The local computation of Linial [FOCS'87] and Naor and Stockmeyer [STOC'93] concerns the question of whether a locally definable distributed computing problem can be solved locally: more specifically, whether, for a given local CSP (Constraint Satisfaction Problem), a CSP solution can be constructed by a distributed algorithm using local information. In this paper, we consider the problem of sampling a uniform CSP solution by distributed algorithms, and ask whether a locally definable joint distribution can be sampled from locally. More broadly, we consider sampling from Gibbs distributions induced by weighted local CSPs, especially Markov random fields (MRFs), in the LOCAL model. We give two Markov-chain-based distributed algorithms which we believe represent two fundamental approaches for sampling from Gibbs distributions via distributed algorithms. The first algorithm generically parallelizes the single-site sequential Markov chain by updating, in each step, the variables of a random independent set in parallel, and achieves an O(Δ log n) time upper bound in the LOCAL model, where Δ is the maximum degree, when Dobrushin's condition for the Gibbs distribution is satisfied. The second algorithm is a novel parallel Markov chain which proposes to update all variables simultaneously yet still provably converges with no bias. It surprisingly parallelizes an intrinsically sequential process: stabilizing to a joint distribution with massive local dependencies, and may achieve an optimal O(log n) time upper bound, independent of the maximum degree Δ, under a stronger mixing condition. We also show a strong Ω(diam) lower bound for sampling, in particular for sampling independent sets in graphs with maximum degree Δ ≥ 6. Independent sets are trivial to construct locally, and the sampling lower bound holds even when every node is aware of the entire graph. This gives a strong separation between sampling and constructing locally checkable labelings.
dc.language: eng
dc.relation.ispartof: Proceedings of the Annual ACM Symposium on Principles of Distributed Computing
dc.subject: Distributed sampling algorithms
dc.subject: Gibbs sampling
dc.subject: Local computation
dc.subject: LOCAL model
dc.subject: Markov chain Monte Carlo
dc.title: What can be sampled locally?
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3087801.3087815
dc.identifier.scopus: eid_2-s2.0-85027891279
dc.identifier.volume: Part F129314
dc.identifier.spage: 121
dc.identifier.epage: 130