Conference Paper: Mental sampling in multimodal representations

Title: Mental sampling in multimodal representations
Authors: Zhu, Jian Qiao; Sanborn, Adam N.; Chater, Nick
Issue Date: 2018
Citation: Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 5748-5759
Abstract: Both resources in the natural environment and concepts in a semantic space are distributed “patchily”, with large gaps in between the patches. To describe people's internal and external foraging behavior, various random walk models have been proposed. In particular, internal foraging has been modeled as sampling: in order to gather relevant information for making a decision, people draw samples from a mental representation using random-walk algorithms such as Markov chain Monte Carlo (MCMC). However, two common empirical observations argue against people using simple sampling algorithms such as MCMC for internal foraging. First, the distance between samples is often best described by a Lévy flight distribution: the probability of the distance between two successive locations follows a power-law on the distances. Second, humans and other animals produce long-range, slowly decaying autocorrelations characterized as 1/f-like fluctuations, instead of the 1/f² fluctuations produced by random walks. We propose that mental sampling is not done by simple MCMC, but is instead adapted to multimodal representations and is implemented by Metropolis-coupled Markov chain Monte Carlo (MC³), one of the first algorithms developed for sampling from multimodal distributions. MC³ involves running multiple Markov chains in parallel but with target distributions of different temperatures, and it swaps the states of the chains whenever a better location is found. Heated chains more readily traverse valleys in the probability landscape to propose moves to far-away peaks, while the colder chains make the local steps that explore the current peak or patch. We show that MC³ generates distances between successive samples that follow a Lévy flight distribution and produces 1/f-like autocorrelations, providing a single mechanistic account of these two puzzling empirical phenomena of internal foraging.
Persistent Identifier: http://hdl.handle.net/10722/367808
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
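
The abstract describes MC³ (Metropolis-coupled MCMC) informally: several Markov chains run in parallel against tempered versions of the same target distribution, and chains occasionally exchange states so that the heated chains can carry the cold chain across low-probability valleys between modes. The sketch below is a minimal Python illustration of that general scheme, not the authors' implementation; the bimodal toy target log_p, the temperature ladder, and the proposal step size are assumptions chosen only to keep the example self-contained, and the swap step uses the standard parallel-tempering Metropolis acceptance rule.

    # Minimal sketch of Metropolis-coupled MCMC (MC^3) on a toy bimodal target.
    # Assumptions for illustration: a two-Gaussian target, temperatures (1, 2, 4, 8),
    # Gaussian random-walk proposals with unit step size.
    import numpy as np

    rng = np.random.default_rng(0)

    def log_p(x):
        """Toy 'patchy' target: unnormalized log density of two well-separated Gaussians."""
        return np.logaddexp(-0.5 * (x - 5.0) ** 2, -0.5 * (x + 5.0) ** 2)

    def mc3(n_steps=20000, temps=(1.0, 2.0, 4.0, 8.0), step=1.0):
        """Run parallel chains targeting p(x)^(1/T); propose swaps between adjacent
        chains so heated chains can ferry the cold chain between distant modes."""
        k = len(temps)
        x = rng.normal(size=k)           # one current state per chain
        samples = np.empty(n_steps)      # record the cold (temperature 1) chain
        for t in range(n_steps):
            # Within-chain Metropolis update at each temperature
            for i in range(k):
                prop = x[i] + step * rng.normal()
                if np.log(rng.random()) < (log_p(prop) - log_p(x[i])) / temps[i]:
                    x[i] = prop
            # Propose swapping the states of a random adjacent pair of chains
            i = rng.integers(k - 1)
            log_accept = (log_p(x[i + 1]) - log_p(x[i])) * (1.0 / temps[i] - 1.0 / temps[i + 1])
            if np.log(rng.random()) < log_accept:
                x[i], x[i + 1] = x[i + 1], x[i]
            samples[t] = x[0]
        return samples

    samples = mc3()

Inspecting the cold-chain trace from such a run is one way to see the mixture of local steps within a mode and occasional long jumps between modes that the abstract appeals to.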

 

DC Field                   Value
dc.contributor.author      Zhu, Jian Qiao
dc.contributor.author      Sanborn, Adam N.
dc.contributor.author      Chater, Nick
dc.date.accessioned        2025-12-19T07:59:30Z
dc.date.available          2025-12-19T07:59:30Z
dc.date.issued             2018
dc.identifier.citation     Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 5748-5759
dc.identifier.issn         1049-5258
dc.identifier.uri          http://hdl.handle.net/10722/367808
dc.description.abstract    Both resources in the natural environment and concepts in a semantic space are distributed “patchily”, with large gaps in between the patches. To describe people's internal and external foraging behavior, various random walk models have been proposed. In particular, internal foraging has been modeled as sampling: in order to gather relevant information for making a decision, people draw samples from a mental representation using random-walk algorithms such as Markov chain Monte Carlo (MCMC). However, two common empirical observations argue against people using simple sampling algorithms such as MCMC for internal foraging. First, the distance between samples is often best described by a Lévy flight distribution: the probability of the distance between two successive locations follows a power-law on the distances. Second, humans and other animals produce long-range, slowly decaying autocorrelations characterized as 1/f-like fluctuations, instead of the 1/f² fluctuations produced by random walks. We propose that mental sampling is not done by simple MCMC, but is instead adapted to multimodal representations and is implemented by Metropolis-coupled Markov chain Monte Carlo (MC³), one of the first algorithms developed for sampling from multimodal distributions. MC³ involves running multiple Markov chains in parallel but with target distributions of different temperatures, and it swaps the states of the chains whenever a better location is found. Heated chains more readily traverse valleys in the probability landscape to propose moves to far-away peaks, while the colder chains make the local steps that explore the current peak or patch. We show that MC³ generates distances between successive samples that follow a Lévy flight distribution and produces 1/f-like autocorrelations, providing a single mechanistic account of these two puzzling empirical phenomena of internal foraging.
dc.language                eng
dc.relation.ispartof       Advances in Neural Information Processing Systems
dc.title                   Mental sampling in multimodal representations
dc.type                    Conference_Paper
dc.description.nature      link_to_subscribed_fulltext
dc.identifier.scopus       eid_2-s2.0-85064847948
dc.identifier.volume       2018-December
dc.identifier.spage        5748
dc.identifier.epage        5759
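
The abstract rests on two empirical signatures: a power-law (Lévy-flight) distribution of distances between successive samples, and a 1/f-like power spectrum rather than the 1/f² spectrum of a plain random walk. The snippet below sketches how one might check both signatures on a sequence of samples, such as the cold-chain output of the MC³ sketch above; the Hill-type tail estimator, the x_min cutoff, and the log-log spectral fit are simple illustrative choices, not the analyses reported in the paper.

    # Illustrative checks of the two signatures discussed in the abstract.
    import numpy as np

    def levy_tail_exponent(samples, x_min=0.5):
        """Maximum-likelihood (Hill-type) estimate of the power-law exponent of
        successive-sample distances |x_{t+1} - x_t| that exceed x_min."""
        d = np.abs(np.diff(samples))
        d = d[d >= x_min]
        return 1.0 + len(d) / np.sum(np.log(d / x_min))

    def spectral_slope(samples):
        """Least-squares slope of log power vs. log frequency; slopes near -1
        indicate 1/f-like fluctuations, near -2 a random-walk (1/f^2) spectrum."""
        x = samples - samples.mean()
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x))
        mask = freqs > 0
        slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
        return slope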
