Conference Paper: Information Seeking as Chasing Anticipated Prediction Errors
| Title | Information Seeking as Chasing Anticipated Prediction Errors |
|---|---|
| Authors | Zhu, Jian Qiao; Xiang, Wendi; Ludvig, Elliot A. |
| Keywords | anticipated prediction errors; early resolution of uncertainty; forward sampling; information seeking |
| Issue Date | 2017 |
| Citation | CogSci 2017: Proceedings of the 39th Annual Meeting of the Cognitive Science Society: Computational Foundations of Cognition, 2017, p. 3658-3663 |
| Abstract | When faced with delayed, uncertain rewards, humans and other animals usually prefer to know the eventual outcomes in advance. This preference for cues providing advance information can lead to seemingly suboptimal choices, where less reward is preferred over more reward. Here, we introduce a reinforcement-learning model of this behavior, the anticipated prediction error (APE) model, based on the idea that prediction errors themselves can be rewarding. As a result, animals will sometimes pick options that yield large prediction errors, even when the expected rewards are smaller. We compare the APE model against an alternative information-bonus model, where information itself is viewed as rewarding. These models are evaluated against a newly collected dataset with human participants. The APE model fits the data as well or better than the other models, with fewer free parameters, thus providing a more robust and parsimonious account of the suboptimal choices. These results suggest that anticipated prediction errors can be an important signal underpinning decision making. |
| Persistent Identifier | http://hdl.handle.net/10722/367826 |
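The abstract's core idea, that prediction errors themselves act as rewards, so an option's subjective value is its expected reward plus a weighted bonus for the prediction errors it is anticipated to generate, can be illustrated with a toy sketch. This is not the paper's implementation: the function names, the absolute-error form of the bonus, the weight `w_ape`, and the specific payoffs are all illustrative assumptions.

```python
def anticipated_pe(outcomes, probs, value_estimate):
    """Expected magnitude of the prediction error felt when the
    outcome is revealed, relative to the learned value estimate.
    (Illustrative form; the paper's exact formulation may differ.)"""
    return sum(p * abs(o - value_estimate) for o, p in zip(outcomes, probs))

def option_value(outcomes, probs, w_ape):
    """Subjective value = expected reward + w_ape * anticipated PE."""
    ev = sum(p * o for o, p in zip(outcomes, probs))
    # Assume learning has converged, so the value estimate equals the EV.
    return ev + w_ape * anticipated_pe(outcomes, probs, ev)

# Informative option: an advance cue reveals a 50/50 outcome (0 or 1),
# producing large prediction errors at cue time; EV = 0.5.
informative = option_value(outcomes=[0.0, 1.0], probs=[0.5, 0.5], w_ape=0.5)

# Non-informative option: outcome always equals its mean, so there is
# no prediction error; EV = 0.55, slightly higher.
noninformative = option_value(outcomes=[0.55], probs=[1.0], w_ape=0.5)

# The APE bonus outweighs the EV gap, reproducing the "suboptimal"
# preference for less reward with advance information.
assert informative > noninformative
```

With `w_ape = 0.5` the informative option scores 0.5 + 0.5 * 0.5 = 0.75 against 0.55, so the model prefers the lower-EV but information-rich option, matching the behavioral pattern the abstract describes.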
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Zhu, Jian Qiao | - |
| dc.contributor.author | Xiang, Wendi | - |
| dc.contributor.author | Ludvig, Elliot A. | - |
| dc.date.accessioned | 2025-12-19T07:59:39Z | - |
| dc.date.available | 2025-12-19T07:59:39Z | - |
| dc.date.issued | 2017 | - |
| dc.identifier.citation | Cogsci 2017 Proceedings of the 39th Annual Meeting of the Cognitive Science Society Computational Foundations of Cognition, 2017, p. 3658-3663 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/367826 | - |
| dc.description.abstract | When faced with delayed, uncertain rewards, humans and other animals usually prefer to know the eventual outcomes in advance. This preference for cues providing advance information can lead to seemingly suboptimal choices, where less reward is preferred over more reward. Here, we introduce a reinforcement-learning model of this behavior, the anticipated prediction error (APE) model, based on the idea that prediction errors themselves can be rewarding. As a result, animals will sometimes pick options that yield large prediction errors, even when the expected rewards are smaller. We compare the APE model against an alternative information-bonus model, where information itself is viewed as rewarding. These models are evaluated against a newly collected dataset with human participants. The APE model fits the data as well or better than the other models, with fewer free parameters, thus providing a more robust and parsimonious account of the suboptimal choices. These results suggest that anticipated prediction errors can be an important signal underpinning decision making. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Cogsci 2017 Proceedings of the 39th Annual Meeting of the Cognitive Science Society Computational Foundations of Cognition | - |
| dc.subject | anticipated prediction errors | - |
| dc.subject | early resolution of uncertainty | - |
| dc.subject | forward sampling | - |
| dc.subject | information seeking | - |
| dc.title | Information Seeking as Chasing Anticipated Prediction Errors | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.scopus | eid_2-s2.0-85085842138 | - |
| dc.identifier.spage | 3658 | - |
| dc.identifier.epage | 3663 | - |
