Article: People judge others more harshly after talking to bots

Title: People judge others more harshly after talking to bots
Authors: Tey, Kian Siong; Mazar, Asaf; Tomaino, Geoff; Duckworth, Angela L; Ungar, Lyle H
Keywords: artificial intelligence; human-AI interaction; judgment; spillover
Issue Date: 1-Sep-2024
Publisher: National Academy of Sciences
Citation: PNAS Nexus, 2024, v. 3, n. 9
Abstract: People now commonly interact with Artificial Intelligence (AI) agents. How do these interactions shape how humans perceive each other? In two preregistered studies (total N = 1,261), we show that people evaluate other humans more harshly after interacting with an AI (compared with an unrelated purported human). In Study 1, participants who worked on a creative task with AIs (versus purported humans) subsequently rated another purported human's work more negatively. Study 2 replicated this effect and demonstrated that the results hold even when participants believed their evaluation would not be shared with the purported human. Exploratory analyses of participants' conversations show that prior to their human evaluations they were more demanding, more instrumental and displayed less positive affect towards AIs (versus purported humans). These findings point to a potentially worrisome side effect of the exponential rise in human-AI interactions.
Persistent Identifier: http://hdl.handle.net/10722/366297

 

DC Field: Value
dc.contributor.author: Tey, Kian Siong
dc.contributor.author: Mazar, Asaf
dc.contributor.author: Tomaino, Geoff
dc.contributor.author: Duckworth, Angela L
dc.contributor.author: Ungar, Lyle H
dc.date.accessioned: 2025-11-25T04:18:37Z
dc.date.available: 2025-11-25T04:18:37Z
dc.date.issued: 2024-09-01
dc.identifier.citation: PNAS Nexus, 2024, v. 3, n. 9
dc.identifier.uri: http://hdl.handle.net/10722/366297
dc.description.abstract: People now commonly interact with Artificial Intelligence (AI) agents. How do these interactions shape how humans perceive each other? In two preregistered studies (total N = 1,261), we show that people evaluate other humans more harshly after interacting with an AI (compared with an unrelated purported human). In Study 1, participants who worked on a creative task with AIs (versus purported humans) subsequently rated another purported human's work more negatively. Study 2 replicated this effect and demonstrated that the results hold even when participants believed their evaluation would not be shared with the purported human. Exploratory analyses of participants' conversations show that prior to their human evaluations they were more demanding, more instrumental and displayed less positive affect towards AIs (versus purported humans). These findings point to a potentially worrisome side effect of the exponential rise in human-AI interactions.
dc.language: eng
dc.publisher: National Academy of Sciences
dc.relation.ispartof: PNAS Nexus
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: artificial intelligence
dc.subject: human-AI interaction
dc.subject: judgment
dc.subject: spillover
dc.title: People judge others more harshly after talking to bots
dc.type: Article
dc.identifier.doi: 10.1093/pnasnexus/pgae397
dc.identifier.scopus: eid_2-s2.0-85205366976
dc.identifier.volume: 3
dc.identifier.issue: 9
dc.identifier.eissn: 2752-6542
