Article: Investigating the Differential Effects of Audio-Visual Information and Emotional Valence on Empathic Accuracy

Title: Investigating the Differential Effects of Audio-Visual Information and Emotional Valence on Empathic Accuracy
Authors: Wang, Miao; Zhang, Liying; Fu, Xinwei; Wang, Yi; Jiang, Yue; Cao, Yuan; Wang, Yanyu; Chan, Raymond C.K.
Keywords: affective empathy; audio-visual information; cognitive empathy; empathic accuracy
Issue Date: 2025
Citation: Journal of Psychological Science, 2025, v. 48, n. 3, p. 567-576
Abstract: Background and Aims: Empathy involves the communication and understanding of social information between individuals in specific contexts. Empirical evidence suggests that auditory information can affect one's empathic ability more than visual information, but the differential effects of sensory modality on empathic accuracy remain unclear. This study aimed to examine the effects of auditory and different visual modalities on empathic accuracy using the Chinese version of the Empathic Accuracy Task (EAT). We hypothesized that (1) cognitive empathy performance in the avatar audio-video condition would be significantly lower than in the auditory-only and human audio-video conditions, and (2) there would be a significant interaction between emotional valence and Modality-Condition on cognitive empathy; specifically, cognitive empathy would be significantly higher in the human audio-video condition than in the audio-only condition for positive-valenced videos, with no significant difference among the three conditions for negative-valenced videos. Method: We recruited 85 college students to complete the Chinese version of the EAT in three conditions: (1) an auditory-only condition, (2) an avatar audio-video condition (carrying less visual information than the human audio-video condition), and (3) a human audio-video condition. The EAT comprised 12 video clips (6 positive and 6 negative), each showing a character describing an emotional autobiographical event. Participants rated the character's emotional states continuously and responded to questions concerning perspective taking, emotional contagion, empathic concern, and willingness/effort to help.
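In EAT-style paradigms, empathic accuracy is commonly operationalized as the correspondence between the perceiver's continuous ratings and the target's own ratings of the same clip. The abstract does not state this paper's exact scoring rule, so the following is only an illustrative sketch under that common convention, with hypothetical data:

```python
# Illustrative sketch only: empathic accuracy scored as the Pearson
# correlation between a perceiver's time-sampled affect ratings and the
# target's own ratings of the same clip. The paper's actual scoring
# procedure is not specified; all names and data here are hypothetical.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical continuous affect ratings (e.g. on a 1-9 scale) for one clip.
target_ratings = [3, 4, 5, 6, 7, 6, 5, 4]
perceiver_ratings = [2, 4, 4, 6, 8, 6, 4, 3]

accuracy = pearson_r(target_ratings, perceiver_ratings)
```

A perceiver who tracks the target's emotional trajectory closely yields a correlation near 1; per-clip correlations would then typically be averaged (often after Fisher z-transformation) into a participant-level accuracy score.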
Results: The 3 (Modality-Condition: auditory-only, avatar audio-video, and human audio-video) × 2 (Valence: positive and negative) ANOVA found a significant Modality-Condition main effect on emotional contagion scores (F(2, 168) = 3.08, p < .05), with the human audio-video condition (M = 7.01, SD = 1.26) eliciting higher emotional contagion than the avatar audio-video condition (M = 6.14, SD = 1.28). The Modality-Condition main effects on empathic accuracy and perspective taking were non-significant. The Valence main effects on empathic accuracy (F(1, 84) = 10.16, p < .01), emotional contagion (F(1, 84) = 6.45, p < .05), and perspective taking (F(1, 84) = 14.01, p < .001) were significant: empathic responses were enhanced for videos depicting positive moods relative to those depicting negative moods. The Modality-Condition-by-Valence interactions on perspective taking (F(2, 168) = 7.57, p < .01) and emotional contagion (F(2, 168) = 6.48, p < .01) were also significant. Simple-effect analyses showed that, for positive-valenced videos, both perspective taking and emotional contagion scores were significantly lower in the avatar audio-video condition (M = 7.15, SD = 1.36; M = 6.69, SD = 1.53) than in the audio-only (M = 7.59, SD = 1.03; M = 7.14, SD = 1.30) and human audio-video (M = 7.57, SD = 1.26; M = 7.17, SD = 1.51) conditions. In contrast, for negative-valenced videos, emotional contagion was higher in the human audio-video condition (M = 6.84, SD = 1.44) than in the audio-only condition (M = 6.52, SD = 1.35). The Modality-Condition-by-Valence interaction was not significant for empathic accuracy. Conclusions: This study investigated the impact of audio-visual information on empathy by comparing audio-only and human audio-video conditions while differentiating positive and negative emotional valence.
The findings indicate that human facial expressions, when matched with auditory information, significantly enhance emotional empathy in negative emotional contexts. By introducing human and avatar audio-video conditions, the study also manipulated the amount of visual information available, and the impact of visual information on empathy varied with emotional valence: the avatar audio-video condition undermined empathy in positive-valenced scenarios. Together, these results elucidate how the emotional valence of visual information shapes empathy performance, implicating human visual cues in empathy processing.
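The F-ratios reported above come from a 3 × 2 repeated-measures ANOVA. As a minimal sketch of the underlying arithmetic, the following computes a one-way F across the three Modality conditions; this is a between-subjects simplification (the paper's design is within-subjects), and all scores are invented for illustration:

```python
# Minimal sketch of the F-ratio arithmetic behind a one-way comparison of
# the three Modality conditions. This is a between-subjects simplification
# of the paper's 3 x 2 repeated-measures ANOVA; all data are hypothetical.
def one_way_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group size times squared deviation of
    # each group mean from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical emotional-contagion scores per condition.
audio_only = [6.4, 6.8, 6.2, 7.0, 6.6]
avatar_av = [6.0, 6.3, 5.8, 6.5, 6.1]
human_av = [7.2, 6.9, 7.4, 6.8, 7.1]

f_stat = one_way_f([audio_only, avatar_av, human_av])
```

In practice a repeated-measures analysis would additionally partition out between-subject variance (hence the error degrees of freedom of 168 reported for 85 participants), which is why dedicated statistics packages are used rather than hand computation.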
Persistent Identifier: http://hdl.handle.net/10722/367870
ISSN: 1671-6981

 

DC Field / Value
dc.contributor.author: Wang, Miao
dc.contributor.author: Zhang, Liying
dc.contributor.author: Fu, Xinwei
dc.contributor.author: Wang, Yi
dc.contributor.author: Jiang, Yue
dc.contributor.author: Cao, Yuan
dc.contributor.author: Wang, Yanyu
dc.contributor.author: Chan, Raymond C.K.
dc.date.accessioned: 2025-12-19T08:00:05Z
dc.date.available: 2025-12-19T08:00:05Z
dc.date.issued: 2025
dc.identifier.citation: Journal of Psychological Science, 2025, v. 48, n. 3, p. 567-576
dc.identifier.issn: 1671-6981
dc.identifier.uri: http://hdl.handle.net/10722/367870
dc.language: eng
dc.relation.ispartof: Journal of Psychological Science
dc.subject: affective empathy
dc.subject: audio-visual information
dc.subject: cognitive empathy
dc.subject: empathic accuracy
dc.title: Investigating the Differential Effects of Audio-Visual Information and Emotional Valence on Empathic Accuracy
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.16719/j.cnki.1671-6981.20250306
dc.identifier.scopus: eid_2-s2.0-105018772178
dc.identifier.volume: 48
dc.identifier.issue: 3
dc.identifier.spage: 567
dc.identifier.epage: 576
