Conference Paper: Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment

Title: Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment
Authors: Tzi-Dong Ng, Jeremy; Hu, Xiao; Que, Ying
Keywords: Cultural heritage; Eye-tracking; Multimedia learning; Multimodal learning analytics; Virtual reality
Issue Date: 2022
Citation: ACM International Conference Proceeding Series, 2022, p. 451-457
Abstract: In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners' self-report data. Learners' attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners' eye-movement and self-reported data. Results of statistical tests and heatmap elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, regardless of whether audio narration was present; 2) text diverted learners' attention away from other visual elements that contextualized the heritage sites; 3) having only audio narration best simulated the experience of a real-world heritage tour; and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research.
Persistent Identifier: http://hdl.handle.net/10722/352277
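The abstract mentions heatmap elicitation interviews over learners' eye-movement data. As an illustrative aside only, and not the authors' pipeline, the sketch below shows one common way gaze fixations are aggregated into a duration-weighted Gaussian heatmap; the fixation tuple format, the stimulus size, and the sigma parameter are all assumptions made for the example.

# Illustrative sketch only: aggregate gaze fixations into a 2-D heatmap of
# the kind typically shown to participants in heatmap elicitation interviews.
# Generic example, not the authors' pipeline; the data format is assumed.
import numpy as np

def fixation_heatmap(fixations, width, height, sigma=30.0):
    """fixations: iterable of (x, y, duration_ms) tuples in stimulus pixels."""
    heat = np.zeros((height, width))
    ys, xs = np.mgrid[0:height, 0:width]
    for x, y, dur in fixations:
        # Each fixation contributes a duration-weighted Gaussian blob.
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    # Normalize to [0, 1] so heatmaps from different learners are comparable.
    return heat / heat.max() if heat.max() > 0 else heat

# Example: three hypothetical fixations on a 640x480 stimulus.
demo = [(120, 200, 350), (130, 210, 220), (500, 100, 180)]
heatmap = fixation_heatmap(demo, 640, 480)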

 

DC Field | Value | Language
dc.contributor.author | Tzi-Dong Ng, Jeremy | -
dc.contributor.author | Hu, Xiao | -
dc.contributor.author | Que, Ying | -
dc.date.accessioned | 2024-12-16T03:57:45Z | -
dc.date.available | 2024-12-16T03:57:45Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | ACM International Conference Proceeding Series, 2022, p. 451-457 | -
dc.identifier.uri | http://hdl.handle.net/10722/352277 | -
dc.description.abstract | In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners' self-report data. Learners' attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners' eye-movement and self-reported data. Results of statistical tests and heatmap elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, regardless of whether audio narration was present; 2) text diverted learners' attention away from other visual elements that contextualized the heritage sites; 3) having only audio narration best simulated the experience of a real-world heritage tour; and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research. | -
dc.language | eng | -
dc.relation.ispartof | ACM International Conference Proceeding Series | -
dc.subject | Cultural heritage | -
dc.subject | Eye-tracking | -
dc.subject | Multimedia learning | -
dc.subject | Multimodal learning analytics | -
dc.subject | Virtual reality | -
dc.title | Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1145/3506860.3506881 | -
dc.identifier.scopus | eid_2-s2.0-85126215509 | -
dc.identifier.spage | 451 | -
dc.identifier.epage | 457 | -

Export: this record is available via the OAI-PMH interface in XML formats, or in other non-XML formats.
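As a minimal sketch, a record like this one can be harvested programmatically with the standard OAI-PMH GetRecord verb and the oai_dc metadata prefix. The endpoint URL and OAI identifier below follow common DSpace conventions and are assumptions, not values confirmed by this page.

# Minimal sketch: fetch this record's Dublin Core metadata over OAI-PMH.
# BASE_URL and IDENTIFIER are assumptions (typical DSpace layout); check
# the repository's documentation for the actual values.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://hub.hku.hk/oai/request"    # assumed OAI-PMH endpoint
IDENTIFIER = "oai:hub.hku.hk:10722/352277"     # assumed, derived from the handle

params = urllib.parse.urlencode({
    "verb": "GetRecord",                # standard OAI-PMH verb
    "identifier": IDENTIFIER,
    "metadataPrefix": "oai_dc",         # unqualified Dublin Core
})

with urllib.request.urlopen(f"{BASE_URL}?{params}") as resp:
    tree = ET.parse(resp)

# Print every dc:* element (dc:title, dc:creator, dc:subject, ...).
DC_NS = "{http://purl.org/dc/elements/1.1/}"
for elem in tree.iter():
    if elem.tag.startswith(DC_NS):
        print(elem.tag[len(DC_NS):], ":", (elem.text or "").strip())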