Conference Paper: Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment

Title: Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment
Authors: Ng, TD; Hu, X; Que, Y
Issue Date: 2022
Publisher: Association for Computing Machinery
Citation: LAK22: 12th International Learning Analytics and Knowledge Conference (Online), USA, March 21-25, 2022. In LAK22 conference proceedings: the Twelfth International Conference on Learning Analytics & Knowledge: learning analytics for transition, disruption and social change, March 21-25, 2022, Online, Everywhere, p. 451-457
Abstract: In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners’ self-report data. Learners’ attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners’ eye-movement and self-reported data. Results of statistical tests and heatmap elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, regardless of whether audio narration was present, 2) text diverted learners’ attention away from other visual elements that contextualized the heritage sites, 3) having audio narration alone best simulated the experience of a real-world heritage tour, and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research.
Persistent Identifier: http://hdl.handle.net/10722/316895
ISBN: 9781450395731
ISI Accession Number ID: WOS:000883327600044


DC Field: Value
dc.contributor.author: Ng, TD
dc.contributor.author: Hu, X
dc.contributor.author: Que, Y
dc.date.accessioned: 2022-09-16T07:25:13Z
dc.date.available: 2022-09-16T07:25:13Z
dc.date.issued: 2022
dc.identifier.citation: LAK22: 12th International Learning Analytics and Knowledge Conference (Online), USA, March 21-25, 2022. In LAK22 conference proceedings: the Twelfth International Conference on Learning Analytics & Knowledge: learning analytics for transition, disruption and social change, March 21-25, 2022, Online, Everywhere, p. 451-457
dc.identifier.isbn: 9781450395731
dc.identifier.uri: http://hdl.handle.net/10722/316895
dc.description.abstract: In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners’ self-report data. Learners’ attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners’ eye-movement and self-reported data. Results of statistical tests and heatmap elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, regardless of whether audio narration was present, 2) text diverted learners’ attention away from other visual elements that contextualized the heritage sites, 3) having audio narration alone best simulated the experience of a real-world heritage tour, and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research.
dc.language: eng
dc.publisher: Association for Computing Machinery
dc.relation.ispartof: LAK22 conference proceedings: the Twelfth International Conference on Learning Analytics & Knowledge: learning analytics for transition, disruption and social change, March 21-25, 2022, Online, Everywhere
dc.rights: LAK22 conference proceedings: the Twelfth International Conference on Learning Analytics & Knowledge: learning analytics for transition, disruption and social change, March 21-25, 2022, Online, Everywhere. Copyright © Association for Computing Machinery.
dc.title: Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment
dc.type: Conference_Paper
dc.identifier.email: Hu, X: xiaoxhu@hku.hk
dc.identifier.authority: Hu, X=rp01711
dc.identifier.doi: 10.1145/3506860.3506881
dc.identifier.hkuros: 336564
dc.identifier.spage: 451
dc.identifier.epage: 457
dc.identifier.isi: WOS:000883327600044
dc.publisher.place: United States
