Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1145/3506860.3506881
- Scopus: eid_2-s2.0-85126215509
Citations:
- Scopus: 0
Field | Value
---|---
Title | Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment
Authors | Tzi-Dong Ng, Jeremy; Hu, Xiao; Que, Ying
Keywords | Cultural heritage; Eye-tracking; Multimedia learning; Multimodal learning analytics; Virtual reality
Issue Date | 2022
Citation | ACM International Conference Proceeding Series, 2022, p. 451-457
Abstract | In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners' self-report data. Learners' attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners' eye movement and self-reported data. Results of statistical tests and heatmap elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, regardless of whether audio narration was present, 2) text diverted learners' attention away from other visual elements that contextualized the heritage sites, 3) having only audio narration best simulated the experience of a real-world heritage tour, and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research.
Persistent Identifier | http://hdl.handle.net/10722/352277
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tzi-Dong Ng, Jeremy | - |
dc.contributor.author | Hu, Xiao | - |
dc.contributor.author | Que, Ying | - |
dc.date.accessioned | 2024-12-16T03:57:45Z | - |
dc.date.available | 2024-12-16T03:57:45Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | ACM International Conference Proceeding Series, 2022, p. 451-457 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352277 | - |
dc.description.abstract | In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners' self-report data. Learners' attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners' eye movement and self-reported data. Results of statistical tests and heatmap elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, regardless of whether audio narration was present, 2) text diverted learners' attention away from other visual elements that contextualized the heritage sites, 3) having only audio narration best simulated the experience of a real-world heritage tour, and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research. | - |
dc.language | eng | - |
dc.relation.ispartof | ACM International Conference Proceeding Series | - |
dc.subject | Cultural heritage | - |
dc.subject | Eye-tracking | - |
dc.subject | Multimedia learning | - |
dc.subject | Multimodal learning analytics | - |
dc.subject | Virtual reality | - |
dc.title | Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3506860.3506881 | - |
dc.identifier.scopus | eid_2-s2.0-85126215509 | - |
dc.identifier.spage | 451 | - |
dc.identifier.epage | 457 | - |