Links for fulltext (may require subscription):
- Publisher Website: 10.1145/3506860.3506881
- WOS: WOS:000883327600044
Conference Paper: Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment
| Field | Value |
|---|---|
| Title | Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment |
| Authors | Ng, TD; Hu, X; Que, Y |
| Issue Date | 2022 |
| Publisher | Association for Computing Machinery |
| Citation | LAK22: 12th International Learning Analytics and Knowledge Conference (Online), USA, March 21-25, 2022. In LAK22 conference proceedings: the Twelfth International Conference on Learning Analytics & Knowledge: learning analytics for transition, disruption and social change, March 21-25, 2022, Online, Everywhere, p. 451-457 |
| Abstract | In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners’ self-report data. Learners’ attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners’ eye-movement and self-reported data. Results of statistical tests and heatmap-elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, regardless of whether audio narration was present, 2) text diverted learners’ attention away from other visual elements that contextualized the heritage sites, 3) having only audio narration best simulated the experience of a real-world heritage tour, and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research. |
| Persistent Identifier | http://hdl.handle.net/10722/316895 |
| ISBN | 9781450395731 |
| ISI Accession Number ID | WOS:000883327600044 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ng, TD | - |
dc.contributor.author | Hu, X | - |
dc.contributor.author | Que, Y | - |
dc.date.accessioned | 2022-09-16T07:25:13Z | - |
dc.date.available | 2022-09-16T07:25:13Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | LAK22: 12th International Learning Analytics and Knowledge Conference (Online), USA, March 21-25, 2022. In LAK22 conference proceedings: the Twelfth International Conference on Learning Analytics & Knowledge : learning analytics for transition, disruption and social change : March 21-25, 2022, Online, Everywhere, p. 451-457 | - |
dc.identifier.isbn | 9781450395731 | - |
dc.identifier.uri | http://hdl.handle.net/10722/316895 | - |
dc.description.abstract | In times of pandemic-induced challenges, virtual reality (VR) allows audiences to learn about cultural heritage sites without temporal and spatial constraints. The design of VR content is largely determined by professionals, while evaluations of content often rely on learners’ self-report data. Learners’ attentional focus and understanding of VR content might be affected by the presence or absence of different multimedia elements, including text and audio-visuals. It remains an open question which design variations are more conducive to learning about heritage sites. Leveraging eye-tracking, a technology often adopted in recent multimodal learning analytics (MmLA) research, we conducted an experiment to collect and analyze 40 learners’ eye-movement and self-reported data. Results of statistical tests and heatmap-elicitation interviews indicate that 1) text in the VR environment helped learners better understand the presented heritage sites, regardless of whether audio narration was present, 2) text diverted learners’ attention away from other visual elements that contextualized the heritage sites, 3) having only audio narration best simulated the experience of a real-world heritage tour, and 4) narration accompanying text prompted learners to read the text faster. We make recommendations for improving the design of VR learning materials and discuss the implications for MmLA research. | -
dc.language | eng | - |
dc.publisher | Association for Computing Machinery. | - |
dc.relation.ispartof | LAK22 conference proceedings: the Twelfth International Conference on Learning Analytics & Knowledge : learning analytics for transition, disruption and social change : March 21-25, 2022, Online, Everywhere | - |
dc.rights | LAK22 conference proceedings: the Twelfth International Conference on Learning Analytics & Knowledge : learning analytics for transition, disruption and social change : March 21-25, 2022, Online, Everywhere. Copyright © Association for Computing Machinery. | - |
dc.title | Towards Multi-modal Evaluation of Eye-tracked Virtual Heritage Environment | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Hu, X: xiaoxhu@hku.hk | - |
dc.identifier.authority | Hu, X=rp01711 | - |
dc.identifier.doi | 10.1145/3506860.3506881 | - |
dc.identifier.hkuros | 336564 | - |
dc.identifier.spage | 451 | - |
dc.identifier.epage | 457 | - |
dc.identifier.isi | WOS:000883327600044 | - |
dc.publisher.place | United States | - |