File Download
There are no files associated with this item.
Links for fulltext (may require subscription)
- Publisher Website: https://doi.org/10.1145/3657604.3664720
- Scopus: eid_2-s2.0-85199913338
Citations:
- Scopus: 0

Appears in Collections:
- Conference Paper: MuFIN: A Framework for Automating Multimodal Feedback Generation using Generative Artificial Intelligence
Title | MuFIN: A Framework for Automating Multimodal Feedback Generation using Generative Artificial Intelligence |
---|---|
Authors | Lin, Jionghao; Chen, Eason; Gurung, Ashish; Koedinger, Kenneth R. |
Keywords | generative artificial intelligence; large language models; multimodal feedback |
Issue Date | 2024 |
Citation | L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale, 2024, p. 550-552 |
Abstract | Written feedback has long been a cornerstone in educational and professional settings, essential for enhancing learning outcomes. However, multimodal feedback, which integrates textual, auditory, and visual cues, promises a more engaging and effective learning experience. By leveraging multiple sensory channels, multimodal feedback better accommodates diverse learning preferences and aids in deeper information retention. Despite its potential, creating multimodal feedback poses challenges, including the need for increased time and resources. Recent advancements in generative artificial intelligence (GenAI) offer solutions to automate the feedback process, predominantly focusing on textual feedback. Yet, the application of GenAI in generating multimodal feedback remains largely unexplored. Our study investigates the use of GenAI techniques to generate multimodal feedback, aiming to provide this feedback for large cohorts of learners, thereby enhancing the learning experience and engagement. By exploring the potential of GenAI for this purpose, we propose a framework for automating the generation of multimodal feedback, which we name MuFIN. |
Persistent Identifier | http://hdl.handle.net/10722/354346 |
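The persistent identifier above is a Handle, and the Handle System exposes a public REST endpoint that resolves a handle to its registered values as JSON. A minimal sketch, assuming Python with the `requests` package installed:

```python
import requests

# Handle taken from the Persistent Identifier field above.
HANDLE = "10722/354346"

# The Handle System's public REST API (hdl.handle.net/api/handles/)
# returns the handle's registered values, including the target URL.
resp = requests.get(f"https://hdl.handle.net/api/handles/{HANDLE}", timeout=30)
resp.raise_for_status()

# Print the URL-typed value, i.e. where the handle currently resolves.
for value in resp.json().get("values", []):
    if value.get("type") == "URL":
        print("Resolves to:", value["data"]["value"])
```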
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lin, Jionghao | - |
dc.contributor.author | Chen, Eason | - |
dc.contributor.author | Gurung, Ashish | - |
dc.contributor.author | Koedinger, Kenneth R. | - |
dc.date.accessioned | 2025-02-07T08:48:02Z | - |
dc.date.available | 2025-02-07T08:48:02Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale, 2024, p. 550-552 | - |
dc.identifier.uri | http://hdl.handle.net/10722/354346 | - |
dc.description.abstract | Written feedback has long been a cornerstone in educational and professional settings, essential for enhancing learning outcomes. However, multimodal feedback, which integrates textual, auditory, and visual cues, promises a more engaging and effective learning experience. By leveraging multiple sensory channels, multimodal feedback better accommodates diverse learning preferences and aids in deeper information retention. Despite its potential, creating multimodal feedback poses challenges, including the need for increased time and resources. Recent advancements in generative artificial intelligence (GenAI) offer solutions to automate the feedback process, predominantly focusing on textual feedback. Yet, the application of GenAI in generating multimodal feedback remains largely unexplored. Our study investigates the use of GenAI techniques to generate multimodal feedback, aiming to provide this feedback for large cohorts of learners, thereby enhancing the learning experience and engagement. By exploring the potential of GenAI for this purpose, we propose a framework for automating the generation of multimodal feedback, which we name MuFIN. | -
dc.language | eng | - |
dc.relation.ispartof | L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale | - |
dc.subject | generative artificial intelligence | - |
dc.subject | large language models | - |
dc.subject | multimodal feedback | - |
dc.title | MuFIN: A Framework for Automating Multimodal Feedback Generation using Generative Artificial Intelligence | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3657604.3664720 | - |
dc.identifier.scopus | eid_2-s2.0-85199913338 | - |
dc.identifier.spage | 550 | - |
dc.identifier.epage | 552 | - |
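For citation management, the DOI in the dc.identifier.doi field can be turned into a BibTeX entry via DOI content negotiation: doi.org honors an `Accept: application/x-bibtex` header for Crossref-registered DOIs such as ACM's. A minimal sketch, again assuming Python with `requests`:

```python
import requests

# DOI taken from the dc.identifier.doi field above.
DOI = "10.1145/3657604.3664720"

# Requesting the application/x-bibtex media type from doi.org
# returns a ready-to-use BibTeX entry for the DOI.
resp = requests.get(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/x-bibtex"},
    timeout=30,
)
resp.raise_for_status()
print(resp.text)
```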