Conference Paper: MuFIN: A Framework for Automating Multimodal Feedback Generation using Generative Artificial Intelligence

Title: MuFIN: A Framework for Automating Multimodal Feedback Generation using Generative Artificial Intelligence
Authors: Lin, Jionghao; Chen, Eason; Gurung, Ashish; Koedinger, Kenneth R.
Keywords: generative artificial intelligence; large language models; multimodal feedback
Issue Date: 2024
Citation: L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale, 2024, p. 550-552
Abstract: Written feedback has long been a cornerstone in educational and professional settings, essential for enhancing learning outcomes. However, multimodal feedback, which integrates textual, auditory, and visual cues, promises a more engaging and effective learning experience. By leveraging multiple sensory channels, multimodal feedback better accommodates diverse learning preferences and aids in deeper information retention. Despite its potential, creating multimodal feedback poses challenges, including the need for increased time and resources. Recent advancements in generative artificial intelligence (GenAI) offer solutions to automate the feedback process, predominantly focusing on textual feedback. Yet, the application of GenAI in generating multimodal feedback remains largely unexplored. Our study investigates the use of GenAI techniques to generate multimodal feedback, aiming to provide this feedback for large cohorts of learners, thereby enhancing learning experience and engagement. By exploring the potential of GenAI for this purpose, we propose a framework for automating the generation of multimodal feedback, which we name MuFIN.
Persistent Identifier: http://hdl.handle.net/10722/354346

 

DC Field: Value
dc.contributor.author: Lin, Jionghao
dc.contributor.author: Chen, Eason
dc.contributor.author: Gurung, Ashish
dc.contributor.author: Koedinger, Kenneth R.
dc.date.accessioned: 2025-02-07T08:48:02Z
dc.date.available: 2025-02-07T08:48:02Z
dc.date.issued: 2024
dc.identifier.citation: L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale, 2024, p. 550-552
dc.identifier.uri: http://hdl.handle.net/10722/354346
dc.description.abstract: Written feedback has long been a cornerstone in educational and professional settings, essential for enhancing learning outcomes. However, multimodal feedback, which integrates textual, auditory, and visual cues, promises a more engaging and effective learning experience. By leveraging multiple sensory channels, multimodal feedback better accommodates diverse learning preferences and aids in deeper information retention. Despite its potential, creating multimodal feedback poses challenges, including the need for increased time and resources. Recent advancements in generative artificial intelligence (GenAI) offer solutions to automate the feedback process, predominantly focusing on textual feedback. Yet, the application of GenAI in generating multimodal feedback remains largely unexplored. Our study investigates the use of GenAI techniques to generate multimodal feedback, aiming to provide this feedback for large cohorts of learners, thereby enhancing learning experience and engagement. By exploring the potential of GenAI for this purpose, we propose a framework for automating the generation of multimodal feedback, which we name MuFIN.
dc.language: eng
dc.relation.ispartof: L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale
dc.subject: generative artificial intelligence
dc.subject: large language models
dc.subject: multimodal feedback
dc.title: MuFIN: A Framework for Automating Multimodal Feedback Generation using Generative Artificial Intelligence
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3657604.3664720
dc.identifier.scopus: eid_2-s2.0-85199913338
dc.identifier.spage: 550
dc.identifier.epage: 552
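Note: the abstract describes automating multimodal feedback for large cohorts of learners, but this record carries no implementation details. Purely as an illustration of the general idea, and explicitly not the authors' MuFIN code, the sketch below chains an LLM text-feedback call with a text-to-speech step using the OpenAI Python client; the model names, the prompt, and the output path are all assumptions.

# Hypothetical LLM-to-TTS feedback pipeline (illustrative only, not MuFIN itself).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_multimodal_feedback(submission: str, out_path: str = "feedback.mp3") -> str:
    # Textual channel: ask an LLM for concise, constructive feedback.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are a tutor. Give concise, constructive feedback."},
            {"role": "user", "content": submission},
        ],
    )
    text_feedback = chat.choices[0].message.content

    # Auditory channel: synthesize the same feedback as speech.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=text_feedback)
    speech.stream_to_file(out_path)  # writes an MP3 next to the textual feedback
    return text_feedback

if __name__ == "__main__":
    print(generate_multimodal_feedback("def add(a, b): return a - b"))

A visual channel (for example, annotated screenshots or generated slides) could be added in the same pattern; consult the paper itself for how MuFIN actually composes the modalities.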

Export via OAI-PMH Interface in XML Formats
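OAI-PMH is a standard harvesting protocol, so the Dublin Core record above can also be fetched programmatically with a GetRecord request. A minimal Python sketch follows; the base URL and the OAI identifier are assumptions based on the usual DSpace layout for this handle (10722/354346), so verify the repository's actual endpoint before relying on them.

# Fetch this item's Dublin Core record via OAI-PMH GetRecord.
# Base URL and identifier format are assumptions (typical DSpace layout).
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE = "https://hub.hku.hk/oai/request"  # assumed OAI endpoint
params = {
    "verb": "GetRecord",
    "metadataPrefix": "oai_dc",                   # standard Dublin Core prefix
    "identifier": "oai:hub.hku.hk:10722/354346",  # assumed OAI id for this handle
}

with urlopen(f"{BASE}?{urlencode(params)}") as resp:
    tree = ET.parse(resp)

# Print every Dublin Core element as "tag: text", stripping XML namespaces.
for el in tree.getroot().iter():
    tag = el.tag.split("}")[-1]
    if el.text and el.text.strip():
        print(f"{tag}: {el.text.strip()}")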

