Article: How Can I Get It Right? Using GPT to Rephrase Incorrect Trainee Responses

Title: How Can I Get It Right? Using GPT to Rephrase Incorrect Trainee Responses
Authors: Lin, Jionghao; Han, Zifei; Thomas, Danielle R.; Gurung, Ashish; Gupta, Shivang; Aleven, Vincent; Koedinger, Kenneth R.
Keywords: ChatGPT
Feedback
GPT-4
Large language models
Tutoring training
Issue Date: 2024
Citation: International Journal of Artificial Intelligence in Education, 2024
Abstract: One-on-one tutoring is widely acknowledged as an effective instructional method, provided the tutors are qualified. However, the high demand for qualified tutors remains a challenge, often necessitating the training of novice tutors (i.e., trainees) to ensure effective tutoring. Research suggests that timely explanatory feedback can facilitate the training process for trainees, but assessing trainee performance is time-consuming for human experts. Inspired by recent advances in large language models (LLMs), our study employed the GPT-4 model to build an explanatory feedback system. This system classifies trainees’ responses in binary form (i.e., correct/incorrect) and automatically provides template-based feedback, with incorrect responses appropriately rephrased by the GPT-4 model. We conducted our study using the responses of 383 trainees from three training lessons (Giving Effective Praise, Reacting to Errors, and Determining What Students Know). Our findings indicate that: 1) using a few-shot approach, the GPT-4 model effectively identifies correct and incorrect trainee responses across the three training lessons, with an average F1 score of 0.84 and an AUC score of 0.85; and 2) with the same few-shot approach, the GPT-4 model adeptly rephrases incorrect trainee responses into desired responses, achieving performance comparable to that of human experts.
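The few-shot classification pipeline and the reported F1 metric described in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the authors’ implementation: the prompt wording, the `build_few_shot_prompt` helper, and the example responses are hypothetical, and the actual GPT-4 call is omitted.

```python
def build_few_shot_prompt(examples, response):
    """Assemble a few-shot prompt: labeled examples followed by the new response.

    `examples` is a list of (response_text, label) pairs; the resulting string
    would be sent to an LLM, which completes the final 'Label:' line.
    """
    lines = ["Classify each trainee response as 'correct' or 'incorrect'.", ""]
    for text, label in examples:
        lines.append(f"Response: {text}\nLabel: {label}\n")
    lines.append(f"Response: {response}\nLabel:")
    return "\n".join(lines)


def f1_score(y_true, y_pred, positive="correct"):
    """Binary F1 = 2PR / (P + R), computed over the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Hypothetical labeled examples in the style of the "Giving Effective Praise" lesson.
examples = [
    ("Great job sticking with that problem!", "correct"),
    ("You got it wrong again.", "incorrect"),
]
prompt = build_few_shot_prompt(examples, "Nice effort working through each step!")
# `prompt` would be sent to GPT-4; the model's completions, compared against
# expert labels, yield the F1/AUC figures the abstract reports.
```

The stub keeps the model out of the loop so the prompt-assembly and scoring logic can be checked independently of any API.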
Persistent Identifier: http://hdl.handle.net/10722/354340
ISSN: 1560-4292
2023 Impact Factor: 4.7
2023 SCImago Journal Rankings: 1.842
ISI Accession Number: WOS:001264576500002


DC Field: Value
dc.contributor.author: Lin, Jionghao
dc.contributor.author: Han, Zifei
dc.contributor.author: Thomas, Danielle R.
dc.contributor.author: Gurung, Ashish
dc.contributor.author: Gupta, Shivang
dc.contributor.author: Aleven, Vincent
dc.contributor.author: Koedinger, Kenneth R.
dc.date.accessioned: 2025-02-07T08:48:00Z
dc.date.available: 2025-02-07T08:48:00Z
dc.date.issued: 2024
dc.identifier.citation: International Journal of Artificial Intelligence in Education, 2024
dc.identifier.issn: 1560-4292
dc.identifier.uri: http://hdl.handle.net/10722/354340
dc.description.abstract: One-on-one tutoring is widely acknowledged as an effective instructional method, provided the tutors are qualified. However, the high demand for qualified tutors remains a challenge, often necessitating the training of novice tutors (i.e., trainees) to ensure effective tutoring. Research suggests that timely explanatory feedback can facilitate the training process for trainees, but assessing trainee performance is time-consuming for human experts. Inspired by recent advances in large language models (LLMs), our study employed the GPT-4 model to build an explanatory feedback system. This system classifies trainees’ responses in binary form (i.e., correct/incorrect) and automatically provides template-based feedback, with incorrect responses appropriately rephrased by the GPT-4 model. We conducted our study using the responses of 383 trainees from three training lessons (Giving Effective Praise, Reacting to Errors, and Determining What Students Know). Our findings indicate that: 1) using a few-shot approach, the GPT-4 model effectively identifies correct and incorrect trainee responses across the three training lessons, with an average F1 score of 0.84 and an AUC score of 0.85; and 2) with the same few-shot approach, the GPT-4 model adeptly rephrases incorrect trainee responses into desired responses, achieving performance comparable to that of human experts.
dc.language: eng
dc.relation.ispartof: International Journal of Artificial Intelligence in Education
dc.subject: ChatGPT
dc.subject: Feedback
dc.subject: GPT-4
dc.subject: Large language models
dc.subject: Tutoring training
dc.title: How Can I Get It Right? Using GPT to Rephrase Incorrect Trainee Responses
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/s40593-024-00408-y
dc.identifier.scopus: eid_2-s2.0-85197722280
dc.identifier.eissn: 1560-4306
dc.identifier.isi: WOS:001264576500002
