Conference Paper: Comparative Analysis of GPT-4 and Human Graders in Evaluating Praise Given to Students in Synthetic Dialogues

Title: Comparative Analysis of GPT-4 and Human Graders in Evaluating Praise Given to Students in Synthetic Dialogues
Authors: Hirunyasiri, Dollaya; Thomas, Danielle R.; Lin, Jionghao; Koedinger, Kenneth R.; Aleven, Vincent
Keywords: ChatGPT
GPT-4
Math tutors
Real-time Feedback
Tutor Evaluation
Tutor Feedback
Tutor Training
Issue Date: 2023
Citation: CEUR Workshop Proceedings, 2023, v. 3491, p. 37-48
Abstract: Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise. However, GPT-4 underperforms in identifying the tutor’s ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues.
Persistent Identifier: http://hdl.handle.net/10722/354300
ISSN: 1613-0073
2023 SCImago Journal Rankings: 0.191
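
The abstract above describes prompting GPT-4 with zero-shot and few-shot chain-of-thought prompts to check tutor praise against five criteria, then comparing its judgements with human graders. The following is a minimal sketch of what such a grading call could look like with the OpenAI Python client; the criterion names, prompt wording, and sample dialogue are illustrative placeholders and are not the paper's actual rubric, prompts, or data.

```python
# Sketch: asking GPT-4 whether a tutor's praise in a dialogue meets a set of
# praise criteria, with zero-shot vs. few-shot chain-of-thought prompting.
# Criteria, prompt text, and the example dialogue are placeholders, not the
# rubric or data used in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITERIA = ["specific", "immediate", "sincere"]  # placeholder subset of the five criteria

ZERO_SHOT_PROMPT = (
    "You are evaluating the praise a tutor gives a student in the dialogue below.\n"
    "For each criterion ({criteria}), think step by step, then answer 'yes' or 'no' "
    "for whether the praise satisfies it.\n\nDialogue:\n{dialogue}"
)

FEW_SHOT_EXAMPLE = (
    "Example dialogue:\nTutor: Great job showing every step of your work!\n"
    "Specific: yes (names the behavior). Immediate: yes. Sincere: yes.\n\n"
)

def grade_praise(dialogue: str, few_shot: bool = False) -> str:
    """Return GPT-4's criterion-by-criterion judgement for one dialogue."""
    prompt = ZERO_SHOT_PROMPT.format(criteria=", ".join(CRITERIA), dialogue=dialogue)
    if few_shot:
        # Few-shot chain of thought: prepend one or more worked examples.
        prompt = FEW_SHOT_EXAMPLE + prompt
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep grading output as stable as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "Tutor: Nice work! You factored the quadratic correctly on your first try."
    print(grade_praise(sample, few_shot=True))
```

Comparing the zero-shot and few-shot outputs against human labels over many dialogues would mirror the kind of agreement analysis the abstract describes.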

 

DC Field: Value
dc.contributor.author: Hirunyasiri, Dollaya
dc.contributor.author: Thomas, Danielle R.
dc.contributor.author: Lin, Jionghao
dc.contributor.author: Koedinger, Kenneth R.
dc.contributor.author: Aleven, Vincent
dc.date.accessioned: 2025-02-07T08:47:45Z
dc.date.available: 2025-02-07T08:47:45Z
dc.date.issued: 2023
dc.identifier.citation: CEUR Workshop Proceedings, 2023, v. 3491, p. 37-48
dc.identifier.issn: 1613-0073
dc.identifier.uri: http://hdl.handle.net/10722/354300
dc.description.abstract: Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise. However, GPT-4 underperforms in identifying the tutor’s ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues.
dc.language: eng
dc.relation.ispartof: CEUR Workshop Proceedings
dc.subject: ChatGPT
dc.subject: GPT-4
dc.subject: Math tutors
dc.subject: Real-time Feedback
dc.subject: Tutor Evaluation
dc.subject: Tutor Feedback
dc.subject: Tutor Training
dc.title: Comparative Analysis of GPT-4 and Human Graders in Evaluating Praise Given to Students in Synthetic Dialogues
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85174920490
dc.identifier.volume: 3491
dc.identifier.spage: 37
dc.identifier.epage: 48
