File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website: 10.1016/j.caeai.2023.100140
- Scopus: eid_2-s2.0-85159351677
Citations:
- Scopus: 0
Appears in Collections:
Article: Can large language models write reflectively
| Title | Can large language models write reflectively |
|---|---|
| Authors | Li, Yuheng; Sha, Lele; Yan, Lixiang; Lin, Jionghao; Raković, Mladen; Galbraith, Kirsten; Lyons, Kayley; Gašević, Dragan; Chen, Guanliang |
| Keywords | ChatGPT; Generative language model; Natural language processing; Reflective writing |
| Issue Date | 2023 |
| Citation | Computers and Education: Artificial Intelligence, 2023, v. 4, article no. 100140 |
| Abstract | Generative Large Language Models (LLMs) demonstrate impressive results in different writing tasks and have already attracted much attention from researchers and practitioners. However, there is limited research investigating the capability of generative LLMs for reflective writing. To this end, in the present study, we extensively reviewed the existing literature and selected 9 representative prompting strategies for ChatGPT, a chatbot based on state-of-the-art generative LLMs, to generate a diverse set of reflective responses, which were combined with student-written reflections. Next, those responses were evaluated by experienced teaching staff following a theory-aligned assessment rubric designed to evaluate student-generated reflections in several university-level pharmacy courses. Furthermore, we explored the extent to which Deep Learning classification methods can be utilised to automatically differentiate between reflective responses written by students and reflective responses generated by ChatGPT. To this end, we harnessed BERT, a state-of-the-art Deep Learning classifier, and compared its performance to that of human evaluators and the AI content detector by OpenAI. Following our extensive experimentation, we found that (i) ChatGPT may be capable of generating high-quality reflective responses in writing assignments administered across different pharmacy courses; (ii) the quality of automatically generated reflective responses was higher than that of student-written reflections on all six assessment criteria; and (iii) a domain-specific BERT-based classifier could effectively differentiate between student-written and ChatGPT-generated reflections, greatly surpassing (by up to 38% across four accuracy metrics) the classification performed by experienced teaching staff and a general-domain classifier, even in cases where the testing prompts were not known at the time of model training. |
| Persistent Identifier | http://hdl.handle.net/10722/354276 |
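
The abstract describes fine-tuning a domain-specific BERT classifier to distinguish student-written from ChatGPT-generated reflections. The sketch below is a minimal illustration of that kind of setup using the Hugging Face transformers library; the model checkpoint, example texts, label encoding, and hyperparameters are assumptions for illustration only, not the configuration reported in the paper.

```python
# Minimal sketch: fine-tuning a BERT binary classifier to label reflections as
# student-written (0) or ChatGPT-generated (1). Texts, labels, and
# hyperparameters below are hypothetical placeholders.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class ReflectionDataset(Dataset):
    """Wraps (text, label) pairs as tokenised tensors for the Trainer."""
    def __init__(self, texts, labels, tokenizer, max_len=512):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Hypothetical example data; actual training would use a full corpus of
# student-written and ChatGPT-generated reflective responses.
train_texts = ["During this placement I realised my counselling approach ...",
               "Reflecting on this experience, I learned that ..."]
train_labels = [0, 1]  # 0 = student-written, 1 = ChatGPT-generated

train_ds = ReflectionDataset(train_texts, train_labels, tokenizer)

args = TrainingArguments(output_dir="reflection-clf", num_train_epochs=3,
                         per_device_train_batch_size=8, logging_steps=10)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```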
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Li, Yuheng | - |
| dc.contributor.author | Sha, Lele | - |
| dc.contributor.author | Yan, Lixiang | - |
| dc.contributor.author | Lin, Jionghao | - |
| dc.contributor.author | Raković, Mladen | - |
| dc.contributor.author | Galbraith, Kirsten | - |
| dc.contributor.author | Lyons, Kayley | - |
| dc.contributor.author | Gašević, Dragan | - |
| dc.contributor.author | Chen, Guanliang | - |
| dc.date.accessioned | 2025-02-07T08:47:36Z | - |
| dc.date.available | 2025-02-07T08:47:36Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | Computers and Education: Artificial Intelligence, 2023, v. 4, article no. 100140 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/354276 | - |
| dc.description.abstract | Generative Large Language Models (LLMs) demonstrate impressive results in different writing tasks and have already attracted much attention from researchers and practitioners. However, there is limited research investigating the capability of generative LLMs for reflective writing. To this end, in the present study, we extensively reviewed the existing literature and selected 9 representative prompting strategies for ChatGPT, a chatbot based on state-of-the-art generative LLMs, to generate a diverse set of reflective responses, which were combined with student-written reflections. Next, those responses were evaluated by experienced teaching staff following a theory-aligned assessment rubric designed to evaluate student-generated reflections in several university-level pharmacy courses. Furthermore, we explored the extent to which Deep Learning classification methods can be utilised to automatically differentiate between reflective responses written by students and reflective responses generated by ChatGPT. To this end, we harnessed BERT, a state-of-the-art Deep Learning classifier, and compared its performance to that of human evaluators and the AI content detector by OpenAI. Following our extensive experimentation, we found that (i) ChatGPT may be capable of generating high-quality reflective responses in writing assignments administered across different pharmacy courses; (ii) the quality of automatically generated reflective responses was higher than that of student-written reflections on all six assessment criteria; and (iii) a domain-specific BERT-based classifier could effectively differentiate between student-written and ChatGPT-generated reflections, greatly surpassing (by up to 38% across four accuracy metrics) the classification performed by experienced teaching staff and a general-domain classifier, even in cases where the testing prompts were not known at the time of model training. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Computers and Education: Artificial Intelligence | - |
| dc.subject | ChatGPT | - |
| dc.subject | Generative language model | - |
| dc.subject | Natural language processing | - |
| dc.subject | Reflective writing | - |
| dc.title | Can large language models write reflectively | - |
| dc.type | Article | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1016/j.caeai.2023.100140 | - |
| dc.identifier.scopus | eid_2-s2.0-85159351677 | - |
| dc.identifier.volume | 4 | - |
| dc.identifier.spage | article no. 100140 | - |
| dc.identifier.epage | article no. 100140 | - |
| dc.identifier.eissn | 2666-920X | - |
