Article: Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis

Title: Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis
Authors: WANG, Deliang; CHEN, Gaowei
Issue Date: 19-Jul-2024
Publisher: IEEE Education Society
Citation: IEEE Transactions on Education, 2024
Abstract

Contributions: To address the interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack classroom discourse analysis from deep learning-based models and evaluate the effects of model explanations on STEM teachers.

Background: Deep learning techniques have been used to automatically analyze classroom dialogue and provide feedback to teachers. However, these complex models operate as black boxes, offering no clear explanation of their analysis, which may lead teachers, particularly those without AI knowledge, to distrust the models and hinder their use in teaching practice. It is therefore crucial to address the interpretability issue in AI-powered classroom discourse models.

Research Questions: How can deep learning-based classroom discourse models be explained using explainable AI methods? What effect do these explanations have on teachers’ trust in and technology acceptance of the models? How do teachers perceive the explanations of deep learning-based classroom discourse models?

Method: Two explainable AI methods were employed to interpret deep learning-based models that analyzed teacher and student talk moves. A pilot study was conducted with seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in teachers’ trust and technology acceptance before and after they received model explanations, and investigated teachers’ perceptions of the explanations.

Findings: The AI-powered classroom discourse models were effectively explained using explainable AI methods. The model explanations enhanced teachers’ trust in and technology acceptance of the classroom discourse models. The seven STEM teachers expressed satisfaction with the explanations and shared their perceptions of them.
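The abstract does not name the two explainable AI methods employed. As an illustrative sketch only, the Python snippet below shows how one widely used post-hoc explainer, LIME (the `lime` package's `LimeTextExplainer`), can attribute a talk-move prediction to individual words in a classroom utterance. The talk-move labels and the toy classifier are hypothetical stand-ins, not the authors' model or method.

```python
# Minimal sketch, NOT the authors' code: the paper does not specify its two
# explainable AI methods, so LIME is used here purely as an illustration.
import numpy as np
from lime.lime_text import LimeTextExplainer

# Hypothetical talk-move label set (the paper's label scheme is not given here).
TALK_MOVES = ["pressing_for_reasoning", "other"]

def classify(texts):
    """Stand-in for a trained deep talk-move model.

    LIME only requires a function mapping a list of strings to an
    (n_samples, n_classes) array of class probabilities; a real system
    would call its neural classifier here.
    """
    probs = []
    for t in texts:
        # Toy heuristic: "why"/"how" cues suggest pressing for reasoning.
        p = 0.9 if ("why" in t.lower() or "how" in t.lower()) else 0.2
        probs.append([p, 1.0 - p])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=TALK_MOVES)
utterance = "Why do you think the ball slows down on the carpet?"
exp = explainer.explain_instance(utterance, classify, num_features=5, labels=(0,))

# Word-level attributions: which tokens pushed the prediction toward
# "pressing_for_reasoning" (positive weight) or away from it (negative).
for word, weight in exp.as_list(label=0):
    print(f"{word:>10}  {weight:+.3f}")
```

Word-level attributions of this kind are one plausible form the study's "model explanations" could take; swapping the toy `classify` for the trained model is all such an explainer requires.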


Persistent Identifier: http://hdl.handle.net/10722/345959
ISSN: 0018-9359
2023 Impact Factor: 2.1
2023 SCImago Journal Rankings: 0.793

 

DC Field: Value

dc.contributor.author: WANG, Deliang
dc.contributor.author: CHEN, Gaowei
dc.date.accessioned: 2024-09-04T07:06:46Z
dc.date.available: 2024-09-04T07:06:46Z
dc.date.issued: 2024-07-19
dc.identifier.citation: IEEE Transactions on Education, 2024
dc.identifier.issn: 0018-9359
dc.identifier.uri: http://hdl.handle.net/10722/345959
dc.description.abstract: Contributions: To address the interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack classroom discourse analysis from deep learning-based models and evaluate the effects of model explanations on STEM teachers. Background: Deep learning techniques have been used to automatically analyze classroom dialogue and provide feedback to teachers. However, these complex models operate as black boxes, offering no clear explanation of their analysis, which may lead teachers, particularly those without AI knowledge, to distrust the models and hinder their use in teaching practice. It is therefore crucial to address the interpretability issue in AI-powered classroom discourse models. Research Questions: How can deep learning-based classroom discourse models be explained using explainable AI methods? What effect do these explanations have on teachers’ trust in and technology acceptance of the models? How do teachers perceive the explanations of deep learning-based classroom discourse models? Method: Two explainable AI methods were employed to interpret deep learning-based models that analyzed teacher and student talk moves. A pilot study was conducted with seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in teachers’ trust and technology acceptance before and after they received model explanations, and investigated teachers’ perceptions of the explanations. Findings: The AI-powered classroom discourse models were effectively explained using explainable AI methods. The model explanations enhanced teachers’ trust in and technology acceptance of the classroom discourse models. The seven STEM teachers expressed satisfaction with the explanations and shared their perceptions of them.
dc.language: eng
dc.publisher: IEEE Education Society
dc.relation.ispartof: IEEE Transactions on Education
dc.title: Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis
dc.type: Article
dc.identifier.doi: 10.1109/TE.2024.3421606
dc.identifier.eissn: 1557-9638
dc.identifier.issnl: 0018-9359
