Links for fulltext (may require subscription):
- Publisher Website: 10.1177/0146621618798667
- Scopus: eid_2-s2.0-85059681803
- WOS: WOS:000481462100002
Article: Item response theory modeling for examinee-selected items with rater effect
Title | Item response theory modeling for examinee-selected items with rater effect |
---|---|
Authors | Liu, C; Qiu, X; Wang, W |
Keywords | examinee-selected items; missing not at random; rater severity |
Issue Date | 2019 |
Publisher | Sage Publications, Inc. The Journal's web site is located at http://www.sagepub.com/journal.aspx?pid=184 |
Citation | Applied Psychological Measurement, 2019, v. 43 n. 6, p. 435-448 |
Abstract | Some large-scale testing requires examinees to select and answer a fixed number of items from given items (e.g., select one out of the three items). Usually, they are constructed-response items that are marked by human raters. In this examinee-selected item (ESI) design, some examinees may benefit more than others from choosing easier items to answer, and so the missing data induced by the design become missing not at random (MNAR). Although item response theory (IRT) models have recently been developed to account for MNAR data in the ESI design, they do not consider the rater effect; thus, their utility is seriously restricted. In this study, two methods are developed: the first one is a new IRT model to account for both MNAR data and rater severity simultaneously, and the second one adapts conditional maximum likelihood estimation and pairwise estimation methods to the ESI design with the rater effect. A series of simulations was then conducted to compare their performance with those of conventional IRT models that ignored MNAR data or rater severity. The results indicated a good parameter recovery for the new model. The conditional maximum likelihood estimation and pairwise estimation methods were applicable when the Rasch models fit the data, but the conventional IRT models yielded biased parameter estimates. An empirical example was given to illustrate these new initiatives. © The Author(s) 2018. |
Persistent Identifier | http://hdl.handle.net/10722/274075 |
ISSN | 0146-6216 (2023 Impact Factor: 1.0; 2023 SCImago Journal Rankings: 1.061) |
ISI Accession Number ID | WOS:000481462100002 |
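For orientation, the kind of model the abstract describes — a Rasch-type item response model extended with a rater-severity term — can be sketched in the generic many-facet Rasch form below. This is a standard illustrative parameterization, not necessarily the authors' exact model:

```latex
% Probability that examinee n succeeds on item i when scored by rater r:
% \theta_n = examinee ability, b_i = item difficulty, c_r = rater severity.
P(X_{nir} = 1 \mid \theta_n) =
  \frac{\exp(\theta_n - b_i - c_r)}{1 + \exp(\theta_n - b_i - c_r)}
```

A more severe rater (larger \(c_r\)) lowers the success probability for every examinee. Under the examinee-selected item (ESI) design, the likelihood must additionally account for which item each examinee chose to answer, since that choice makes the resulting missingness non-ignorable (MNAR).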
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, C | - |
dc.contributor.author | Qiu, X | - |
dc.contributor.author | Wang, W | - |
dc.date.accessioned | 2019-08-18T14:54:34Z | - |
dc.date.available | 2019-08-18T14:54:34Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Applied Psychological Measurement, 2019, v. 43 n. 6, p. 435-448 | - |
dc.identifier.issn | 0146-6216 | - |
dc.identifier.uri | http://hdl.handle.net/10722/274075 | - |
dc.description.abstract | Some large-scale testing requires examinees to select and answer a fixed number of items from given items (e.g., select one out of the three items). Usually, they are constructed-response items that are marked by human raters. In this examinee-selected item (ESI) design, some examinees may benefit more than others from choosing easier items to answer, and so the missing data induced by the design become missing not at random (MNAR). Although item response theory (IRT) models have recently been developed to account for MNAR data in the ESI design, they do not consider the rater effect; thus, their utility is seriously restricted. In this study, two methods are developed: the first one is a new IRT model to account for both MNAR data and rater severity simultaneously, and the second one adapts conditional maximum likelihood estimation and pairwise estimation methods to the ESI design with the rater effect. A series of simulations was then conducted to compare their performance with those of conventional IRT models that ignored MNAR data or rater severity. The results indicated a good parameter recovery for the new model. The conditional maximum likelihood estimation and pairwise estimation methods were applicable when the Rasch models fit the data, but the conventional IRT models yielded biased parameter estimates. An empirical example was given to illustrate these new initiatives. © The Author(s) 2018. | - |
dc.language | eng | - |
dc.publisher | Sage Publications, Inc. The Journal's web site is located at http://www.sagepub.com/journal.aspx?pid=184 | - |
dc.relation.ispartof | Applied Psychological Measurement | - |
dc.rights | Applied Psychological Measurement. Copyright © Sage Publications, Inc. | - |
dc.subject | examinee-selected items | - |
dc.subject | missing not at random | - |
dc.subject | rater severity | - |
dc.title | Item response theory modeling for examinee-selected items with rater effect | - |
dc.type | Article | - |
dc.identifier.email | Qiu, X: xlqiu@hku.hk | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1177/0146621618798667 | - |
dc.identifier.scopus | eid_2-s2.0-85059681803 | - |
dc.identifier.hkuros | 301986 | - |
dc.identifier.volume | 43 | - |
dc.identifier.issue | 6 | - |
dc.identifier.spage | 435 | - |
dc.identifier.epage | 448 | - |
dc.identifier.isi | WOS:000481462100002 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 0146-6216 | - |