
Conference Paper: QASCA: a quality-aware task assignment system for crowdsourcing applications

Title: QASCA: a quality-aware task assignment system for crowdsourcing applications
Authors: Zheng, Y; Wang, J; Li, G; Cheng, RCK; Feng, J
Keywords: Crowdsourcing; Quality control; Online task assignment
Issue Date: 2015
Publisher: ACM.
Citation: The 2015 ACM SIGMOD International Conference on Management of Data (SIGMOD '15), Melbourne, Australia, 22-27 June 2015. In Conference Proceedings, 2015, p. 1031-1046
Abstract: A crowdsourcing system, such as the Amazon Mechanical Turk (AMT), provides a platform for a large number of questions to be answered by Internet workers. Such systems have been shown to be useful for solving problems that are difficult for computers, including entity resolution, sentiment analysis, and image recognition. In this paper, we investigate the online task assignment problem: given a pool of n questions, which k questions should be assigned to a worker? A poor assignment may not only waste time and money, but may also hurt the quality of a crowdsourcing application that depends on the workers' answers. We propose to consider quality measures (also known as evaluation metrics) that are relevant to an application during the task assignment process. In particular, we explore how Accuracy and F-score, two widely used evaluation metrics for crowdsourcing applications, can facilitate task assignment. Since these two metrics assume that the ground truth of a question is known, we study their variants that make use of the probability distributions derived from workers' answers. We further investigate online assignment strategies, which enable optimal task assignments. Since these algorithms are expensive, we propose solutions that attain high quality in linear time. We develop a system called the Quality-Aware Task Assignment System for Crowdsourcing Applications (QASCA) on top of AMT. We evaluate our approaches on five real crowdsourcing applications. We find that QASCA is efficient and attains better result quality (an improvement of more than 8%) compared with existing methods. © 2015 ACM, Inc.
Persistent Identifier: http://hdl.handle.net/10722/213710
ISBN: 978-1-4503-2758-9
ISI Accession Number ID: WOS:000452535700080
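The abstract above frames online task assignment as picking, from a pool of n candidate questions, the k whose assignment to the current worker is expected to improve a quality metric such as Accuracy the most, using label distributions derived from workers' answers. The following is a minimal sketch of that idea, not QASCA's actual algorithm or API: it assumes a single per-worker accuracy estimate, a symmetric error model, and expected Accuracy as the metric, and all function and variable names are illustrative.

from typing import Dict, List, Tuple

def expected_accuracy(dist: Dict[str, float]) -> float:
    # Expected Accuracy of a question if we commit to its most likely label now.
    return max(dist.values())

def posterior(dist: Dict[str, float], answer: str, worker_acc: float) -> Dict[str, float]:
    # Bayesian update of a question's label distribution after a worker of
    # estimated accuracy `worker_acc` answers `answer` (errors spread uniformly
    # over the other labels).
    labels = list(dist)
    raw = {
        label: dist[label] * (worker_acc if label == answer
                              else (1.0 - worker_acc) / (len(labels) - 1))
        for label in labels
    }
    z = sum(raw.values())
    return {label: p / z for label, p in raw.items()}

def assign(question_dists: Dict[str, Dict[str, float]], worker_acc: float, k: int) -> List[str]:
    # Pick the k questions whose expected Accuracy improves most if this worker
    # answers them; the expectation is taken over the worker's possible answers.
    gains: List[Tuple[float, str]] = []
    for qid, dist in question_dists.items():
        expected_after = 0.0
        for ans in dist:
            # Probability the worker returns `ans`, marginalising over the true label.
            p_ans = sum(
                dist[true] * (worker_acc if true == ans
                              else (1.0 - worker_acc) / (len(dist) - 1))
                for true in dist
            )
            expected_after += p_ans * expected_accuracy(posterior(dist, ans, worker_acc))
        gains.append((expected_after - expected_accuracy(dist), qid))
    gains.sort(reverse=True)
    return [qid for _, qid in gains[:k]]

# Example: three yes/no questions; assign 2 of them to a worker of estimated accuracy 0.8.
pool = {
    "q1": {"yes": 0.5, "no": 0.5},   # most uncertain
    "q2": {"yes": 0.9, "no": 0.1},   # nearly settled
    "q3": {"yes": 0.6, "no": 0.4},
}
print(assign(pool, worker_acc=0.8, k=2))   # -> ['q1', 'q3']

With these numbers, q2 is nearly settled, so another answer adds little expected Accuracy, while the uncertain q1 and q3 benefit most; the paper's F-score variant and its linear-time online strategies go well beyond this toy criterion.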

 

DC Field: Value
dc.contributor.author: Zheng, Y
dc.contributor.author: Wang, J
dc.contributor.author: Li, G
dc.contributor.author: Cheng, RCK
dc.contributor.author: Feng, J
dc.date.accessioned: 2015-08-12T07:13:37Z
dc.date.available: 2015-08-12T07:13:37Z
dc.date.issued: 2015
dc.identifier.citation: The 2015 ACM SIGMOD International Conference on Management of Data (SIGMOD '15), Melbourne, Australia, 22-27 June 2015. In Conference Proceedings, 2015, p. 1031-1046
dc.identifier.isbn: 978-1-4503-2758-9
dc.identifier.uri: http://hdl.handle.net/10722/213710
dc.description.abstract: A crowdsourcing system, such as the Amazon Mechanical Turk (AMT), provides a platform for a large number of questions to be answered by Internet workers. Such systems have been shown to be useful for solving problems that are difficult for computers, including entity resolution, sentiment analysis, and image recognition. In this paper, we investigate the online task assignment problem: given a pool of n questions, which k questions should be assigned to a worker? A poor assignment may not only waste time and money, but may also hurt the quality of a crowdsourcing application that depends on the workers' answers. We propose to consider quality measures (also known as evaluation metrics) that are relevant to an application during the task assignment process. In particular, we explore how Accuracy and F-score, two widely used evaluation metrics for crowdsourcing applications, can facilitate task assignment. Since these two metrics assume that the ground truth of a question is known, we study their variants that make use of the probability distributions derived from workers' answers. We further investigate online assignment strategies, which enable optimal task assignments. Since these algorithms are expensive, we propose solutions that attain high quality in linear time. We develop a system called the Quality-Aware Task Assignment System for Crowdsourcing Applications (QASCA) on top of AMT. We evaluate our approaches on five real crowdsourcing applications. We find that QASCA is efficient and attains better result quality (an improvement of more than 8%) compared with existing methods. © 2015 ACM, Inc.
dc.language: eng
dc.publisher: ACM.
dc.relation.ispartof: SIGMOD '15 - Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data
dc.subject: Crowdsourcing
dc.subject: Quality control
dc.subject: Online task assignment
dc.title: QASCA: a quality-aware task assignment system for crowdsourcing applications
dc.type: Conference_Paper
dc.identifier.email: Cheng, RCK: ckcheng@cs.hku.hk
dc.identifier.authority: Cheng, RCK=rp00074
dc.description.nature: link_to_OA_fulltext
dc.identifier.doi: 10.1145/2723372.2749430
dc.identifier.scopus: eid_2-s2.0-84957566214
dc.identifier.hkuros: 248523
dc.identifier.spage: 1031
dc.identifier.epage: 1046
dc.identifier.isi: WOS:000452535700080
dc.publisher.place: United States

Export: via the OAI-PMH interface in XML formats, or to other non-XML formats.
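Dublin Core records like the table above can typically be harvested programmatically over OAI-PMH. A minimal sketch in Python, assuming a standard OAI-PMH endpoint; the base URL and OAI identifier below are hypothetical placeholders and would need to be replaced with the repository's actual values.

import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical values -- substitute the repository's real OAI-PMH base URL and
# the OAI identifier corresponding to handle 10722/213710.
BASE_URL = "https://repository.example.org/oai/request"
IDENTIFIER = "oai:repository.example.org:10722/213710"

# Standard OAI-PMH GetRecord request asking for the Dublin Core (oai_dc) rendition.
url = f"{BASE_URL}?verb=GetRecord&identifier={IDENTIFIER}&metadataPrefix=oai_dc"

DC_NS = "http://purl.org/dc/elements/1.1/"   # Dublin Core element namespace

with urllib.request.urlopen(url) as response:
    root = ET.parse(response).getroot()

# Print a few fields, mirroring the dc.* rows listed above.
for tag in ("title", "creator", "subject", "identifier"):
    for element in root.iter(f"{{{DC_NS}}}{tag}"):
        print(f"dc.{tag}: {element.text}")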