Links for fulltext (may require subscription):
- Publisher Website: 10.1109/ICDE48307.2020.00012
- Scopus: eid_2-s2.0-85085856670
Citations:
- Scopus: 0
Appears in Collections: Conference Paper

An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms
Title | An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms |
---|---|
Authors | Shan, C; Mamoulis, N; Cheng, CKR; Li, G; Li, X; Qian, Y |
Keywords | crowdsourcing platform; task arrangement; reinforcement learning; deep Q-Network |
Issue Date | 2020 |
Publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000178 |
Citation | Proceedings of 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, 20-24 April 2020, p. 49-60 |
Abstract | In this paper, we propose a Deep Reinforcement Learning (RL) framework for task arrangement, which is a critical problem for the success of crowdsourcing platforms. Previous works conduct the personalized recommendation of tasks to workers via supervised learning methods. However, the majority of them only consider the benefit of either workers or requesters independently. In addition, they do not consider the real dynamic environments (e.g., dynamic tasks, dynamic workers), so they may produce sub-optimal results. To address these issues, we utilize Deep Q-Network (DQN), an RL-based method combined with a neural network to estimate the expected long-term return of recommending a task. DQN inherently considers the immediate and the future rewards and can be updated quickly to deal with evolving data and dynamic changes. Furthermore, we design two DQNs that capture the benefit of both workers and requesters and maximize the profit of the platform. To learn value functions in DQN effectively, we also propose novel state representations, carefully design the computation of Q values, and predict transition probabilities and future states. Experiments on synthetic and real datasets demonstrate the superior performance of our framework. |
Persistent Identifier | http://hdl.handle.net/10722/291189 |
ISSN | 1084-4627 |
2023 SCImago Journal Rankings | 1.306 |
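As a rough illustration of the approach the abstract describes, the sketch below shows a DQN-style temporal-difference update for scoring candidate tasks. Everything here is hypothetical and not taken from the paper: a linear Q-function stands in for the paper's neural network, the worker/task features and reward are fabricated, and the paper's two-DQN design (worker and requester benefit) and learned state representations are omitted for brevity.

```python
import numpy as np

# Hypothetical sketch: state = worker feature vector, action = candidate
# task feature vector. Q(s, a) estimates the expected long-term return
# of recommending that task; a linear model replaces the paper's network.

rng = np.random.default_rng(0)
DIM = 4                         # feature dimension (made up)

w = np.zeros(2 * DIM)           # Q-function weights
w_target = w.copy()             # target network, synced periodically
GAMMA, ALPHA = 0.9, 0.1         # discount factor, learning rate

def q_value(weights, state, task):
    """Estimated long-term return of recommending `task` in `state`."""
    return weights @ np.concatenate([state, task])

def td_update(state, task, reward, next_state, next_tasks):
    """One TD step: move Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    global w
    best_next = max(q_value(w_target, next_state, t) for t in next_tasks)
    error = reward + GAMMA * best_next - q_value(w, state, task)
    w = w + ALPHA * error * np.concatenate([state, task])
    return error

# Tiny synthetic interaction loop (fabricated data, illustration only).
for step in range(200):
    s = rng.random(DIM)
    tasks = [rng.random(DIM) for _ in range(3)]
    a = max(tasks, key=lambda t: q_value(w, s, t))  # greedy recommendation
    r = float(s @ a)                                # stand-in reward signal
    td_update(s, a, r, rng.random(DIM), tasks)
    if step % 50 == 0:
        w_target = w.copy()                         # sync target network
```

The target network and max over next actions are standard DQN ingredients that match the abstract's mention of estimating immediate plus future rewards; the paper additionally predicts transition probabilities and future states, which this sketch does not attempt.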
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shan, C | - |
dc.contributor.author | Mamoulis, N | - |
dc.contributor.author | Cheng, CKR | - |
dc.contributor.author | Li, G | - |
dc.contributor.author | Li, X | - |
dc.contributor.author | Qian, Y | - |
dc.date.accessioned | 2020-11-07T13:53:30Z | - |
dc.date.available | 2020-11-07T13:53:30Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Proceedings of 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, 20-24 April 2020, p. 49-60 | - |
dc.identifier.issn | 1084-4627 | - |
dc.identifier.uri | http://hdl.handle.net/10722/291189 | - |
dc.description.abstract | In this paper, we propose a Deep Reinforcement Learning (RL) framework for task arrangement, which is a critical problem for the success of crowdsourcing platforms. Previous works conduct the personalized recommendation of tasks to workers via supervised learning methods. However, the majority of them only consider the benefit of either workers or requesters independently. In addition, they do not consider the real dynamic environments (e.g., dynamic tasks, dynamic workers), so they may produce sub-optimal results. To address these issues, we utilize Deep Q-Network (DQN), an RL-based method combined with a neural network to estimate the expected long-term return of recommending a task. DQN inherently considers the immediate and the future rewards and can be updated quickly to deal with evolving data and dynamic changes. Furthermore, we design two DQNs that capture the benefit of both workers and requesters and maximize the profit of the platform. To learn value functions in DQN effectively, we also propose novel state representations, carefully design the computation of Q values, and predict transition probabilities and future states. Experiments on synthetic and real datasets demonstrate the superior performance of our framework. | - |
dc.language | eng | - |
dc.publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000178 | - |
dc.relation.ispartof | IEEE 36th International Conference on Data Engineering (ICDE) | - |
dc.rights | International Conference on Data Engineering. Proceedings. Copyright © IEEE Computer Society. | - |
dc.rights | ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | crowdsourcing platform | - |
dc.subject | task arrangement | - |
dc.subject | reinforcement learning | - |
dc.subject | deep Q-Network | - |
dc.title | An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Shan, C: sxdtgg@hku.hk | - |
dc.identifier.email | Mamoulis, N: nikos@cs.hku.hk | - |
dc.identifier.email | Cheng, CKR: ckcheng@cs.hku.hk | - |
dc.identifier.email | Li, X: xli2@hku.hk | - |
dc.identifier.authority | Mamoulis, N=rp00155 | - |
dc.identifier.authority | Cheng, CKR=rp00074 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICDE48307.2020.00012 | - |
dc.identifier.scopus | eid_2-s2.0-85085856670 | - |
dc.identifier.hkuros | 318668 | - |
dc.identifier.spage | 49 | - |
dc.identifier.epage | 60 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 1084-4627 | - |