Article: Scheduling for Mobile Edge Computing with Random User Arrivals – An Approximate MDP and Reinforcement Learning Approach

Title: Scheduling for Mobile Edge Computing with Random User Arrivals – An Approximate MDP and Reinforcement Learning Approach
Authors: Huang, S; Lv, B; Wang, R; Huang, K
Keywords: Task analysis; Mobile handsets; Servers; Processor scheduling; Reinforcement learning
Issue Date: 2020
Publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=25
Citation: IEEE Transactions on Vehicular Technology, 2020, v. 69 n. 7, p. 7735-7750
Abstract: In this paper, we investigate the scheduling design of a mobile edge computing (MEC) system in which active mobile devices with computation tasks randomly appear in a cell. Each task can be computed either at the mobile device or at the MEC server. We jointly optimize the task offloading decision, uplink transmission device selection, and power allocation by formulating the problem as an infinite-horizon Markov decision process (MDP). To the best of our knowledge, this is the first attempt to address the joint transmission and computation optimization with random device arrivals over an infinite time horizon. Because the number and locations of devices are uncertain, conventional approximate MDP approaches to the curse of dimensionality cannot be applied, and an alternative low-complexity solution framework is proposed in this work. We first introduce a baseline scheduling policy whose value function can be derived analytically from the statistics of random mobile device arrivals. One-step policy iteration is then applied to obtain a sub-optimal scheduling policy whose performance can be bounded analytically. By eliminating the complicated value iteration, the complexity of deriving the sub-optimal policy is reduced dramatically compared with conventional MDP solutions. To address the more general scenario in which the statistics of random mobile device arrivals are unknown, a novel and efficient algorithm integrating reinforcement learning and stochastic gradient descent (SGD) is proposed to improve the system performance in an online manner. Simulation results show that the gain of the sub-optimal policy over various benchmarks is significant.
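The two algorithmic ideas in the abstract, a single policy-improvement step over an analytically known baseline value function, and an online SGD-based learning step for unknown arrival statistics, can each be sketched briefly. The Python below is a minimal illustration, not the paper's actual MEC model: the state space, action set, cost function, transition model, feature map, discount factor, and step size are all hypothetical placeholders.

```python
# Sketch 1: one-step policy iteration. Given the baseline policy's value
# function v_baseline (assumed known analytically), a single greedy
# improvement step yields the sub-optimal policy, with no value iteration.
def one_step_policy_iteration(states, actions, cost, transition, v_baseline,
                              gamma=0.95):
    """cost(s, a): immediate cost; transition(s, a): {next_state: prob};
    v_baseline[s]: value of the baseline policy at state s."""
    improved_policy = {}
    for s in states:
        # Q-factor of each action, evaluated with the baseline value function.
        q = {a: cost(s, a) + gamma * sum(p * v_baseline[s_next]
                                         for s_next, p in transition(s, a).items())
             for a in actions}
        improved_policy[s] = min(q, key=q.get)  # greedy, cost-minimizing action
    return improved_policy


# Sketch 2: a generic TD(0)-style semi-gradient SGD update of a linearly
# parameterized value estimate V(s) = sum_i w[i] * features(s)[i], one
# standard way to combine reinforcement learning with SGD when arrival
# statistics are unknown. The paper's actual update rule may differ.
def td0_sgd_update(w, features, s, s_next, stage_cost, gamma=0.95, lr=0.01):
    phi, phi_next = features(s), features(s_next)
    v = sum(wi * fi for wi, fi in zip(w, phi))
    v_next = sum(wi * fi for wi, fi in zip(w, phi_next))
    td_error = stage_cost + gamma * v_next - v  # one-step Bellman residual
    # Move the weights along the semi-gradient direction.
    return [wi + lr * td_error * fi for wi, fi in zip(w, phi)]
```

Both sketches run on any small discrete model; for example, calling one_step_policy_iteration on a two-state toy MDP returns the greedy action per state with respect to v_baseline.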
Persistent Identifier: http://hdl.handle.net/10722/290904
ISSN: 0018-9545
2021 Impact Factor: 6.239
2020 SCImago Journal Rankings: 1.365
ISI Accession Number ID: WOS:000549318100068

 

DC Field: Value
dc.contributor.author: Huang, S
dc.contributor.author: Lv, B
dc.contributor.author: Wang, R
dc.contributor.author: Huang, K
dc.date.accessioned: 2020-11-02T05:48:44Z
dc.date.available: 2020-11-02T05:48:44Z
dc.date.issued: 2020
dc.identifier.citation: IEEE Transactions on Vehicular Technology, 2020, v. 69 n. 7, p. 7735-7750
dc.identifier.issn: 0018-9545
dc.identifier.uri: http://hdl.handle.net/10722/290904
dc.description.abstract: In this paper, we investigate the scheduling design of a mobile edge computing (MEC) system in which active mobile devices with computation tasks randomly appear in a cell. Each task can be computed either at the mobile device or at the MEC server. We jointly optimize the task offloading decision, uplink transmission device selection, and power allocation by formulating the problem as an infinite-horizon Markov decision process (MDP). To the best of our knowledge, this is the first attempt to address the joint transmission and computation optimization with random device arrivals over an infinite time horizon. Because the number and locations of devices are uncertain, conventional approximate MDP approaches to the curse of dimensionality cannot be applied, and an alternative low-complexity solution framework is proposed in this work. We first introduce a baseline scheduling policy whose value function can be derived analytically from the statistics of random mobile device arrivals. One-step policy iteration is then applied to obtain a sub-optimal scheduling policy whose performance can be bounded analytically. By eliminating the complicated value iteration, the complexity of deriving the sub-optimal policy is reduced dramatically compared with conventional MDP solutions. To address the more general scenario in which the statistics of random mobile device arrivals are unknown, a novel and efficient algorithm integrating reinforcement learning and stochastic gradient descent (SGD) is proposed to improve the system performance in an online manner. Simulation results show that the gain of the sub-optimal policy over various benchmarks is significant.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=25
dc.relation.ispartof: IEEE Transactions on Vehicular Technology
dc.rights: IEEE Transactions on Vehicular Technology. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Task analysis
dc.subject: Mobile handsets
dc.subject: Servers
dc.subject: Processor scheduling
dc.subject: Reinforcement learning
dc.title: Scheduling for Mobile Edge Computing with Random User Arrivals – An Approximate MDP and Reinforcement Learning Approach
dc.type: Article
dc.identifier.email: Huang, K: huangkb@eee.hku.hk
dc.identifier.authority: Huang, K=rp01875
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TVT.2020.2990482
dc.identifier.scopus: eid_2-s2.0-85088499598
dc.identifier.hkuros: 318036
dc.identifier.volume: 69
dc.identifier.issue: 7
dc.identifier.spage: 7735
dc.identifier.epage: 7750
dc.identifier.isi: WOS:000549318100068
dc.publisher.place: United States
dc.identifier.issnl: 0018-9545
