Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/ICCC47050.2019.9064365
- Scopus: eid_2-s2.0-85084076935
Citations:
- Scopus: 0
Appears in Collections: Conference Paper

A Deep Reinforcement Learning based Traffic Offloading Scheme for Vehicular Networks
Title | A Deep Reinforcement Learning based Traffic Offloading Scheme for Vehicular Networks |
---|---|
Authors | Guo, Y; Ning, Z; Kwok, YK |
Keywords | Mobile Edge Computing; Computation Offloading; Deep Reinforcement Learning; Internet of Vehicles |
Issue Date | 2019 |
Publisher | IEEE |
Citation | 2019 IEEE 5th International Conference on Computer and Communications (ICCC 2019), Chengdu, China, 6-9 December 2019. |
Abstract | With the emergence of pervasive mobile devices, mobile cloud computing can no longer fully meet user demands, which has prompted the rise of Mobile Edge Computing (MEC). Tasks can be offloaded to MEC servers when a mobile device's own processing capability cannot satisfy its needs. With the introduction of 5G and the development of the Internet of Vehicles (IoV), the data generated by vehicles and passengers will demand ever more computation. In this paper, we use a deep reinforcement learning based method to offload computation tasks to MEC servers. We first evaluate single-user mobile edge offloading, comparing and analyzing two deep reinforcement learning based algorithms, and then extend the comparison experiments to the multi-user setting. On this basis, suitable learning rates for computation offloading in IoV-oriented MEC can be found using the deep deterministic policy gradient algorithm. The experimental results demonstrate the efficiency of the designed offloading scheme. |
Persistent Identifier | http://hdl.handle.net/10722/276096 |
ISBN | 9781728147437 |
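The abstract describes learning a policy that decides whether a computation task runs on the device or is offloaded to an MEC server. As a rough illustration only (not the paper's implementation, which uses deep RL methods such as DDPG), the sketch below casts a single-user offload-or-local decision as a tiny reinforcement learning problem solved with tabular Q-learning; the task-load states and delay costs are made-up numbers for demonstration.

```python
# Toy single-user offloading decision via tabular Q-learning.
# States, actions, and costs are hypothetical, chosen only to
# illustrate the offload-vs-local trade-off from the abstract.
import random

random.seed(0)

ACTIONS = ("local", "offload")   # run on device vs. send to MEC server
STATES = ("light", "heavy")      # hypothetical task-load states

# Hypothetical expected delay cost per (state, action): light tasks are
# cheaper locally; heavy tasks benefit from the MEC server's capacity.
COST = {
    ("light", "local"): 1.0, ("light", "offload"): 2.0,
    ("heavy", "local"): 8.0, ("heavy", "offload"): 3.0,
}

def train(episodes=2000, alpha=0.1, epsilon=0.1):
    """One-step (bandit-style) Q-learning with reward = -delay cost."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)
        if random.random() < epsilon:                  # explore
            a = random.choice(ACTIONS)
        else:                                          # exploit best-known action
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = -(COST[(s, a)] + random.uniform(-0.5, 0.5))  # noisy observed reward
        q[(s, a)] += alpha * (r - q[(s, a)])             # incremental update
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

Under these assumed costs the learned policy keeps light tasks local and offloads heavy ones. The paper's setting differs in using deep function approximation (e.g. the deep deterministic policy gradient algorithm) and multi-user experiments, where tabular methods no longer scale.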
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Guo, Y | - |
dc.contributor.author | Ning, Z | - |
dc.contributor.author | Kwok, YK | - |
dc.date.accessioned | 2019-09-10T02:55:52Z | - |
dc.date.available | 2019-09-10T02:55:52Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | 2019 IEEE 5th International Conference on Computer and Communications (ICCC 2019), Chengdu, China, 6-9 December 2019. | - |
dc.identifier.isbn | 9781728147437 | - |
dc.identifier.uri | http://hdl.handle.net/10722/276096 | - |
dc.description.abstract | With the emergence of pervasive mobile devices, mobile cloud computing can no longer fully meet user demands, which has prompted the rise of Mobile Edge Computing (MEC). Tasks can be offloaded to MEC servers when a mobile device's own processing capability cannot satisfy its needs. With the introduction of 5G and the development of the Internet of Vehicles (IoV), the data generated by vehicles and passengers will demand ever more computation. In this paper, we use a deep reinforcement learning based method to offload computation tasks to MEC servers. We first evaluate single-user mobile edge offloading, comparing and analyzing two deep reinforcement learning based algorithms, and then extend the comparison experiments to the multi-user setting. On this basis, suitable learning rates for computation offloading in IoV-oriented MEC can be found using the deep deterministic policy gradient algorithm. The experimental results demonstrate the efficiency of the designed offloading scheme. | - |
dc.language | eng | - |
dc.publisher | IEEE. | - |
dc.relation.ispartof | 2019 IEEE 5th International Conference on Computer and Communications (ICCC) | - |
dc.rights | 2019 IEEE 5th International Conference on Computer and Communications (ICCC). Copyright © IEEE. | - |
dc.rights | ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | Mobile Edge Computing | - |
dc.subject | Computation Offloading | - |
dc.subject | Deep Reinforcement Learning | - |
dc.subject | Internet of Vehicles | - |
dc.title | A Deep Reinforcement Learning based Traffic Offloading Scheme for Vehicular Networks | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Ning, Z: zning@hku.hk | - |
dc.identifier.email | Kwok, YK: ykwok@hku.hk | - |
dc.identifier.authority | Kwok, YK=rp00128 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCC47050.2019.9064365 | - |
dc.identifier.scopus | eid_2-s2.0-85084076935 | - |
dc.identifier.hkuros | 303925 | - |
dc.publisher.place | Chengdu, China | - |