Links for fulltext (may require subscription):
- Publisher (DOI): 10.1109/TCCN.2019.2930521
- Scopus: eid_2-s2.0-85076722516
- Web of Science: WOS:000502789700018
Article: Deep Reinforcement Learning for Intelligent Internet of Vehicles: An Energy-Efficient Computational Offloading Scheme
Title | Deep Reinforcement Learning for Intelligent Internet of Vehicles: An Energy-Efficient Computational Offloading Scheme |
---|---|
Authors | Ning, Z; Dong, P; Wang, X; Guo, L; Rodrigues, JJPC; Kong, X; Huang, J; Kwok, RYK |
Keywords | Internet of vehicles; Deep reinforcement learning; Computation offloading; Energy efficiency |
Issue Date | 2019 |
Publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6687307 |
Citation | IEEE Transactions on Cognitive Communications and Networking, 2019, v. 5 n. 4, p. 1060-1072 |
Abstract | The emerging vehicular services call for updated communication and computing platforms. Fog computing, whose infrastructure is deployed in close proximity to terminals, extends the facilities of cloud computing. However, due to the limitation of vehicular fog nodes, it is challenging to satisfy the quality of experience of users, calling for intelligent networks with updated computing abilities. This paper constructs a three-layer offloading framework in the intelligent Internet of Vehicles (IoV) to minimize the overall energy consumption while satisfying the delay constraint of users. Due to its high computational complexity, the formulated problem is decomposed into two parts: flow redirection and offloading decision. After that, a deep reinforcement learning-based scheme is put forward to solve the optimization problem. Performance evaluations based on real-world traces of taxis in Shanghai (China) demonstrate the effectiveness of our methods: average energy consumption can be decreased by around 60 percent compared with the baseline algorithm. |
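The abstract's offloading-decision subproblem (pick where each task runs so that energy is minimized under a delay cap) can be illustrated with a deliberately simplified reinforcement-learning sketch. This is not the paper's deep RL scheme: it uses tabular Q-learning instead of a neural network, a single task-size state, one-step episodes, and made-up energy/delay constants, purely to show the reward shaping of "energy cost plus a penalty for violating the delay constraint".

```python
import random
from collections import defaultdict

# Hypothetical stand-in for the offloading-decision subproblem: choose where
# a task runs (0 = local vehicle, 1 = fog node, 2 = cloud) to minimise
# energy while respecting a user delay cap. All constants are illustrative
# assumptions, not values from the paper.
ACTIONS = (0, 1, 2)                      # local / fog / cloud
ENERGY  = {0: 5.0, 1: 2.0, 2: 1.0}      # energy cost per unit of task size
DELAY   = {0: 1.0, 1: 2.0, 2: 4.0}      # latency per unit of task size
DELAY_CAP = 9.0                          # user delay constraint

def reward(task_size, action):
    """Negative energy, with a heavy penalty if the delay cap is violated."""
    energy = ENERGY[action] * task_size
    delay = DELAY[action] * task_size
    penalty = 50.0 if delay > DELAY_CAP else 0.0
    return -(energy + penalty)

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning over task-size states; episodes are one step long,
    so the update target is just the immediate reward."""
    rng = random.Random(seed)
    q = defaultdict(float)               # (state, action) -> value estimate
    for _ in range(episodes):
        s = rng.randint(1, 4)            # task size in [1, 4] units
        if rng.random() < eps:           # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

def policy(q, s):
    """Greedy offloading decision for a task of size s."""
    return max(ACTIONS, key=lambda x: q[(s, x)])

q = train()
for s in (1, 2, 3, 4):
    print("task size", s, "-> offload to", policy(q, s))
```

With these assumed constants, the learned policy sends small tasks to the energy-cheap but slow cloud and larger tasks to the fog layer once the cloud's latency would break the delay cap, which is the qualitative trade-off the three-layer framework is built around.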
Persistent Identifier | http://hdl.handle.net/10722/275022 |
ISSN | 2332-7731 (2023 Impact Factor: 7.4; 2023 SCImago Journal Rankings: 3.371) |
ISI Accession Number ID | WOS:000502789700018 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ning, Z | - |
dc.contributor.author | Dong, P | - |
dc.contributor.author | Wang, X | - |
dc.contributor.author | Guo, L | - |
dc.contributor.author | Rodrigues, JJPC | - |
dc.contributor.author | Kong, X | - |
dc.contributor.author | Huang, J | - |
dc.contributor.author | Kwok, RYK | - |
dc.date.accessioned | 2019-09-10T02:33:51Z | - |
dc.date.available | 2019-09-10T02:33:51Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | IEEE Transactions on Cognitive Communications and Networking, 2019, v. 5 n. 4, p. 1060-1072 | - |
dc.identifier.issn | 2332-7731 | - |
dc.identifier.uri | http://hdl.handle.net/10722/275022 | - |
dc.description.abstract | The emerging vehicular services call for updated communication and computing platforms. Fog computing, whose infrastructure is deployed in close proximity to terminals, extends the facilities of cloud computing. However, due to the limitation of vehicular fog nodes, it is challenging to satisfy the quality of experience of users, calling for intelligent networks with updated computing abilities. This paper constructs a three-layer offloading framework in the intelligent Internet of Vehicles (IoV) to minimize the overall energy consumption while satisfying the delay constraint of users. Due to its high computational complexity, the formulated problem is decomposed into two parts: flow redirection and offloading decision. After that, a deep reinforcement learning-based scheme is put forward to solve the optimization problem. Performance evaluations based on real-world traces of taxis in Shanghai (China) demonstrate the effectiveness of our methods: average energy consumption can be decreased by around 60 percent compared with the baseline algorithm. | - |
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6687307 | - |
dc.relation.ispartof | IEEE Transactions on Cognitive Communications and Networking | - |
dc.subject | Internet of vehicles | - |
dc.subject | Deep reinforcement learning | - |
dc.subject | Computation offloading | - |
dc.subject | Energy efficiency | - |
dc.title | Deep Reinforcement Learning for Intelligent Internet of Vehicles: An Energy-Efficient Computational Offloading Scheme | - |
dc.type | Article | - |
dc.identifier.email | Ning, Z: zning@hku.hk | - |
dc.identifier.email | Kwok, RYK: ykwok@hku.hk | - |
dc.identifier.authority | Kwok, RYK=rp00128 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TCCN.2019.2930521 | - |
dc.identifier.scopus | eid_2-s2.0-85076722516 | - |
dc.identifier.hkuros | 303924 | - |
dc.identifier.volume | 5 | - |
dc.identifier.issue | 4 | - |
dc.identifier.spage | 1060 | - |
dc.identifier.epage | 1072 | - |
dc.identifier.isi | WOS:000502789700018 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 2332-7731 | - |