Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/ICC.2019.8761349
- Scopus: eid_2-s2.0-85070237774
Citations:
- Scopus: 0

Appears in Collections:
Conference Paper: Deep Reinforcement Learning in Cache-Aided MEC Networks
Title | Deep Reinforcement Learning in Cache-Aided MEC Networks |
---|---|
Authors | Yang, Zhong; Liu, Yuanwei; Chen, Yue; Tyson, Gareth |
Issue Date | 2019 |
Citation | IEEE International Conference on Communications, 2019, v. 2019-May, article no. 8761349 |
Abstract | A novel resource allocation scheme for cache-aided mobile-edge computing (MEC) is proposed to efficiently offer communication, storage, and computing services for computation-intensive and latency-sensitive tasks. In this paper, the considered resource allocation problem is formulated as a mixed-integer non-linear program (MINLP) that involves a joint optimization of the task offloading decision, cache allocation, computation allocation, and dynamic power distribution. To tackle this non-trivial problem, a Markov decision process (MDP) is invoked, enabling mobile users and the access point (AP) to learn the optimal offloading and resource allocation policy from historical experience and automatically improve allocation efficiency. In particular, to break the curse of high dimensionality in the state space of the MDP, a deep reinforcement learning (DRL) algorithm is proposed to solve this optimization problem with low complexity. Moreover, extensive simulations demonstrate that the proposed algorithm achieves quasi-optimal performance under various system setups and significantly outperforms the other representative benchmark methods considered. The effectiveness of the proposed algorithm is confirmed by comparison with the results of the optimal solution. |
Persistent Identifier | http://hdl.handle.net/10722/349339 |
ISSN | 1550-3607 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, Zhong | - |
dc.contributor.author | Liu, Yuanwei | - |
dc.contributor.author | Chen, Yue | - |
dc.contributor.author | Tyson, Gareth | - |
dc.date.accessioned | 2024-10-17T06:57:52Z | - |
dc.date.available | 2024-10-17T06:57:52Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | IEEE International Conference on Communications, 2019, v. 2019-May, article no. 8761349 | - |
dc.identifier.issn | 1550-3607 | - |
dc.identifier.uri | http://hdl.handle.net/10722/349339 | - |
dc.description.abstract | A novel resource allocation scheme for cache-aided mobile-edge computing (MEC) is proposed to efficiently offer communication, storage, and computing services for computation-intensive and latency-sensitive tasks. In this paper, the considered resource allocation problem is formulated as a mixed-integer non-linear program (MINLP) that involves a joint optimization of the task offloading decision, cache allocation, computation allocation, and dynamic power distribution. To tackle this non-trivial problem, a Markov decision process (MDP) is invoked, enabling mobile users and the access point (AP) to learn the optimal offloading and resource allocation policy from historical experience and automatically improve allocation efficiency. In particular, to break the curse of high dimensionality in the state space of the MDP, a deep reinforcement learning (DRL) algorithm is proposed to solve this optimization problem with low complexity. Moreover, extensive simulations demonstrate that the proposed algorithm achieves quasi-optimal performance under various system setups and significantly outperforms the other representative benchmark methods considered. The effectiveness of the proposed algorithm is confirmed by comparison with the results of the optimal solution. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE International Conference on Communications | - |
dc.title | Deep Reinforcement Learning in Cache-Aided MEC Networks | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICC.2019.8761349 | - |
dc.identifier.scopus | eid_2-s2.0-85070237774 | - |
dc.identifier.volume | 2019-May | - |
dc.identifier.spage | article no. 8761349 | - |
dc.identifier.epage | article no. 8761349 | - |
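The abstract describes casting the joint offloading and resource allocation problem as an MDP solved with DRL. As a loose illustration of that idea only (not the paper's algorithm, and far simpler than its MINLP formulation), the sketch below uses tabular Q-learning for a single-user toy offloading decision; every state, rate, and reward value here is invented for illustration.

```python
import random

# Illustrative tabular Q-learning sketch (NOT the paper's DRL algorithm):
# one mobile user decides whether to compute a task locally or offload it
# to the AP, where a cache hit lets the upload be skipped. Toy values only.

STATES = [(size, cached) for size in range(3) for cached in (0, 1)]
ACTIONS = [0, 1]  # 0 = compute locally, 1 = offload to AP

LOCAL_CPU = 1.0   # local processing rate (arbitrary units)
EDGE_CPU = 4.0    # edge-server processing rate
UPLINK = 0.5      # slow uplink, so offloading only pays off on cache hits

def latency(state, action):
    size, cached = state
    work = size + 1                          # workload grows with size bucket
    if action == 0:                          # local execution: compute only
        return work / LOCAL_CPU
    tx = 0.0 if cached else work / UPLINK    # cache hit skips the upload
    return tx + work / EDGE_CPU              # offload: transmit + edge compute

# Q-table over (state, action); reward is negative latency.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2
random.seed(0)

for _ in range(20000):
    s = random.choice(STATES)                # tasks arrive i.i.d. in this toy
    a = random.choice(ACTIONS) if random.random() < epsilon else \
        max(ACTIONS, key=lambda act: Q[(s, act)])
    r = -latency(s, a)
    # One-step (bandit-style) update: each decision is its own episode.
    Q[(s, a)] += alpha * (r - Q[(s, a)])

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES}
for s in sorted(policy):
    print(s, "->", "offload" if policy[s] else "local")
```

With these toy rates the learned policy offloads exactly when the task is cached (upload skipped), and computes locally otherwise; the paper's DRL algorithm replaces this small Q-table with a deep network to cope with the high-dimensional joint state space.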