File Download: There are no files associated with this item.

Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/LCOMM.2021.3093385
- Scopus: eid_2-s2.0-85112159692
Citations:
- Scopus: 0
Article: Adaptive Reinforcement Learning Framework for NOMA-UAV Networks
Title | Adaptive Reinforcement Learning Framework for NOMA-UAV Networks |
---|---|
Authors | Mahmud, Syed Khurram; Liu, Yuanwei; Chen, Yue; Chai, Kok Keong |
Keywords | Fuzzy; Joint maximum likelihood detection; Multi-armed bandits (MAB); Non-orthogonal multiple access (NOMA); Reinforcement learning (RL); Unmanned aerial vehicles (UAV) |
Issue Date | 2021 |
Citation | IEEE Communications Letters, 2021, v. 25, n. 9, p. 2943-2947 |
Abstract | We propose an adaptive reinforcement learning (A-RL) framework to maximize the sum-rate of a non-orthogonal multiple access unmanned aerial vehicle (NOMA-UAV) network. In this framework, a Mamdani fuzzy inference system (MFIS) supervises a reinforcement learning (RL) policy based on multi-armed bandits (MAB). The UAV, acting as the learning agent, serves an Internet of Things (IoT) region and manages an interference-affected channel block for the NOMA uplink. The sum-rate, rate outage probability, and average bit error rate (BER) for the far user are compared. Simulations reveal the superior performance of A-RL compared to its non-adaptive RL counterpart. Joint maximum likelihood detection (JMLD) and successive interference cancellation (SIC) are also compared in terms of BER performance and implementation complexity. |
Persistent Identifier | http://hdl.handle.net/10722/349586 |
ISSN | 1089-7798 (2023 Impact Factor: 3.7; 2023 SCImago Journal Rankings: 1.887) |
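
The abstract above describes a multi-armed-bandit RL policy whose exploration is steered by a fuzzy supervisor. Purely as an illustration of that general idea, and not the authors' algorithm, the minimal Python sketch below runs an epsilon-greedy bandit over a few hypothetical NOMA resource choices and lets a crude rule-based "supervisor" re-tune the exploration rate from recent reward; all arm values, thresholds, and the `fuzzy_supervisor` rules are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: each arm stands for one candidate NOMA resource choice,
# and its "reward" is a stand-in for the observed sum-rate (values are made up).
TRUE_MEAN_REWARD = np.array([0.55, 0.70, 0.40, 0.62])
N_ARMS = len(TRUE_MEAN_REWARD)
HORIZON = 2000

counts = np.zeros(N_ARMS)      # times each arm was played
estimates = np.zeros(N_ARMS)   # running mean reward per arm
epsilon = 0.3                  # exploration rate, adapted by the supervisor


def fuzzy_supervisor(recent_rewards):
    """Crude stand-in for a Mamdani-style supervisor: map recent average
    reward (low / medium / high) to an exploration rate (high / medium / low)."""
    avg = float(np.mean(recent_rewards)) if recent_rewards else 0.0
    if avg < 0.45:    # "low" reward -> explore more
        return 0.3
    elif avg < 0.6:   # "medium" reward -> moderate exploration
        return 0.1
    return 0.02       # "high" reward -> mostly exploit


recent = []
for t in range(HORIZON):
    # epsilon-greedy arm selection
    if rng.random() < epsilon:
        arm = int(rng.integers(N_ARMS))
    else:
        arm = int(np.argmax(estimates))

    # noisy reward around the (hypothetical) mean sum-rate of that arm
    reward = TRUE_MEAN_REWARD[arm] + 0.05 * rng.standard_normal()

    # incremental update of the running mean estimate
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

    # the supervisor periodically re-tunes epsilon from recent performance
    recent.append(reward)
    if len(recent) >= 50:
        epsilon = fuzzy_supervisor(recent)
        recent = []

print("estimated mean reward per arm:", np.round(estimates, 3))
print("best arm found:", int(np.argmax(estimates)))
```

The split between a fast inner bandit loop and a slower supervisory loop mirrors, at a very high level, the adaptive structure described in the abstract, but the actual A-RL framework, its fuzzy rule base, and its reward definition are specified in the paper itself.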
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mahmud, Syed Khurram | - |
dc.contributor.author | Liu, Yuanwei | - |
dc.contributor.author | Chen, Yue | - |
dc.contributor.author | Chai, Kok Keong | - |
dc.date.accessioned | 2024-10-17T06:59:31Z | - |
dc.date.available | 2024-10-17T06:59:31Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE Communications Letters, 2021, v. 25, n. 9, p. 2943-2947 | - |
dc.identifier.issn | 1089-7798 | - |
dc.identifier.uri | http://hdl.handle.net/10722/349586 | - |
dc.description.abstract | We propose an adaptive reinforcement learning (A-RL) framework to maximize the sum-rate of a non-orthogonal multiple access unmanned aerial vehicle (NOMA-UAV) network. In this framework, a Mamdani fuzzy inference system (MFIS) supervises a reinforcement learning (RL) policy based on multi-armed bandits (MAB). The UAV, acting as the learning agent, serves an Internet of Things (IoT) region and manages an interference-affected channel block for the NOMA uplink. The sum-rate, rate outage probability, and average bit error rate (BER) for the far user are compared. Simulations reveal the superior performance of A-RL compared to its non-adaptive RL counterpart. Joint maximum likelihood detection (JMLD) and successive interference cancellation (SIC) are also compared in terms of BER performance and implementation complexity. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Communications Letters | - |
dc.subject | Fuzzy | - |
dc.subject | Joint maximum likelihood detection | - |
dc.subject | Multi-armed bandits (MAB) | - |
dc.subject | Non-orthogonal multiple access (NOMA) | - |
dc.subject | Reinforcement learning (RL) | - |
dc.subject | Unmanned aerial vehicles (UAV) | - |
dc.title | Adaptive Reinforcement Learning Framework for NOMA-UAV Networks | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/LCOMM.2021.3093385 | - |
dc.identifier.scopus | eid_2-s2.0-85112159692 | - |
dc.identifier.volume | 25 | - |
dc.identifier.issue | 9 | - |
dc.identifier.spage | 2943 | - |
dc.identifier.epage | 2947 | - |
dc.identifier.eissn | 1558-2558 | - |