Conference Paper: Reinforcement learning for user clustering in NOMA-enabled uplink IoT

Title: Reinforcement learning for user clustering in NOMA-enabled uplink IoT
Authors: Ahsan, Waleed; Yi, Wenqiang; Liu, Yuanwei; Qin, Zhijin; Nallanathan, Arumugam
Issue Date: 2020
Citation: 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings, 2020, article no. 9145187
Abstract: Model-driven algorithms have been investigated in wireless communications for decades. Recently, model-free methods based on machine learning techniques have been rapidly developed in the field of non-orthogonal multiple access (NOMA) to dynamically optimize multiple parameters (e.g., the number of resource blocks and QoS). In this paper, with the aid of SARSA Q-learning and deep reinforcement learning (DRL), we propose a user clustering-based resource allocation scheme for uplink NOMA in multi-cell systems. The scheme groups users according to network traffic to efficiently utilise the available resources: SARSA Q-learning is applied under light traffic and DRL under heavy traffic. To characterize the performance of the proposed optimization algorithms, the capacity achieved by all users is used to define the reward function. The proposed SARSA Q-learning and DRL algorithms assist base stations in efficiently assigning the available resources to IoT users under different traffic conditions. Simulation results show that both algorithms outperform orthogonal multiple access (OMA) in all experiments and converge to the maximum sum rate.
Persistent Identifier: http://hdl.handle.net/10722/349465
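The abstract above describes user clustering driven by SARSA Q-learning with a sum-rate reward. The following is a minimal, hypothetical sketch of that idea; the state/action encoding, cluster counts, channel model, and hyperparameters are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical SARSA Q-learning sketch for NOMA user clustering.
# Reward = toy uplink sum rate; all names and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_USERS = 4                         # IoT users to cluster (assumption)
N_CLUSTERS = 2                      # NOMA clusters / resource blocks (assumption)
N_STATES = N_CLUSTERS ** N_USERS    # one state per full clustering assignment
N_ACTIONS = N_USERS * N_CLUSTERS    # action: move one user to one cluster
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, epsilon-greedy

Q = np.zeros((N_STATES, N_ACTIONS))

def decode(state):
    """Map a state index to a per-user cluster assignment."""
    return [(state // N_CLUSTERS**u) % N_CLUSTERS for u in range(N_USERS)]

def step(state, action):
    """Reassign one user, return next state and a toy sum-rate reward."""
    assign = decode(state)
    user, cluster = divmod(action, N_CLUSTERS)
    assign[user] = cluster
    next_state = sum(c * N_CLUSTERS**u for u, c in enumerate(assign))
    # Toy rate model: users sharing a cluster interfere (SIC ignored here).
    gains = rng.rayleigh(scale=1.0, size=N_USERS) ** 2
    rate = 0.0
    for u in range(N_USERS):
        interf = sum(gains[v] for v in range(N_USERS)
                     if v != u and assign[v] == assign[u])
        rate += np.log2(1.0 + gains[u] / (1.0 + interf))
    return next_state, rate

def policy(state):
    """Epsilon-greedy action selection over the Q-table."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

state = int(rng.integers(N_STATES))
action = policy(state)
for _ in range(5000):
    next_state, reward = step(state, action)
    next_action = policy(next_state)   # on-policy: SARSA bootstraps on the action taken
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state, next_action]
                                 - Q[state, action])
    state, action = next_state, next_action

print("Best clustering found:", decode(int(np.argmax(Q.max(axis=1)))))
```

The paper pairs this tabular, on-policy update with a DRL agent for heavy traffic; a deep variant would replace the Q-table with a neural network approximator, which is not shown here.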

 

DC Field | Value | Language
dc.contributor.author | Ahsan, Waleed | -
dc.contributor.author | Yi, Wenqiang | -
dc.contributor.author | Liu, Yuanwei | -
dc.contributor.author | Qin, Zhijin | -
dc.contributor.author | Nallanathan, Arumugam | -
dc.date.accessioned | 2024-10-17T06:58:43Z | -
dc.date.available | 2024-10-17T06:58:43Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings, 2020, article no. 9145187 | -
dc.identifier.uri | http://hdl.handle.net/10722/349465 | -
dc.description.abstract | Model-driven algorithms have been investigated in wireless communications for decades. Recently, model-free methods based on machine learning techniques have been rapidly developed in the field of non-orthogonal multiple access (NOMA) to dynamically optimize multiple parameters (e.g., the number of resource blocks and QoS). In this paper, with the aid of SARSA Q-learning and deep reinforcement learning (DRL), we propose a user clustering-based resource allocation scheme for uplink NOMA in multi-cell systems. The scheme groups users according to network traffic to efficiently utilise the available resources: SARSA Q-learning is applied under light traffic and DRL under heavy traffic. To characterize the performance of the proposed optimization algorithms, the capacity achieved by all users is used to define the reward function. The proposed SARSA Q-learning and DRL algorithms assist base stations in efficiently assigning the available resources to IoT users under different traffic conditions. Simulation results show that both algorithms outperform orthogonal multiple access (OMA) in all experiments and converge to the maximum sum rate. | -
dc.language | eng | -
dc.relation.ispartof | 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings | -
dc.title | Reinforcement learning for user clustering in NOMA-enabled uplink IoT | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/ICCWorkshops49005.2020.9145187 | -
dc.identifier.scopus | eid_2-s2.0-85090283282 | -
dc.identifier.spage | article no. 9145187 | -
dc.identifier.epage | article no. 9145187 | -
