Article: Machine Learning in RIS-Assisted NOMA IoT Networks

Title: Machine Learning in RIS-Assisted NOMA IoT Networks
Authors: Zou, Yixuan; Liu, Yuanwei; Mu, Xidong; Zhang, Xingqi; Liu, Yue; Yuen, Chau
Keywords: Deep learning (DL); deep reinforcement learning (DRL); Internet of Things (IoT) networks; nonorthogonal multiple access (NOMA); reconfigurable intelligent surfaces (RISs)
Issue Date: 2023
Citation: IEEE Internet of Things Journal, 2023, v. 10, n. 22, p. 19427-19440
Abstract: A reconfigurable intelligent surface (RIS)-assisted downlink nonorthogonal multiple access (NOMA) Internet of Things (IoT) network is proposed, where a Quality-of-Service (QoS)-based NOMA clustering scheme is conceived to effectively utilize the limited wireless resources among IoT devices. A throughput maximization problem is formulated by jointly optimizing the phase shifts of the RIS and the power allocation of the base station (BS) from both short-term and long-term perspectives. We aim to investigate and compare the performance of deep learning (DL) and deep reinforcement learning (DRL) algorithms for solving the formulated problems. In particular, the DL method utilizes model-agnostic meta-learning (MAML) to enhance the generalization capability of the neural network and to accelerate the convergence rate. For the DRL method, the deep deterministic policy gradient (DDPG) algorithm is employed to handle the continuous phase-shift variables. It is shown that the DL method focuses only on maximizing the instantaneous throughput, whereas the DRL method can coordinate power consumption across time slots to maximize the long-term throughput. Numerical results demonstrate that: 1) the proposed QoS-based NOMA clustering scheme achieves higher IoT throughput than the conventional channel-based scheme; 2) the implementation of RISs yields an approximately 5%-25% throughput gain as the number of RIS elements increases from 8 to 64; and 3) DL and DRL achieve similar throughput performance for the short-term optimization, while DRL is superior for the long-term optimization, especially when the total transmit power is limited.
Persistent Identifier: http://hdl.handle.net/10722/349879
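The abstract notes that DDPG is chosen because the RIS phase shifts are continuous variables. The core of that design is mapping a bounded continuous actor output to unit-modulus reflection coefficients. A minimal, hypothetical sketch of that mapping follows; the channel models, dimensions, and the [-1, 1] to [0, 2*pi) mapping are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch: mapping a DDPG actor's continuous output to RIS
# phase shifts and evaluating the resulting effective channel gain.
# All names, channel models, and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # RIS elements (paper varies 8..64)

# Direct, BS->RIS, and RIS->device channels (random placeholders).
h_d = rng.normal() + 1j * rng.normal()            # direct BS->device link
g = rng.normal(size=N) + 1j * rng.normal(size=N)  # BS -> RIS
h_r = rng.normal(size=N) + 1j * rng.normal(size=N)  # RIS -> device

def effective_gain(actor_output):
    """Map a continuous action in [-1, 1]^N to unit-modulus phase
    shifts and return |h_d + h_r^H diag(e^{j*theta}) g|^2."""
    theta = (actor_output + 1.0) * np.pi          # [-1, 1] -> [0, 2*pi)
    phase = np.exp(1j * theta)                    # unit-modulus elements
    return np.abs(h_d + np.sum(np.conj(h_r) * phase * g)) ** 2

# A random action, as an untrained actor might produce.
action = rng.uniform(-1.0, 1.0, size=N)
print(float(effective_gain(action)))
```

Because the action space is continuous, value-based methods such as DQN would require discretizing the phases; an actor-critic method like DDPG can instead output the phase vector directly, which is the motivation the abstract gives for its use.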


DC Field: Value
dc.contributor.author: Zou, Yixuan
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Mu, Xidong
dc.contributor.author: Zhang, Xingqi
dc.contributor.author: Liu, Yue
dc.contributor.author: Yuen, Chau
dc.date.accessioned: 2024-10-17T07:01:35Z
dc.date.available: 2024-10-17T07:01:35Z
dc.date.issued: 2023
dc.identifier.citation: IEEE Internet of Things Journal, 2023, v. 10, n. 22, p. 19427-19440
dc.identifier.uri: http://hdl.handle.net/10722/349879
dc.description.abstract: A reconfigurable intelligent surface (RIS)-assisted downlink nonorthogonal multiple access (NOMA) Internet of Things (IoT) network is proposed, where a Quality-of-Service (QoS)-based NOMA clustering scheme is conceived to effectively utilize the limited wireless resources among IoT devices. A throughput maximization problem is formulated by jointly optimizing the phase shifts of the RIS and the power allocation of the base station (BS) from both short-term and long-term perspectives. We aim to investigate and compare the performance of deep learning (DL) and deep reinforcement learning (DRL) algorithms for solving the formulated problems. In particular, the DL method utilizes model-agnostic meta-learning (MAML) to enhance the generalization capability of the neural network and to accelerate the convergence rate. For the DRL method, the deep deterministic policy gradient (DDPG) algorithm is employed to handle the continuous phase-shift variables. It is shown that the DL method focuses only on maximizing the instantaneous throughput, whereas the DRL method can coordinate power consumption across time slots to maximize the long-term throughput. Numerical results demonstrate that: 1) the proposed QoS-based NOMA clustering scheme achieves higher IoT throughput than the conventional channel-based scheme; 2) the implementation of RISs yields an approximately 5%-25% throughput gain as the number of RIS elements increases from 8 to 64; and 3) DL and DRL achieve similar throughput performance for the short-term optimization, while DRL is superior for the long-term optimization, especially when the total transmit power is limited.
dc.language: eng
dc.relation.ispartof: IEEE Internet of Things Journal
dc.subject: Deep learning (DL)
dc.subject: deep reinforcement learning (DRL)
dc.subject: Internet of Things (IoT) networks
dc.subject: nonorthogonal multiple access (NOMA)
dc.subject: reconfigurable intelligent surfaces (RISs)
dc.title: Machine Learning in RIS-Assisted NOMA IoT Networks
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/JIOT.2023.3245288
dc.identifier.scopus: eid_2-s2.0-85149425937
dc.identifier.volume: 10
dc.identifier.issue: 22
dc.identifier.spage: 19427
dc.identifier.epage: 19440
dc.identifier.eissn: 2327-4662
