Links for fulltext (may require subscription):
- Publisher website: DOI 10.1109/JIOT.2023.3245288
- Scopus: eid_2-s2.0-85149425937
Citations:
- Scopus: 0
Article: Machine Learning in RIS-Assisted NOMA IoT Networks
Title | Machine Learning in RIS-Assisted NOMA IoT Networks |
---|---|
Authors | Zou, Yixuan; Liu, Yuanwei; Mu, Xidong; Zhang, Xingqi; Liu, Yue; Yuen, Chau |
Keywords | Deep learning (DL); deep reinforcement learning (DRL); Internet of Things (IoT) networks; nonorthogonal multiple access (NOMA); reconfigurable intelligent surfaces (RISs) |
Issue Date | 2023 |
Citation | IEEE Internet of Things Journal, 2023, v. 10, n. 22, p. 19427-19440 |
Abstract | A reconfigurable intelligent surface (RIS)-assisted downlink nonorthogonal multiple access (NOMA) Internet of Things (IoT) network is proposed, where a Quality-of-Service (QoS)-based NOMA clustering scheme is conceived to effectively utilize the limited wireless resources among IoT devices. A throughput maximization problem is formulated by jointly optimizing the phase shifts of the RIS and the power allocation of the base station (BS) from the short-term and long-term perspectives. We aim to investigate and compare the performance of deep learning (DL) and deep reinforcement learning (DRL) algorithms for solving the formulated problems. In particular, the DL method utilizes model-agnostic meta-learning (MAML) to enhance the generalization capability of the neural network and to accelerate the convergence rate. For the DRL method, the deep deterministic policy gradient (DDPG) algorithm is employed to incorporate continuous phase-shift variables. It is shown that the DL method only focuses on the maximization of the instantaneous throughput, whereas the DRL method can coordinate the power consumption over different time slots to maximize the long-term throughput. Numerical results demonstrate that: 1) the proposed QoS-based NOMA clustering scheme achieves higher IoT throughput than the conventional channel-based scheme; 2) the implementation of RISs yields an approximately 5%-25% throughput gain as the number of RIS elements increases from 8 to 64; and 3) DL and DRL achieve similar throughput performance for the short-term optimization, while DRL is superior for the long-term optimization, especially when the total transmit power is limited. |
Persistent Identifier | http://hdl.handle.net/10722/349879 |
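The abstract's optimization acts on two sets of variables: the continuous RIS phase shifts and the BS power allocation within a NOMA cluster. The sketch below is a minimal illustration of how these variables determine the objective; all dimensions, channel models, the power split, and the random-search baseline are illustrative assumptions, not taken from the paper. It computes the sum throughput of a two-device downlink NOMA cluster under successive interference cancellation (SIC) for a given phase/power choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): N RIS elements, 2 devices per cluster.
N = 8                      # number of RIS elements
P_total = 1.0              # BS transmit power budget (normalized)
noise = 1e-3               # receiver noise power

# Random single-antenna channels: BS->RIS (g), RIS->device k (r), BS->device k direct (d).
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
r = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
d = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)

def cluster_throughput(theta, p):
    """Sum rate of a 2-device downlink NOMA cluster given RIS phases theta
    and power split p = (p_weak, p_strong), p_weak + p_strong <= P_total."""
    phase = np.exp(1j * theta)                 # unit-modulus RIS reflection coefficients
    h = np.abs(d + r @ (phase * g)) ** 2       # effective channel gains |h_k|^2
    weak, strong = np.argsort(h)               # order devices by channel strength
    # Weak device treats the strong device's signal as interference.
    sinr_weak = p[0] * h[weak] / (p[1] * h[weak] + noise)
    # Strong device cancels the weak device's signal via SIC.
    sinr_strong = p[1] * h[strong] / noise
    return np.log2(1 + sinr_weak) + np.log2(1 + sinr_strong)

# Random search over phases as a crude stand-in for the learned DL/DRL policies.
best = max(cluster_throughput(rng.uniform(0, 2 * np.pi, N), (0.8, 0.2))
           for _ in range(200))
print(f"best throughput found: {best:.2f} bit/s/Hz")
```

A trained DL or DRL policy would replace the random search, mapping observed channel states to phase and power choices that maximize this objective.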
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zou, Yixuan | - |
dc.contributor.author | Liu, Yuanwei | - |
dc.contributor.author | Mu, Xidong | - |
dc.contributor.author | Zhang, Xingqi | - |
dc.contributor.author | Liu, Yue | - |
dc.contributor.author | Yuen, Chau | - |
dc.date.accessioned | 2024-10-17T07:01:35Z | - |
dc.date.available | 2024-10-17T07:01:35Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | IEEE Internet of Things Journal, 2023, v. 10, n. 22, p. 19427-19440 | - |
dc.identifier.uri | http://hdl.handle.net/10722/349879 | - |
dc.description.abstract | A reconfigurable intelligent surface (RIS)-assisted downlink nonorthogonal multiple access (NOMA) Internet of Things (IoT) network is proposed, where a Quality-of-Service (QoS)-based NOMA clustering scheme is conceived to effectively utilize the limited wireless resources among IoT devices. A throughput maximization problem is formulated by jointly optimizing the phase shifts of the RIS and the power allocation of the base station (BS) from the short-term and long-term perspectives. We aim to investigate and compare the performance of deep learning (DL) and deep reinforcement learning (DRL) algorithms for solving the formulated problems. In particular, the DL method utilizes model-agnostic meta-learning (MAML) to enhance the generalization capability of the neural network and to accelerate the convergence rate. For the DRL method, the deep deterministic policy gradient (DDPG) algorithm is employed to incorporate continuous phase-shift variables. It is shown that the DL method only focuses on the maximization of the instantaneous throughput, whereas the DRL method can coordinate the power consumption over different time slots to maximize the long-term throughput. Numerical results demonstrate that: 1) the proposed QoS-based NOMA clustering scheme achieves higher IoT throughput than the conventional channel-based scheme; 2) the implementation of RISs yields an approximately 5%-25% throughput gain as the number of RIS elements increases from 8 to 64; and 3) DL and DRL achieve similar throughput performance for the short-term optimization, while DRL is superior for the long-term optimization, especially when the total transmit power is limited. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Internet of Things Journal | - |
dc.subject | Deep learning (DL) | - |
dc.subject | deep reinforcement learning (DRL) | - |
dc.subject | Internet of Things (IoT) networks | - |
dc.subject | nonorthogonal multiple access (NOMA) | - |
dc.subject | reconfigurable intelligent surfaces (RISs) | - |
dc.title | Machine Learning in RIS-Assisted NOMA IoT Networks | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/JIOT.2023.3245288 | - |
dc.identifier.scopus | eid_2-s2.0-85149425937 | - |
dc.identifier.volume | 10 | - |
dc.identifier.issue | 22 | - |
dc.identifier.spage | 19427 | - |
dc.identifier.epage | 19440 | - |
dc.identifier.eissn | 2327-4662 | - |
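The abstract notes that DDPG is chosen because it handles continuous phase-shift variables. One common way this is realized (shown below as an illustrative assumption; the paper's actual actor architecture and mapping are not specified here) is to squash the actor network's raw outputs into the feasible region: phases onto [0, 2π] and transmit powers onto a scaled simplex:

```python
import numpy as np

def actions_to_variables(a, P_total=1.0):
    """Map a raw actor output a in [-1, 1]^(N+K) to the decision variables:
    N continuous RIS phase shifts in [0, 2*pi] and a K-way BS power
    allocation summing to P_total. Illustrative mapping, not from the paper;
    assumes K = 2 devices per cluster."""
    a = np.asarray(a, dtype=float)
    N = a.size - 2                       # last 2 entries drive the power split
    theta = (a[:N] + 1.0) * np.pi        # affine map [-1, 1] -> [0, 2*pi]
    w = np.exp(a[N:])                    # softmax-style positive weights
    power = P_total * w / w.sum()        # power allocation on the simplex
    return theta, power

# A zero action vector maps to mid-range phases and an even power split.
theta, power = actions_to_variables(np.zeros(10))
```

Because every output of this mapping is feasible by construction, the DDPG critic never has to learn constraint boundaries, only the throughput landscape.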