Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/TWC.2022.3181747
- Scopus: eid_2-s2.0-85132775913
Citations:
- Scopus: 0
- Appears in Collections: Article

Mobile Reconfigurable Intelligent Surfaces for NOMA Networks: Federated Learning Approaches
Title | Mobile Reconfigurable Intelligent Surfaces for NOMA Networks: Federated Learning Approaches |
---|---|
Authors | Zhong, Ruikang; Liu, Xiao; Liu, Yuanwei; Chen, Yue; Han, Zhu |
Keywords | Deep reinforcement learning (DRL); federated learning (FL); intelligent reflecting surfaces (IRSs); non-orthogonal multiple access (NOMA); reconfigurable intelligent surfaces (RIS); resource management |
Issue Date | 2022 |
Citation | IEEE Transactions on Wireless Communications, 2022, v. 21, n. 11, p. 10020-10034 |
Abstract | A novel framework of reconfigurable intelligent surface (RIS)-enhanced indoor wireless networks is proposed, in which an RIS mounted on a robot is invoked to enable mobility of the RIS and enhance the service quality for mobile users. Meanwhile, non-orthogonal multiple access (NOMA) techniques are adopted to further increase spectrum efficiency, since RISs can provide NOMA with artificially controlled channel conditions, which can be seen as beneficial operating conditions for obtaining NOMA gains. To maximize the sum rate of all users, a deep deterministic policy gradient (DDPG) algorithm is invoked to optimize the deployment and phase shifts of the mobile RIS as well as the power allocation policy. To improve the efficiency and effectiveness of training the DDPG agents, a federated learning (FL) concept is adopted to enable multiple agents to simultaneously explore similar environments and exchange experiences. We also prove that, under the same random exploration policy, FL-aided deep reinforcement learning (DRL) agents can theoretically obtain a reward gain compared to independent agents. Our simulation results indicate that the mobile RIS scheme significantly outperforms the fixed RIS paradigm, providing about a threefold data rate gain. Moreover, the NOMA scheme achieves a 42% sum-rate gain over the OMA scheme. Finally, the multi-cell simulation shows that the FL-enhanced DDPG algorithm has a superior convergence rate and optimization performance compared to the independent training framework. |
Persistent Identifier | http://hdl.handle.net/10722/349741 |
ISSN | 1536-1276 (2023 Impact Factor: 8.9; 2023 SCImago Journal Rankings: 5.371) |
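The FL step described in the abstract, where multiple DDPG agents exploring similar environments periodically exchange experience, can be sketched as federated averaging of the agents' network weights. This is a minimal illustrative sketch, not the paper's implementation: the parameter names, agent count, and equal-weight averaging are all assumptions.

```python
import numpy as np

def federated_average(agent_params):
    """Average each named parameter array across agents (equal-weight FedAvg)."""
    keys = agent_params[0].keys()
    return {k: np.mean([p[k] for p in agent_params], axis=0) for k in keys}

# Hypothetical setup: three DDPG agents, each holding actor/critic weight arrays.
rng = np.random.default_rng(0)
agents = [
    {"actor_w": rng.normal(size=(4, 2)), "critic_w": rng.normal(size=(6, 1))}
    for _ in range(3)
]

# Aggregation round: build a shared global model from the local agents.
global_params = federated_average(agents)

# Each agent resumes local exploration from the shared global model.
for p in agents:
    p.update({k: v.copy() for k, v in global_params.items()})
```

In a full training loop this aggregation would run every few episodes, so that experience gathered in one cell benefits agents in the others, which is the mechanism behind the faster convergence reported in the multi-cell simulation.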
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhong, Ruikang | - |
dc.contributor.author | Liu, Xiao | - |
dc.contributor.author | Liu, Yuanwei | - |
dc.contributor.author | Chen, Yue | - |
dc.contributor.author | Han, Zhu | - |
dc.date.accessioned | 2024-10-17T07:00:30Z | - |
dc.date.available | 2024-10-17T07:00:30Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | IEEE Transactions on Wireless Communications, 2022, v. 21, n. 11, p. 10020-10034 | - |
dc.identifier.issn | 1536-1276 | - |
dc.identifier.uri | http://hdl.handle.net/10722/349741 | - |
dc.description.abstract | A novel framework of reconfigurable intelligent surface (RIS)-enhanced indoor wireless networks is proposed, in which an RIS mounted on a robot is invoked to enable mobility of the RIS and enhance the service quality for mobile users. Meanwhile, non-orthogonal multiple access (NOMA) techniques are adopted to further increase spectrum efficiency, since RISs can provide NOMA with artificially controlled channel conditions, which can be seen as beneficial operating conditions for obtaining NOMA gains. To maximize the sum rate of all users, a deep deterministic policy gradient (DDPG) algorithm is invoked to optimize the deployment and phase shifts of the mobile RIS as well as the power allocation policy. To improve the efficiency and effectiveness of training the DDPG agents, a federated learning (FL) concept is adopted to enable multiple agents to simultaneously explore similar environments and exchange experiences. We also prove that, under the same random exploration policy, FL-aided deep reinforcement learning (DRL) agents can theoretically obtain a reward gain compared to independent agents. Our simulation results indicate that the mobile RIS scheme significantly outperforms the fixed RIS paradigm, providing about a threefold data rate gain. Moreover, the NOMA scheme achieves a 42% sum-rate gain over the OMA scheme. Finally, the multi-cell simulation shows that the FL-enhanced DDPG algorithm has a superior convergence rate and optimization performance compared to the independent training framework. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Wireless Communications | - |
dc.subject | Deep reinforcement learning (DRL) | - |
dc.subject | federated learning (FL) | - |
dc.subject | intelligent reflecting surfaces (IRSs) | - |
dc.subject | non-orthogonal multiple access (NOMA) | - |
dc.subject | reconfigurable intelligent surfaces (RIS) | - |
dc.subject | resource management | - |
dc.title | Mobile Reconfigurable Intelligent Surfaces for NOMA Networks: Federated Learning Approaches | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TWC.2022.3181747 | - |
dc.identifier.scopus | eid_2-s2.0-85132775913 | - |
dc.identifier.volume | 21 | - |
dc.identifier.issue | 11 | - |
dc.identifier.spage | 10020 | - |
dc.identifier.epage | 10034 | - |
dc.identifier.eissn | 1558-2248 | - |