Article: Mobile Reconfigurable Intelligent Surfaces for NOMA Networks: Federated Learning Approaches

Title: Mobile Reconfigurable Intelligent Surfaces for NOMA Networks: Federated Learning Approaches
Authors: Zhong, Ruikang; Liu, Xiao; Liu, Yuanwei; Chen, Yue; Han, Zhu
Keywords: Deep reinforcement learning (DRL); federated learning (FL); intelligent reflecting surfaces (IRSs); non-orthogonal multiple access (NOMA); reconfigurable intelligent surfaces (RIS); resource management
Issue Date: 2022
Citation: IEEE Transactions on Wireless Communications, 2022, v. 21, n. 11, p. 10020-10034
Abstract: A novel framework of reconfigurable intelligent surface (RIS)-enhanced indoor wireless networks is proposed, where an RIS mounted on a robot is invoked to enable mobility of the RIS and to enhance the service quality for mobile users. Meanwhile, non-orthogonal multiple access (NOMA) techniques are adopted to further increase spectrum efficiency, since RISs are capable of providing NOMA with artificially controlled channel conditions, which can be regarded as a beneficial operating condition for obtaining NOMA gains. To optimize the sum rate of all users, a deep deterministic policy gradient (DDPG) algorithm is invoked to optimize the deployment and phase shifts of the mobile RIS as well as the power allocation policy. To improve the efficiency and effectiveness of training the DDPG agents, a federated learning (FL) concept is adopted that enables multiple agents to simultaneously explore similar environments and exchange experiences. We also prove that, under the same random exploration policy, the FL-armed deep reinforcement learning (DRL) agents can theoretically obtain a reward gain compared to independent agents. Our simulation results indicate that the mobile RIS scheme significantly outperforms the fixed RIS paradigm, providing about a threefold data rate gain. Moreover, the NOMA scheme achieves a 42% sum-rate gain over the OMA scheme. Finally, the multi-cell simulation shows that the FL-enhanced DDPG algorithm achieves a superior convergence rate and optimization performance compared to the independent training framework.
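The FL-enhanced DRL training described in the abstract — multiple DDPG agents exploring similar environments, with their network parameters periodically aggregated and broadcast back — can be sketched as a FedAvg-style parameter average. This is a minimal illustrative sketch, not the paper's implementation: the tiny two-layer linear "actors", the update magnitudes, and the aggregation interval are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def federated_average(param_sets):
    """FedAvg-style aggregation: element-wise mean of each layer across agents."""
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*param_sets)]

# Three hypothetical DDPG actors, each a tiny two-layer linear policy
# (weights only, for illustration).
agents = [[rng.normal(size=(4, 8)), rng.normal(size=(8, 2))] for _ in range(3)]

# Each agent explores its own (similar) environment and applies local updates ...
for params in agents:
    for layer in params:
        layer += 0.01 * rng.normal(size=layer.shape)  # stand-in for a gradient step

# ... then the server averages the parameters and broadcasts the global model
# back to every agent, so experience gathered by one agent benefits all.
global_model = federated_average(agents)
agents = [[layer.copy() for layer in global_model] for _ in agents]
```

Averaging parameters rather than sharing raw replay buffers is what lets the agents "exchange experiences" without transmitting their local observations, which is the usual motivation for the FL formulation.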
Persistent Identifier: http://hdl.handle.net/10722/349741
ISSN: 1536-1276
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 5.371


DC Field: Value
dc.contributor.author: Zhong, Ruikang
dc.contributor.author: Liu, Xiao
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Chen, Yue
dc.contributor.author: Han, Zhu
dc.date.accessioned: 2024-10-17T07:00:30Z
dc.date.available: 2024-10-17T07:00:30Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Wireless Communications, 2022, v. 21, n. 11, p. 10020-10034
dc.identifier.issn: 1536-1276
dc.identifier.uri: http://hdl.handle.net/10722/349741
dc.description.abstract: A novel framework of reconfigurable intelligent surface (RIS)-enhanced indoor wireless networks is proposed, where an RIS mounted on a robot is invoked to enable mobility of the RIS and to enhance the service quality for mobile users. Meanwhile, non-orthogonal multiple access (NOMA) techniques are adopted to further increase spectrum efficiency, since RISs are capable of providing NOMA with artificially controlled channel conditions, which can be regarded as a beneficial operating condition for obtaining NOMA gains. To optimize the sum rate of all users, a deep deterministic policy gradient (DDPG) algorithm is invoked to optimize the deployment and phase shifts of the mobile RIS as well as the power allocation policy. To improve the efficiency and effectiveness of training the DDPG agents, a federated learning (FL) concept is adopted that enables multiple agents to simultaneously explore similar environments and exchange experiences. We also prove that, under the same random exploration policy, the FL-armed deep reinforcement learning (DRL) agents can theoretically obtain a reward gain compared to independent agents. Our simulation results indicate that the mobile RIS scheme significantly outperforms the fixed RIS paradigm, providing about a threefold data rate gain. Moreover, the NOMA scheme achieves a 42% sum-rate gain over the OMA scheme. Finally, the multi-cell simulation shows that the FL-enhanced DDPG algorithm achieves a superior convergence rate and optimization performance compared to the independent training framework.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Wireless Communications
dc.subject: Deep reinforcement learning (DRL)
dc.subject: federated learning (FL)
dc.subject: intelligent reflecting surfaces (IRSs)
dc.subject: non-orthogonal multiple access (NOMA)
dc.subject: reconfigurable intelligent surfaces (RIS)
dc.subject: resource management
dc.title: Mobile Reconfigurable Intelligent Surfaces for NOMA Networks: Federated Learning Approaches
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TWC.2022.3181747
dc.identifier.scopus: eid_2-s2.0-85132775913
dc.identifier.volume: 21
dc.identifier.issue: 11
dc.identifier.spage: 10020
dc.identifier.epage: 10034
dc.identifier.eissn: 1558-2248
