Links for fulltext
(May Require Subscription)
- Publisher Website (DOI): https://doi.org/10.1109/TPWRS.2024.3469132
- Scopus: eid_2-s2.0-85205285027
Citations:
- Scopus: 0

Appears in Collections:
Article: Distributed Attention-Enabled Multi-Agent Reinforcement Learning Based Frequency Regulation of Power Systems
| Title | Distributed Attention-Enabled Multi-Agent Reinforcement Learning Based Frequency Regulation of Power Systems |
|---|---|
| Authors | Zhao, Yunzheng; Liu, Tao; Hill, David J. |
| Keywords | Distributed attention-enabled multi-agent reinforcement learning; frequency regulation |
| Issue Date | 1-Sep-2024 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Power Systems, 2024, v. 40, n. 3, p. 2427-2437 |
| Abstract | This paper develops a new distributed attention-enabled multi-agent reinforcement learning method for frequency regulation of power systems. Specifically, the controller of each generator is modelled as an agent, and the reward and observation are designed based on the characteristics of power systems. All the agents learn their own control policies in the offline training phase and generate frequency control signals in the online execution phase. The target of the proposed algorithm is to conduct both offline training and online frequency control in a distributed way. To achieve this goal, two distributed information-sharing mechanisms are proposed based on the different global information to be discovered. First, a consensus-based reward-sharing mechanism is designed to estimate the globally averaged reward. Second, a distributed observation-sharing scheme is developed to discover the global observation information. Furthermore, the attention strategy is embedded in the observation-sharing scheme to help agents adaptively adjust the importance of observations from different neighbors. With these two mechanisms, a new distributed attention-enabled proximal policy optimization (DAPPO) based method is proposed to achieve model-free frequency control. Simulation results on the IEEE 39-bus system and the NPCC 140-bus system demonstrate that the proposed DAPPO achieves stable offline training and effective online frequency control. |
| Persistent Identifier | http://hdl.handle.net/10722/362817 |
| ISSN | 0885-8950 |
| 2023 Impact Factor | 6.5 |
| 2023 SCImago Journal Rankings | 3.827 |
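
The consensus-based reward-sharing mechanism described in the abstract estimates the globally averaged reward through repeated local exchanges between neighbouring agents. The following is a minimal sketch of such distributed averaging, assuming Metropolis mixing weights; it is illustrative only, not the authors' implementation, and all names are hypothetical.

```python
# Hypothetical sketch of consensus-based reward averaging (illustrative, not the paper's code).
# Each agent i starts from its local reward r_i and repeatedly mixes values with its
# neighbours using doubly stochastic Metropolis weights, converging to the global mean.
import numpy as np

def consensus_average(local_rewards, adjacency, num_iterations=50):
    """Estimate the globally averaged reward via iterative neighbour mixing."""
    n = len(local_rewards)
    degrees = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j] and i != j:
                W[i, j] = 1.0 / (1.0 + max(degrees[i], degrees[j]))
        W[i, i] = 1.0 - W[i].sum()  # rows (and columns) sum to one

    x = np.array(local_rewards, dtype=float)
    for _ in range(num_iterations):
        x = W @ x  # each agent only uses values received from its neighbours
    return x       # every entry approaches mean(local_rewards)

# Toy example: 4 agents on a line graph; all entries converge to 2.5.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(consensus_average([1.0, 2.0, 3.0, 4.0], A))
```
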
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Zhao, Yunzheng | - |
| dc.contributor.author | Liu, Tao | - |
| dc.contributor.author | Hill, David J. | - |
| dc.date.accessioned | 2025-10-01T00:35:27Z | - |
| dc.date.available | 2025-10-01T00:35:27Z | - |
| dc.date.issued | 2024-09-01 | - |
| dc.identifier.citation | IEEE Transactions on Power Systems, 2024, v. 40, n. 3, p. 2427-2437 | - |
| dc.identifier.issn | 0885-8950 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362817 | - |
| dc.description.abstract | This paper develops a new distributed attention-enabled multi-agent reinforcement learning method for frequency regulation of power systems. Specifically, the controller of each generator is modelled as an agent, and the reward and observation are designed based on the characteristics of power systems. All the agents learn their own control policies in the offline training phase and generate frequency control signals in the online execution phase. The target of the proposed algorithm is to conduct both offline training and online frequency control in a distributed way. To achieve this goal, two distributed information-sharing mechanisms are proposed based on the different global information to be discovered. First, a consensus-based reward-sharing mechanism is designed to estimate the globally averaged reward. Second, a distributed observation-sharing scheme is developed to discover the global observation information. Furthermore, the attention strategy is embedded in the observation-sharing scheme to help agents adaptively adjust the importance of observations from different neighbors. With these two mechanisms, a new distributed attention-enabled proximal policy optimization (DAPPO) based method is proposed to achieve model-free frequency control. Simulation results on the IEEE 39-bus system and the NPCC 140-bus system demonstrate that the proposed DAPPO achieves stable offline training and effective online frequency control. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Power Systems | - |
| dc.subject | Distributed attention-enabled multi-agent reinforcement learning | - |
| dc.subject | frequency regulation | - |
| dc.title | Distributed Attention-Enabled Multi-Agent Reinforcement Learning Based Frequency Regulation of Power Systems | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TPWRS.2024.3469132 | - |
| dc.identifier.scopus | eid_2-s2.0-85205285027 | - |
| dc.identifier.volume | 40 | - |
| dc.identifier.issue | 3 | - |
| dc.identifier.spage | 2427 | - |
| dc.identifier.epage | 2437 | - |
| dc.identifier.eissn | 1558-0679 | - |
| dc.identifier.issnl | 0885-8950 | - |
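
The abstract also describes an attention strategy that lets each agent adaptively weight the observations shared by different neighbours. Below is a minimal sketch of one generic form of scaled dot-product attention over neighbour observations; the projection matrices W_q, W_k, W_v and all function names are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of attention-weighted observation sharing (illustrative only).
import numpy as np

def attention_aggregate(own_obs, neighbor_obs, W_q, W_k, W_v):
    """Score neighbours' observations against the agent's own and form a weighted summary."""
    query = W_q @ own_obs                               # what this agent "asks for"
    keys = np.stack([W_k @ o for o in neighbor_obs])    # one key per neighbour
    values = np.stack([W_v @ o for o in neighbor_obs])  # one value per neighbour
    scores = keys @ query / np.sqrt(len(query))         # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                            # softmax attention weights
    return weights @ values                             # attention-weighted summary

# Toy example: 3 neighbours, 4-dimensional observations, 2-dimensional embeddings.
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((2, 4)) for _ in range(3))
own = rng.standard_normal(4)
neighbors = [rng.standard_normal(4) for _ in range(3)]
print(attention_aggregate(own, neighbors, W_q, W_k, W_v))
```
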
