Article: Safe reinforcement learning for real-time automatic control in a smart energy-hub

Title: Safe reinforcement learning for real-time automatic control in a smart energy-hub
Authors: Qiu, D; Dong, Z; Zhang, X; Wang, Y; Strbac, G
Issue Date: 2022
Publisher: Elsevier. The journal's web site is located at http://www.elsevier.com/locate/apenergy
Citation: Applied Energy, 2022, v. 309, p. 118403
Abstract: Multi-energy systems are receiving special attention from the smart grid community owing to the high flexibility potential of integrating multiple energy carriers. In this regard, the energy hub is known as a flexible and efficient platform that supplies energy demands with an acceptable range of affordability and reliability by relying on various energy production, storage, and conversion facilities. Given the increasing penetration of renewable energy sources to promote a low-carbon energy transition, accurate economic and environmental assessment of an energy hub, along with a real-time automatic energy management scheme, has become a challenging task due to the high variability of renewable energy sources. Furthermore, conventional model-based optimization approaches, which require full knowledge of the employed mathematical operating models and accurate uncertainty distributions, may be impractical for real-world applications. In this context, this paper proposes a model-free safe deep reinforcement learning method for the optimal control of a renewable-based energy hub operating with multiple energy carriers while satisfying the physical constraints of the energy hub operation model. The main objective of this work is to minimize the system's energy cost and carbon emissions by considering various energy components. The proposed deep reinforcement learning method is trained and tested on a real-world dataset to validate its superior performance in reducing energy cost, carbon emissions, and computational time with respect to state-of-the-art deep reinforcement learning and optimization-based approaches. Moreover, the effectiveness of the proposed method in handling model operation constraints is evaluated in both training and test environments. Finally, the generalization performance of the learnt energy management scheme, as well as sensitivity analyses on storage flexibility and carbon price, are examined in the case studies.
Persistent Identifier: http://hdl.handle.net/10722/322126
ISI Accession Number ID: WOS:000817213100001
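The abstract's central safety idea, executing only control actions that respect the hub's physical operating constraints, can be illustrated with a small sketch. The code below is a hypothetical, minimal example and not the paper's actual method: the single-battery model, all constants (P_MAX, CAPACITY, the prices), and the random policy standing in for a trained deep RL agent are assumptions introduced purely for illustration.

```python
# Hypothetical illustration (not the paper's algorithm): a "safety layer"
# that projects a raw RL action onto the feasible set defined by simple
# battery constraints, so every executed action is physically valid.
import numpy as np

P_MAX = 50.0      # max charge/discharge power (kW), assumed
CAPACITY = 200.0  # battery capacity (kWh), assumed
DT = 1.0          # control interval (h)

def project_to_feasible(p_raw: float, soc: float) -> float:
    """Clip a raw battery power action (+ charge, - discharge) so that
    power limits and state-of-charge bounds are never violated."""
    p = np.clip(p_raw, -P_MAX, P_MAX)                   # power rating limits
    p = np.clip(p, (0.0 - soc) / DT, (CAPACITY - soc) / DT)  # keep 0 <= soc <= CAPACITY
    return float(p)

def step_cost(p: float, price: float, carbon_price: float, co2_rate: float) -> float:
    """Energy cost plus priced carbon for grid imports (p > 0 means the
    battery charges from the grid in this toy model)."""
    grid_import = max(p, 0.0) * DT  # kWh drawn from the grid
    return grid_import * (price + carbon_price * co2_rate)

rng = np.random.default_rng(0)
soc, total_cost = 100.0, 0.0
for t in range(24):
    p_raw = rng.normal(0.0, 80.0)        # stand-in for a policy's raw output
    p = project_to_feasible(p_raw, soc)  # safe action actually executed
    soc += p * DT
    total_cost += step_cost(p, price=0.15, carbon_price=0.05, co2_rate=0.4)
print(f"final SoC = {soc:.1f} kWh, total cost = {total_cost:.2f}")
```

The projection (here, simple clipping) guarantees constraint satisfaction at every step regardless of what the learning agent proposes, which is one common way safe RL methods enforce hard physical limits during both training and deployment.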

 

Dublin Core metadata (DC Field: Value):

dc.contributor.author: Qiu, D
dc.contributor.author: Dong, Z
dc.contributor.author: Zhang, X
dc.contributor.author: Wang, Y
dc.contributor.author: Strbac, G
dc.date.accessioned: 2022-11-14T08:14:41Z
dc.date.available: 2022-11-14T08:14:41Z
dc.date.issued: 2022
dc.identifier.citation: Applied Energy, 2022, v. 309, p. 118403
dc.identifier.uri: http://hdl.handle.net/10722/322126
dc.description.abstract: Multi-energy systems are receiving special attention from the smart grid community owing to the high flexibility potential of integrating multiple energy carriers. In this regard, the energy hub is known as a flexible and efficient platform that supplies energy demands with an acceptable range of affordability and reliability by relying on various energy production, storage, and conversion facilities. Given the increasing penetration of renewable energy sources to promote a low-carbon energy transition, accurate economic and environmental assessment of an energy hub, along with a real-time automatic energy management scheme, has become a challenging task due to the high variability of renewable energy sources. Furthermore, conventional model-based optimization approaches, which require full knowledge of the employed mathematical operating models and accurate uncertainty distributions, may be impractical for real-world applications. In this context, this paper proposes a model-free safe deep reinforcement learning method for the optimal control of a renewable-based energy hub operating with multiple energy carriers while satisfying the physical constraints of the energy hub operation model. The main objective of this work is to minimize the system's energy cost and carbon emissions by considering various energy components. The proposed deep reinforcement learning method is trained and tested on a real-world dataset to validate its superior performance in reducing energy cost, carbon emissions, and computational time with respect to state-of-the-art deep reinforcement learning and optimization-based approaches. Moreover, the effectiveness of the proposed method in handling model operation constraints is evaluated in both training and test environments. Finally, the generalization performance of the learnt energy management scheme, as well as sensitivity analyses on storage flexibility and carbon price, are examined in the case studies.
dc.language: eng
dc.publisher: Elsevier. The journal's web site is located at http://www.elsevier.com/locate/apenergy
dc.relation.ispartof: Applied Energy
dc.title: Safe reinforcement learning for real-time automatic control in a smart energy-hub
dc.type: Article
dc.identifier.email: Wang, Y: yiwang@eee.hku.hk
dc.identifier.authority: Wang, Y=rp02900
dc.identifier.doi: 10.1016/j.apenergy.2021.118403
dc.identifier.hkuros: 341369
dc.identifier.volume: 309
dc.identifier.spage: 118403
dc.identifier.epage: 118403
dc.identifier.isi: WOS:000817213100001

This record can be exported via the repository's OAI-PMH interface in XML formats, or to other non-XML formats; a minimal request sketch follows.
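As a hedged illustration of the XML export path, the snippet below sketches an OAI-PMH GetRecord request for this item in unqualified Dublin Core. The endpoint URL and OAI identifier are assumptions inferred from the handle (10722/322126); a real harvester should verify both against the repository's Identify response.

```python
# Hypothetical sketch of harvesting this record over OAI-PMH. The base URL
# and OAI identifier below are assumptions, not confirmed values.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://hub.hku.hk/oai/request"  # assumed endpoint
params = {
    "verb": "GetRecord",
    "metadataPrefix": "oai_dc",                   # unqualified Dublin Core
    "identifier": "oai:hub.hku.hk:10722/322126",  # assumed from the handle
}
with urlopen(f"{BASE}?{urlencode(params)}") as resp:
    print(resp.read().decode("utf-8")[:500])  # first part of the XML record
```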