Article: Learning-based stabilization of Markov jump linear systems
Title | Learning-based stabilization of Markov jump linear systems |
---|---|
Authors | Liu, Jason JR; Ogura, Masaki; Li, Qiyu; Lam, James |
Keywords | Markov jump linear systems; Stabilization; Stochastic gradient descent; Stochastic systems |
Issue Date | 14-Jun-2024 |
Publisher | Elsevier |
Citation | Neurocomputing, 2024, v. 586 |
Abstract | In this paper, we explore the stabilization problem of discrete-time Markov jump linear systems from a new perspective. We establish a novel learning-based framework that combines control theory and learning methods to design stabilizing feedback gains. Firstly, we reformulate the stabilization problems for discrete-time Markov jump linear systems into finite-time counterparts. Subsequently, leveraging techniques from the field of learning, we effectively and efficiently solve the finite-time stabilization problems. We systematically investigate two typical stabilization problems of discrete-time Markov jump linear systems within the proposed framework, namely the detector-based feedback stabilization and the static output feedback stabilization problems. Extensive simulation on various numerical examples demonstrates the advantages of our approach over several existing methods for discrete-time Markov jump linear systems. |
Persistent Identifier | http://hdl.handle.net/10722/351785 |
ISSN | 0925-2312 (2023 Impact Factor: 5.5; 2023 SCImago Journal Rankings: 1.815) |
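The abstract above describes reformulating the stabilization of a discrete-time Markov jump linear system as a finite-time problem and then optimizing feedback gains with learning techniques such as stochastic gradient descent. The sketch below is not the paper's algorithm; it is a minimal zeroth-order (gradient-free) SGD illustration on a made-up two-mode system, where the system matrices, transition matrix, horizon, and step sizes are all hypothetical placeholders chosen only to make the example self-contained.

```python
# Minimal illustrative sketch (not the authors' implementation): search for
# mode-dependent feedback gains K_i of a discrete-time Markov jump linear
# system x_{k+1} = (A_{r_k} + B_{r_k} K_{r_k}) x_k by zeroth-order stochastic
# gradient descent on a sampled finite-horizon cost. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# Two-mode toy system (hypothetical data).
A = [np.array([[1.2, 0.5], [0.0, 0.9]]),
     np.array([[0.8, -0.3], [0.4, 1.1]])]
B = [np.array([[1.0], [0.0]]),
     np.array([[0.0], [1.0]])]
P = np.array([[0.7, 0.3],       # Markov transition matrix of the mode process
              [0.4, 0.6]])
n_modes, n, m = 2, 2, 1
T = 20                          # finite horizon used as a surrogate objective

def sample_cost(K, n_traj=32):
    """Monte Carlo estimate of E[sum_k ||x_k||^2] under gains K over horizon T."""
    total = 0.0
    for _ in range(n_traj):
        x = rng.standard_normal(n)
        r = rng.integers(n_modes)
        for _ in range(T):
            total += x @ x
            x = (A[r] + B[r] @ K[r]) @ x
            r = rng.choice(n_modes, p=P[r])
    return total / n_traj

# Zeroth-order (two-point, smoothed) stochastic gradient descent on the gains.
K = [np.zeros((m, n)) for _ in range(n_modes)]
step, sigma = 0.02, 0.1
for it in range(500):
    # Random perturbation of all gains for a symmetric gradient estimate.
    U = [sigma * rng.standard_normal((m, n)) for _ in range(n_modes)]
    c_plus = sample_cost([K[i] + U[i] for i in range(n_modes)])
    c_minus = sample_cost([K[i] - U[i] for i in range(n_modes)])
    for i in range(n_modes):
        g = (c_plus - c_minus) / (2 * sigma**2) * U[i]
        g = g / max(1.0, np.linalg.norm(g))  # crude clipping to keep updates bounded
        K[i] -= step * g

print("learned gains:", K)
print("closed-loop spectral radii per mode:",
      [np.max(np.abs(np.linalg.eigvals(A[i] + B[i] @ K[i]))) for i in range(n_modes)])
```

The per-mode spectral radii printed at the end are only a rough sanity check: mean-square stability of a Markov jump linear system depends jointly on the mode dynamics and the transition probabilities, which is precisely the setting the paper's learning-based framework targets.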
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Jason JR | - |
dc.contributor.author | Ogura, Masaki | - |
dc.contributor.author | Li, Qiyu | - |
dc.contributor.author | Lam, James | - |
dc.date.accessioned | 2024-11-29T00:35:10Z | - |
dc.date.available | 2024-11-29T00:35:10Z | - |
dc.date.issued | 2024-06-14 | - |
dc.identifier.citation | Neurocomputing, 2024, v. 586 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351785 | - |
dc.description.abstract | In this paper, we explore the stabilization problem of discrete-time Markov jump linear systems from a new perspective. We establish a novel learning-based framework that combines control theory and learning methods to design stabilizing feedback gains. Firstly, we reformulate the stabilization problems for discrete-time Markov jump linear systems into finite-time counterparts. Subsequently, leveraging techniques from the field of learning, we effectively and efficiently solve the finite-time stabilization problems. We systematically investigate two typical stabilization problems of discrete-time Markov jump linear systems within the proposed framework, namely the detector-based feedback stabilization and the static output feedback stabilization problems. Extensive simulation on various numerical examples demonstrates the advantages of our approach over several existing methods for discrete-time Markov jump linear systems. | - |
dc.language | eng | - |
dc.publisher | Elsevier | - |
dc.relation.ispartof | Neurocomputing | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Markov jump linear systems | - |
dc.subject | Stabilization | - |
dc.subject | Stochastic gradient descent | - |
dc.subject | Stochastic systems | - |
dc.title | Learning-based stabilization of Markov jump linear systems | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.neucom.2024.127618 | - |
dc.identifier.scopus | eid_2-s2.0-85189857546 | - |
dc.identifier.volume | 586 | - |
dc.identifier.eissn | 1872-8286 | - |
dc.identifier.issnl | 0925-2312 | - |