Conference Paper: Flow-based Recurrent Belief State Learning for POMDPs
Title | Flow-based Recurrent Belief State Learning for POMDPs |
---|---|
Authors | Chen, X; Mu, Y; Luo, P; Chen, J |
Issue Date | 2022 |
Publisher | Absci AI Research (AAIR) Lab. |
Citation | 39th International Conference on Machine Learning (ICML), Baltimore, Maryland, USA, 17-23 July 2022. In Proceedings of the 39th International Conference on Machine Learning, v. 162, p. 3444-3468 |
Abstract | The Partially Observable Markov Decision Process (POMDP) provides a principled and generic framework for modeling real-world sequential decision-making processes, yet it remains largely unsolved, especially for high-dimensional continuous spaces and unknown models. The main challenge lies in how to accurately obtain the belief state, the probability distribution over the unobservable environment states given historical information. Accurately calculating this belief state is a precondition for obtaining an optimal policy for POMDPs. Recent advances in deep learning show great potential for learning good belief states. However, existing methods can only learn approximate distributions with limited flexibility. In this paper, we introduce the **F**l**O**w-based **R**ecurrent **BE**lief **S**tate model (FORBES), which incorporates normalizing flows into variational inference to learn general continuous belief states for POMDPs. Furthermore, we show that the learned belief states can be plugged into downstream RL algorithms to improve performance. In experiments, we show that our method successfully captures complex belief states that enable multi-modal predictions as well as high-quality reconstructions, and results on challenging visual-motor control tasks show that it achieves superior performance and sample efficiency. |
Persistent Identifier | http://hdl.handle.net/10722/315551 |
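The abstract outlines the core pattern behind FORBES: a recurrent model summarizes the action-observation history, and normalizing flows reshape a simple Gaussian posterior into a flexible belief distribution. As an illustration only, here is a minimal PyTorch sketch of that pattern. It is not the authors' implementation; every name and dimension (`PlanarFlow`, `FlowBeliefModel`, `state_dim`, etc.) is a hypothetical choice, and a planar flow merely stands in for whatever flow family the paper actually uses.

```python
import math

import torch
import torch.nn as nn


class PlanarFlow(nn.Module):
    """One planar flow layer: f(z) = z + u * tanh(w.z + b)."""

    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # z: (batch, dim)
        lin = z @ self.w + self.b                          # (batch,)
        f = z + self.u * torch.tanh(lin).unsqueeze(-1)     # transformed sample
        # log|det df/dz| = log|1 + u.psi|, where psi = (1 - tanh^2) * w
        psi = (1.0 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log(torch.abs(1.0 + psi @ self.u) + 1e-8)
        return f, log_det


class FlowBeliefModel(nn.Module):
    """Hypothetical recurrent belief model in the spirit of the abstract:
    a GRU summarizes the action-observation history, a diagonal-Gaussian
    base posterior is read from its hidden state, and a stack of flows
    turns base samples into samples from a more flexible belief."""

    def __init__(self, obs_dim, act_dim, hidden_dim=128, state_dim=16, n_flows=4):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim + act_dim, hidden_dim)
        self.base = nn.Linear(hidden_dim, 2 * state_dim)   # -> mean, log-std
        self.flows = nn.ModuleList(PlanarFlow(state_dim) for _ in range(n_flows))

    def step(self, h, obs, act):
        """Fold one (observation, action) pair into the history summary."""
        return self.rnn(torch.cat([obs, act], dim=-1), h)

    def belief_sample(self, h):
        """Draw a belief-state sample and its log-density via the flow."""
        mean, log_std = self.base(h).chunk(2, dim=-1)
        eps = torch.randn_like(mean)
        z = mean + log_std.exp() * eps                     # reparameterized base sample
        # log-density under the diagonal-Gaussian base distribution
        log_q = (-0.5 * eps ** 2 - log_std - 0.5 * math.log(2 * math.pi)).sum(-1)
        for flow in self.flows:
            z, log_det = flow(z)
            log_q = log_q - log_det                        # change-of-variables correction
        return z, log_q


# Example: roll the belief forward one step and sample from it.
model = FlowBeliefModel(obs_dim=64, act_dim=4)
h = torch.zeros(8, 128)                                    # initial history summary
h = model.step(h, torch.randn(8, 64), torch.randn(8, 4))
z, log_q = model.belief_sample(h)                          # shapes: (8, 16), (8,)
```

A training loop would use `log_q` inside a variational (ELBO-style) objective and feed `z` to a downstream RL policy, as the abstract describes; both are omitted from this sketch.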
DC Field | Value | Language |
---|---|---
dc.contributor.author | Chen, X | - |
dc.contributor.author | Mu, Y | - |
dc.contributor.author | Luo, P | - |
dc.contributor.author | Chen, J | - |
dc.date.accessioned | 2022-08-19T09:00:00Z | - |
dc.date.available | 2022-08-19T09:00:00Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | 39th International Conference on Machine Learning (ICML), Baltimore, Maryland, USA, 17-23 July 2022. In Proceedings of the 39th International Conference on Machine Learning, v. 162, p. 3444-3468 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315551 | - |
dc.description.abstract | The Partially Observable Markov Decision Process (POMDP) provides a principled and generic framework for modeling real-world sequential decision-making processes, yet it remains largely unsolved, especially for high-dimensional continuous spaces and unknown models. The main challenge lies in how to accurately obtain the belief state, the probability distribution over the unobservable environment states given historical information. Accurately calculating this belief state is a precondition for obtaining an optimal policy for POMDPs. Recent advances in deep learning show great potential for learning good belief states. However, existing methods can only learn approximate distributions with limited flexibility. In this paper, we introduce the **F**l**O**w-based **R**ecurrent **BE**lief **S**tate model (FORBES), which incorporates normalizing flows into variational inference to learn general continuous belief states for POMDPs. Furthermore, we show that the learned belief states can be plugged into downstream RL algorithms to improve performance. In experiments, we show that our method successfully captures complex belief states that enable multi-modal predictions as well as high-quality reconstructions, and results on challenging visual-motor control tasks show that it achieves superior performance and sample efficiency. | -
dc.language | eng | - |
dc.publisher | Absci AI Research (AAIR) Lab. | - |
dc.relation.ispartof | Proceedings of the 39th International Conference on Machine Learning | - |
dc.title | Flow-based Recurrent Belief State Learning for POMDPs | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.identifier.hkuros | 335563 | - |
dc.identifier.volume | 162 | - |
dc.identifier.spage | 3444 | - |
dc.identifier.epage | 3468 | - |
dc.publisher.place | United States | - |