Links for fulltext (May Require Subscription)
- Publisher Website: 10.1109/CVPR.2019.01024
- Scopus: eid_2-s2.0-85078724907
- WOS: WOS:000542649303062
Conference Paper: Towards natural and accurate future motion prediction of humans and animals
Title | Towards natural and accurate future motion prediction of humans and animals
---|---
Authors | Liu, Zhenguang; Wu, Shuang; Jin, Shuyuan; Liu, Qi; Lu, Shijian; Zimmermann, Roger; Cheng, Li
Keywords | Deep Learning; Motion and Tracking
Issue Date | 2019
Citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, v. 2019-June, p. 9996-10004
Abstract | Anticipating the future motions of 3D articulated objects is challenging due to their non-linear and highly stochastic nature. Current approaches typically represent the skeleton of an articulated object as a set of 3D joints, which unfortunately ignores the relationships between joints and fails to encode fine-grained anatomical constraints. Moreover, conventional recurrent neural networks, such as LSTM and GRU, are employed to model motion contexts, yet these inherently have difficulty capturing long-term dependencies. To address these problems, we propose to explicitly encode anatomical constraints by modeling skeletons with a Lie algebra representation. Importantly, a hierarchical recurrent network structure is developed to simultaneously encode local contexts of individual frames and global contexts of the sequence. We proceed to explore the applications of our approach to several distinct subjects, including humans, fish, and mice. Extensive experiments show that our approach achieves more natural and accurate predictions than state-of-the-art methods.
Persistent Identifier | http://hdl.handle.net/10722/321875
ISSN | 1063-6919 (2023 SCImago Journal Rankings: 10.331)
ISI Accession Number ID | WOS:000542649303062
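
The abstract's central technical idea is to represent each bone of a skeleton in the Lie algebra so(3) (axis-angle parameters) rather than as raw 3D joint coordinates, so that bone lengths stay fixed by construction. Below is a minimal sketch of that representation, assuming a single kinematic chain; the function names, the Rodrigues'-formula exponential map, and the bone-along-x convention are illustrative assumptions of this sketch, not the paper's actual implementation.

```python
import numpy as np

def so3_exp(omega):
    """Map an so(3) axis-angle vector to an SO(3) rotation matrix
    via Rodrigues' formula (the exponential map). Illustrative sketch."""
    theta = np.linalg.norm(omega)
    if theta < 1e-8:
        return np.eye(3)
    axis = omega / theta
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def forward_kinematics(omegas, bone_lengths):
    """Recover 3D joint positions of a chain from per-bone axis-angle
    parameters; rotations compose down the chain, so bone lengths
    (the anatomical constraint) are preserved exactly."""
    joints = [np.zeros(3)]
    R = np.eye(3)
    for omega, length in zip(omegas, bone_lengths):
        R = R @ so3_exp(omega)  # accumulate rotation along the chain
        joints.append(joints[-1] + R @ np.array([length, 0.0, 0.0]))
    return np.stack(joints)

# Hypothetical example: a 3-bone chain with fixed bone lengths.
omegas = [np.array([0.0, 0.0, 0.3]),
          np.array([0.0, 0.2, 0.0]),
          np.array([0.1, 0.0, 0.0])]
print(forward_kinematics(omegas, bone_lengths=[1.0, 0.8, 0.5]))
```

Predicting the axis-angle parameters and recovering joints through forward kinematics is what keeps the skeleton's anatomy intact; a network that regresses raw joint coordinates has no such guarantee.
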
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Zhenguang | - |
dc.contributor.author | Wu, Shuang | - |
dc.contributor.author | Jin, Shuyuan | - |
dc.contributor.author | Liu, Qi | - |
dc.contributor.author | Lu, Shijian | - |
dc.contributor.author | Zimmermann, Roger | - |
dc.contributor.author | Cheng, Li | - |
dc.date.accessioned | 2022-11-03T02:22:03Z | - |
dc.date.available | 2022-11-03T02:22:03Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, v. 2019-June, p. 9996-10004 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321875 | - |
dc.description.abstract | Anticipating the future motions of 3D articulated objects is challenging due to their non-linear and highly stochastic nature. Current approaches typically represent the skeleton of an articulated object as a set of 3D joints, which unfortunately ignores the relationships between joints and fails to encode fine-grained anatomical constraints. Moreover, conventional recurrent neural networks, such as LSTM and GRU, are employed to model motion contexts, yet these inherently have difficulty capturing long-term dependencies. To address these problems, we propose to explicitly encode anatomical constraints by modeling skeletons with a Lie algebra representation. Importantly, a hierarchical recurrent network structure is developed to simultaneously encode local contexts of individual frames and global contexts of the sequence. We proceed to explore the applications of our approach to several distinct subjects, including humans, fish, and mice. Extensive experiments show that our approach achieves more natural and accurate predictions than state-of-the-art methods. | -
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | - |
dc.subject | Deep Learning | - |
dc.subject | Motion and Tracking | - |
dc.title | Towards natural and accurate future motion prediction of humans and animals | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR.2019.01024 | - |
dc.identifier.scopus | eid_2-s2.0-85078724907 | - |
dc.identifier.volume | 2019-June | - |
dc.identifier.spage | 9996 | - |
dc.identifier.epage | 10004 | - |
dc.identifier.isi | WOS:000542649303062 | - |