Article: Investigating Pose Representations and Motion Contexts Modeling for 3D Motion Prediction

Title: Investigating Pose Representations and Motion Contexts Modeling for 3D Motion Prediction
Authors: Liu, Zhenguang; Wu, Shuang; Jin, Shuyuan; Liu, Qi; Ji, Shouling; Lu, Shijian; Cheng, Li
Keywords: Context modeling; Joints; kinematic chain; Kinematics; Mice; motion context; Motion prediction; pose representation; Predictive models; recurrent neural network; Task analysis; Three-dimensional displays
Issue Date: 2022
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
Abstract: Predicting human motion from a historical pose sequence is crucial for a machine to succeed in intelligent interactions with humans. One aspect that has been overlooked so far is that how we represent the skeletal pose has a critical impact on the prediction results, yet no effort has been made to investigate different pose representation schemes. We conduct an in-depth study of various pose representations, with a focus on their effects on the motion prediction task. Moreover, recent approaches build upon off-the-shelf RNN units for motion prediction. These approaches process the input pose sequence sequentially and inherently have difficulty capturing long-term dependencies. In this paper, we propose a novel RNN architecture, termed AHMR, for motion prediction, which simultaneously models local motion contexts and a global context. We further explore a geodesic loss and a forward kinematics loss, which have more geometric significance than the widely employed L2 loss. Interestingly, we apply our method to a range of articulated objects, including humans, fish, and mice. Empirical results show that our approach outperforms state-of-the-art methods in short-term prediction and achieves much better long-term prediction, such as retaining natural, human-like motion over predictions of 50 seconds. Our code is released.
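
Note on the losses mentioned in the abstract: the geodesic loss and the forward kinematics loss are geometrically more meaningful alternatives to the usual L2 loss on pose parameters. The paper's exact formulation is not reproduced on this record page; the sketch below is only a minimal PyTorch illustration of the two ideas, assuming joint rotations are predicted as 3x3 rotation matrices and the skeleton is described by per-joint bone offsets and a parent index array. The function names, tensor shapes, and the helper fk_positions are hypothetical, not the authors' implementation.

import torch

def geodesic_loss(R_pred, R_true, eps=1e-7):
    """Mean geodesic distance (rotation angle) between batches of
    rotation matrices with shape (..., 3, 3)."""
    # Relative rotation; equals the identity when prediction matches ground truth.
    R_rel = torch.matmul(R_pred.transpose(-1, -2), R_true)
    # Rotation angle recovered from the trace, clamped for numerical stability.
    cos = (R_rel.diagonal(dim1=-2, dim2=-1).sum(-1) - 1.0) / 2.0
    return torch.acos(torch.clamp(cos, -1.0 + eps, 1.0 - eps)).mean()

def fk_positions(R, offsets, parents):
    """Toy forward kinematics: per-joint local rotations R of shape (B, J, 3, 3),
    rest-pose bone offsets of shape (J, 3), and a parent index per joint
    (parents[0] == -1 marks the root)."""
    glob, pos = [], []
    for j, p in enumerate(parents):
        if p < 0:  # root joint: global rotation is its local rotation, position at origin
            glob.append(R[:, j])
            pos.append(torch.zeros(R.shape[0], 3, device=R.device, dtype=R.dtype))
        else:      # child joint: compose rotations along the kinematic chain
            glob.append(torch.matmul(glob[p], R[:, j]))
            pos.append(pos[p] + torch.matmul(glob[p], offsets[j]))
    return torch.stack(pos, dim=1)  # (B, J, 3) joint positions

def forward_kinematics_loss(R_pred, R_true, offsets, parents):
    """L2 distance between joint positions obtained by running forward
    kinematics on predicted and ground-truth rotations."""
    p_pred = fk_positions(R_pred, offsets, parents)
    p_true = fk_positions(R_true, offsets, parents)
    return torch.norm(p_pred - p_true, dim=-1).mean()

In practice, terms of this kind are typically added to the primary prediction loss with tuned weights; the weighting scheme used in the paper is not specified on this page.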
Persistent Identifier: http://hdl.handle.net/10722/321977
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158
ISI Accession Number ID: WOS:000899419900042

 

DC Field: Value
dc.contributor.author: Liu, Zhenguang
dc.contributor.author: Wu, Shuang
dc.contributor.author: Jin, Shuyuan
dc.contributor.author: Liu, Qi
dc.contributor.author: Ji, Shouling
dc.contributor.author: Lu, Shijian
dc.contributor.author: Cheng, Li
dc.date.accessioned: 2022-11-03T02:22:45Z
dc.date.available: 2022-11-03T02:22:45Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/321977
dc.description.abstract: Predicting human motion from a historical pose sequence is crucial for a machine to succeed in intelligent interactions with humans. One aspect that has been overlooked so far is that how we represent the skeletal pose has a critical impact on the prediction results, yet no effort has been made to investigate different pose representation schemes. We conduct an in-depth study of various pose representations, with a focus on their effects on the motion prediction task. Moreover, recent approaches build upon off-the-shelf RNN units for motion prediction. These approaches process the input pose sequence sequentially and inherently have difficulty capturing long-term dependencies. In this paper, we propose a novel RNN architecture, termed AHMR, for motion prediction, which simultaneously models local motion contexts and a global context. We further explore a geodesic loss and a forward kinematics loss, which have more geometric significance than the widely employed L2 loss. Interestingly, we apply our method to a range of articulated objects, including humans, fish, and mice. Empirical results show that our approach outperforms state-of-the-art methods in short-term prediction and achieves much better long-term prediction, such as retaining natural, human-like motion over predictions of 50 seconds. Our code is released.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.subject: Context modeling
dc.subject: Joints
dc.subject: kinematic chain
dc.subject: Kinematics
dc.subject: Mice
dc.subject: motion context
dc.subject: Motion prediction
dc.subject: pose representation
dc.subject: Predictive models
dc.subject: recurrent neural network
dc.subject: Task analysis
dc.subject: Three-dimensional displays
dc.title: Investigating Pose Representations and Motion Contexts Modeling for 3D Motion Prediction
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TPAMI.2021.3139918
dc.identifier.scopus: eid_2-s2.0-85122574743
dc.identifier.eissn: 1939-3539
dc.identifier.isi: WOS:000899419900042