Conference Paper: Scene-aware Generative Network for Human Motion Synthesis

Title: Scene-aware Generative Network for Human Motion Synthesis
Authors: Wang, Jingbo; Yan, Sijie; Dai, Bo; Lin, Dahua
Issue Date: 2021
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, p. 12201-12210
Abstract: In this paper, we revisit human motion synthesis, a task useful in various real-world applications. Although a number of methods have been developed for this task, they are often limited in two respects: 1) they focus on body poses while leaving location movement behind, and 2) they ignore the impact of the environment on human motion. We propose a new framework that takes the interaction between the scene and the human motion into account. Given the uncertainty of human motion, we formulate the task as a generative one, whose objective is to produce plausible human motion conditioned on both the scene and the human's initial position. The framework factorizes the distribution of human motions into a distribution of movement trajectories conditioned on scenes and a distribution of body pose dynamics conditioned on both scenes and trajectories (a notational sketch of this factorization follows the record below). We further derive a GAN-based learning approach, with discriminators that enforce both the compatibility between the human motion and the contextual scene and the 3D-to-2D projection constraints. We assess the effectiveness of the proposed method on two challenging datasets covering both synthetic and real-world environments.
Persistent Identifier: http://hdl.handle.net/10722/352267
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
ISI Accession Number ID: WOS:000742075002040
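The factorization described in the abstract can be written compactly. A notational sketch, where the symbols M (full motion), S (scene), x_0 (initial position), tau (movement trajectory), and P (body pose dynamics) are our own shorthand, not taken from the paper:

```latex
% Factorization of the motion distribution described in the abstract.
% Symbols are our own shorthand: M = full human motion, S = scene,
% x_0 = initial position, \tau = movement trajectory, P = pose dynamics.
p(M \mid S, x_0)
  = \underbrace{p(\tau \mid S, x_0)}_{\text{trajectory given scene}}
    \cdot
    \underbrace{p(P \mid S, \tau)}_{\text{poses given scene and trajectory}}
```

Under this reading, sampling proceeds in two stages: draw a trajectory conditioned on the scene and start position, then draw pose dynamics conditioned on the scene and that trajectory; the discriminators mentioned in the abstract score scene compatibility and 3D-to-2D projection consistency.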


DC Field: Value
dc.contributor.author: Wang, Jingbo
dc.contributor.author: Yan, Sijie
dc.contributor.author: Dai, Bo
dc.contributor.author: Lin, Dahua
dc.date.accessioned: 2024-12-16T03:57:41Z
dc.date.available: 2024-12-16T03:57:41Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, p. 12201-12210
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/352267
dc.description.abstract: In this paper, we revisit human motion synthesis, a task useful in various real-world applications. Although a number of methods have been developed for this task, they are often limited in two respects: 1) they focus on body poses while leaving location movement behind, and 2) they ignore the impact of the environment on human motion. We propose a new framework that takes the interaction between the scene and the human motion into account. Given the uncertainty of human motion, we formulate the task as a generative one, whose objective is to produce plausible human motion conditioned on both the scene and the human's initial position. The framework factorizes the distribution of human motions into a distribution of movement trajectories conditioned on scenes and a distribution of body pose dynamics conditioned on both scenes and trajectories. We further derive a GAN-based learning approach, with discriminators that enforce both the compatibility between the human motion and the contextual scene and the 3D-to-2D projection constraints. We assess the effectiveness of the proposed method on two challenging datasets covering both synthetic and real-world environments.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.title: Scene-aware Generative Network for Human Motion Synthesis
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR46437.2021.01203
dc.identifier.scopus: eid_2-s2.0-85123218429
dc.identifier.spage: 12201
dc.identifier.epage: 12210
dc.identifier.isi: WOS:000742075002040

This record can be exported via the OAI-PMH interface in XML formats, or exported to other non-XML formats.
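A minimal Python sketch of such an OAI-PMH export. The verb and metadataPrefix parameters are standard OAI-PMH; the endpoint URL and the oai identifier scheme below are assumptions about this repository, not confirmed by the record:

```python
# Minimal sketch: fetch this record's Dublin Core metadata via OAI-PMH.
# Assumptions: the endpoint URL and the oai:hub.hku.hk:<handle> identifier
# scheme are guesses for this repository; "GetRecord" and "oai_dc" are
# standard OAI-PMH protocol values.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://hub.hku.hk/oai/request"  # assumed endpoint
params = {
    "verb": "GetRecord",
    "metadataPrefix": "oai_dc",
    "identifier": "oai:hub.hku.hk:10722/352267",  # assumed identifier scheme
}

with urlopen(f"{BASE}?{urlencode(params)}") as resp:
    print(resp.read().decode("utf-8"))  # raw oai_dc XML for this record
```

The returned XML would carry the same dc.* fields listed in the record above.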