Conference Paper: From Synthetic to One-Shot Regression of Camera-Agnostic Human Performances

Title: From Synthetic to One-Shot Regression of Camera-Agnostic Human Performances
Authors: Habekost, J; Pang, K; Shiratori, T; Komura, T
Keywords: Human performance; Monocular video; Synthetic data
Issue Date: 2022
Publisher: Springer
Citation: Third International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI), Paris, France, June 1–3, 2022. In Pattern Recognition and Artificial Intelligence: Third International Conference, ICPRAI 2022, Paris, France, June 1–3, 2022. Proceedings, Part I, p. 514–525
Abstract: Capturing accurate 3D human performances in global space from a static monocular video is an ill-posed problem: it requires resolving various depth ambiguities as well as knowledge of the camera’s intrinsics and extrinsics. Most methods therefore either learn on given cameras or require the camera’s parameters to be known. We instead show that a camera’s extrinsics and intrinsics can be regressed jointly with the human’s position in global space, joint angles and body shape from long sequences of 2D motion estimates alone. We exploit a static camera’s constant parameters by training a model that can be applied to sequences of arbitrary length in a single forward pass while allowing full bidirectional information flow. We show that full temporal information flow is especially necessary when improving consistency through an adversarial network. Our training dataset is exclusively synthetic, and no domain adaptation is used. We achieve one of the best joint error performances on Human3.6M among models that do not use the Human3.6M training data.
Description: Lecture Notes in Computer Science book series; volume 13363
Persistent Identifier: http://hdl.handle.net/10722/321056
DC Field: Value
dc.contributor.author: Habekost, J
dc.contributor.author: Pang, K
dc.contributor.author: Shiratori, T
dc.contributor.author: Komura, T
dc.date.accessioned: 2022-11-01T04:46:09Z
dc.date.available: 2022-11-01T04:46:09Z
dc.date.issued: 2022
dc.identifier.citation: Third International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI), Paris, France, June 1–3, 2022. In Pattern Recognition and Artificial Intelligence: Third International Conference, ICPRAI 2022, Paris, France, June 1–3, 2022. Proceedings, Part I, p. 514–525
dc.identifier.uri: http://hdl.handle.net/10722/321056
dc.description: Lecture Notes in Computer Science book series; volume 13363
dc.description.abstract: Capturing accurate 3D human performances in global space from a static monocular video is an ill-posed problem: it requires resolving various depth ambiguities as well as knowledge of the camera’s intrinsics and extrinsics. Most methods therefore either learn on given cameras or require the camera’s parameters to be known. We instead show that a camera’s extrinsics and intrinsics can be regressed jointly with the human’s position in global space, joint angles and body shape from long sequences of 2D motion estimates alone. We exploit a static camera’s constant parameters by training a model that can be applied to sequences of arbitrary length in a single forward pass while allowing full bidirectional information flow. We show that full temporal information flow is especially necessary when improving consistency through an adversarial network. Our training dataset is exclusively synthetic, and no domain adaptation is used. We achieve one of the best joint error performances on Human3.6M among models that do not use the Human3.6M training data.
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: Pattern Recognition and Artificial Intelligence: Third International Conference, ICPRAI 2022, Paris, France, June 1–3, 2022. Proceedings, Part I
dc.subject: Human performance
dc.subject: Monocular video
dc.subject: Synthetic data
dc.title: From Synthetic to One-Shot Regression of Camera-Agnostic Human Performances
dc.type: Conference_Paper
dc.identifier.email: Komura, T: taku@cs.hku.hk
dc.identifier.authority: Komura, T=rp02741
dc.identifier.doi: 10.1007/978-3-031-09037-0_42
dc.identifier.hkuros: 340628
dc.identifier.spage: 514
dc.identifier.epage: 525
dc.publisher.place: Cham, Switzerland
