Article: Neural Rendering and Reenactment of Human Actor Videos

Title: Neural Rendering and Reenactment of Human Actor Videos
Authors: LIU, L; Xu, W; Zollhöfer, M; Kim, H; Bernard, F; Habermann, M; Wang, W; Theobalt, C
Keywords: Conditional GAN; Deep learning; Neural rendering; Rendering-to-video translation; Video-based characters
Issue Date: 2019
Publisher: Association for Computing Machinery, Inc. The journal's web site is located at http://tog.acm.org
Citation: ACM Transactions on Graphics, 2019, v. 38, n. 5, article no. 139
Abstract: We propose a method for generating video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic three-dimensional (3D) model of the human but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. With that, our approach significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. For training our networks, we first track the 3D motion of the person in the video using the template model and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method for the reenactment of another person that is tracked to obtain the motion data, and show video results generated from artist-designed skeleton motion. Our results outperform the state of the art in learning-based human image synthesis.
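The three-stage pipeline described in the abstract (3D motion tracking, synthetic re-rendering, and paired conditional-GAN translation) can be sketched in outline. This is an illustrative skeleton only: all function names and data representations are hypothetical placeholders, not the authors' code, and the GAN training itself is stubbed out.

```python
# Hypothetical sketch of the training pipeline from the abstract.
# Stage 1: track per-frame 3D skeletal motion with the template model.
# Stage 2: re-render the posed template into simple synthetic images.
# Stage 3: train a conditional GAN on (synthetic, real) frame pairs.

def track_motion(video_frames, template_model):
    """Stage 1: estimate per-frame 3D pose parameters (stubbed)."""
    return [{"frame": i, "pose": "pose_params"} for i, _ in enumerate(video_frames)]

def render_synthetic(template_model, poses):
    """Stage 2: render the posed template as synthetic conditioning images."""
    return [f"synthetic_render_{p['frame']}" for p in poses]

def train_conditional_gan(synthetic_frames, real_frames):
    """Stage 3: paired image-to-image translation.
    A real system would optimize a generator G and discriminator D with an
    adversarial loss over these pairs; here the 'generator' is a placeholder
    closure and we simply count the training pairs."""
    pairs = list(zip(synthetic_frames, real_frames))
    return (lambda synthetic: f"realistic({synthetic})", len(pairs))

video = ["frame0", "frame1", "frame2"]
poses = track_motion(video, template_model="medium_quality_template")
synthetic = render_synthetic("medium_quality_template", poses)
generator, n_pairs = train_conditional_gan(synthetic, video)
print(n_pairs)                        # one training pair per video frame
print(generator("synthetic_render_0"))
```

At reenactment time, the same generator would be fed synthetic renders of the template driven by new motion (tracked from another person, or artist-designed), which is what makes the character controllable.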
Persistent Identifier: http://hdl.handle.net/10722/293927
ISSN: 0730-0301
2021 Impact Factor: 7.403
2020 SCImago Journal Rankings: 2.153
ISI Accession Number: WOS:000494271400002

 

DC Field: Value

dc.contributor.author: LIU, L
dc.contributor.author: Xu, W
dc.contributor.author: Zollhöfer, M
dc.contributor.author: Kim, H
dc.contributor.author: Bernard, F
dc.contributor.author: Habermann, M
dc.contributor.author: Wang, W
dc.contributor.author: Theobalt, C
dc.date.accessioned: 2020-11-23T08:23:51Z
dc.date.available: 2020-11-23T08:23:51Z
dc.date.issued: 2019
dc.identifier.citation: ACM Transactions on Graphics, 2019, v. 38, n. 5, article no. 139
dc.identifier.issn: 0730-0301
dc.identifier.uri: http://hdl.handle.net/10722/293927
dc.description.abstract: We propose a method for generating video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic three-dimensional (3D) model of the human but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. With that, our approach significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. For training our networks, we first track the 3D motion of the person in the video using the template model and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method for the reenactment of another person that is tracked to obtain the motion data, and show video results generated from artist-designed skeleton motion. Our results outperform the state of the art in learning-based human image synthesis.
dc.language: eng
dc.publisher: Association for Computing Machinery, Inc. The journal's web site is located at http://tog.acm.org
dc.relation.ispartof: ACM Transactions on Graphics
dc.rights: ACM Transactions on Graphics. Copyright © Association for Computing Machinery, Inc.
dc.rights: ©ACM, YYYY. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn
dc.subject: Conditional GAN
dc.subject: Deep learning
dc.subject: Neural rendering
dc.subject: Rendering-to-video translation
dc.subject: Video-based characters
dc.title: Neural Rendering and Reenactment of Human Actor Videos
dc.type: Article
dc.identifier.email: Wang, W: wenping@cs.hku.hk
dc.identifier.authority: Wang, W=rp00186
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3333002
dc.identifier.scopus: eid_2-s2.0-85074443058
dc.identifier.hkuros: 319198
dc.identifier.volume: 38
dc.identifier.issue: 5
dc.identifier.spage: article no. 139
dc.identifier.epage: article no. 139
dc.identifier.isi: WOS:000494271400002
dc.publisher.place: United States
dc.identifier.issnl: 0730-0301
