Conference Paper: 3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping

Title: 3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping
Authors: Yang, Zhuoqian; Li, Shikai; Wu, Wayne; Dai, Bo
Issue Date: 2023
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 22951-22962
Abstract: We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photo-like images of full-body humans with consistent appearances under different view-angles and body-poses. To tackle the representational and computational challenges in synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates consistent images under varying view-angles and poses; iii) the model can incorporate the 3D human prior and enable pose conditioning. Our model is adversarially learned from a collection of web images needless of manual annotation.
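The abstract's core idea — a 2D convolutional backbone whose features are modulated channel-wise by parameters produced from a posed 3D human mesh — can be illustrated with a minimal sketch. This is not the authors' implementation: the function and variable names are hypothetical, and a random linear projection stands in for the learned, renderable implicit function described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pose_mapping_network(mesh_vertices, latent, out_channels=8):
    """Hypothetical stand-in for the 3D pose mapping network: maps a posed
    3D mesh plus a latent code to channel-wise modulation parameters
    (scale, shift). The real model renders an implicit function conditioned
    on the mesh; here we pool the geometry and apply a fixed random
    projection purely for illustration."""
    descriptor = np.concatenate([mesh_vertices.mean(axis=0), latent])
    W = rng.standard_normal((descriptor.size, 2 * out_channels)) * 0.1
    params = descriptor @ W
    scale, shift = params[:out_channels], params[out_channels:]
    return scale, shift

def modulated_backbone(features, scale, shift):
    """Modulate 2D backbone features channel-wise (affine modulation in the
    spirit of StyleGAN/SPADE-style conditioning)."""
    return features * (1.0 + scale[:, None, None]) + shift[:, None, None]

mesh = rng.standard_normal((6890, 3))    # SMPL-sized posed vertex set (assumption)
z = rng.standard_normal(16)              # appearance latent code
feat = rng.standard_normal((8, 32, 32))  # 2D backbone feature map (C, H, W)

scale, shift = pose_mapping_network(mesh, z)
out = modulated_backbone(feat, scale, shift)
print(out.shape)  # (8, 32, 32): spatial resolution is preserved
```

Changing the mesh pose changes the modulation parameters while the appearance latent stays fixed, which is the mechanism the abstract credits for pose conditioning with consistent appearance.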
Persistent Identifier: http://hdl.handle.net/10722/352399
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263

DC Field                  Value                                                                                       Language
dc.contributor.author     Yang, Zhuoqian                                                                              -
dc.contributor.author     Li, Shikai                                                                                  -
dc.contributor.author     Wu, Wayne                                                                                   -
dc.contributor.author     Dai, Bo                                                                                     -
dc.date.accessioned       2024-12-16T03:58:42Z                                                                        -
dc.date.available         2024-12-16T03:58:42Z                                                                        -
dc.date.issued            2023                                                                                        -
dc.identifier.citation    Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 22951-22962   -
dc.identifier.issn        1550-5499                                                                                   -
dc.identifier.uri         http://hdl.handle.net/10722/352399                                                          -
dc.description.abstract   We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photo-like images of full-body humans with consistent appearances under different view-angles and body-poses. To tackle the representational and computational challenges in synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates consistent images under varying view-angles and poses; iii) the model can incorporate the 3D human prior and enable pose conditioning. Our model is adversarially learned from a collection of web images needless of manual annotation.   -
dc.language               eng                                                                                         -
dc.relation.ispartof      Proceedings of the IEEE International Conference on Computer Vision                         -
dc.title                  3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping                            -
dc.type                   Conference_Paper                                                                            -
dc.description.nature     link_to_subscribed_fulltext                                                                 -
dc.identifier.doi         10.1109/ICCV51070.2023.02103                                                                -
dc.identifier.scopus      eid_2-s2.0-85182550037                                                                      -
dc.identifier.spage       22951                                                                                       -
dc.identifier.epage       22962                                                                                       -
