Conference Paper: 3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping
Title | 3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping |
---|---|
Authors | Yang, Zhuoqian; Li, Shikai; Wu, Wayne; Dai, Bo |
Issue Date | 2023 |
Citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 22951-22962 |
Abstract | We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photo-like images of full-body humans with consistent appearances under different view-angles and body-poses. To tackle the representational and computational challenges in synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates consistent images under varying view-angles and poses; iii) the model can incorporate the 3D human prior and enable pose conditioning. Our model is adversarially learned from a collection of web images without the need for manual annotation. |
Persistent Identifier | http://hdl.handle.net/10722/352399 |
ISSN | 1550-5499 (2023 SCImago Journal Rankings: 12.263) |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, Zhuoqian | - |
dc.contributor.author | Li, Shikai | - |
dc.contributor.author | Wu, Wayne | - |
dc.contributor.author | Dai, Bo | - |
dc.date.accessioned | 2024-12-16T03:58:42Z | - |
dc.date.available | 2024-12-16T03:58:42Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 22951-22962 | - |
dc.identifier.issn | 1550-5499 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352399 | - |
dc.description.abstract | We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photo-like images of full-body humans with consistent appearances under different view-angles and body-poses. To tackle the representational and computational challenges in synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates consistent images under varying view-angles and poses; iii) the model can incorporate the 3D human prior and enable pose conditioning. Our model is adversarially learned from a collection of web images without the need for manual annotation. | -
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | - |
dc.title | 3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCV51070.2023.02103 | - |
dc.identifier.scopus | eid_2-s2.0-85182550037 | - |
dc.identifier.spage | 22951 | - |
dc.identifier.epage | 22962 | - |
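The abstract describes a generator in which a 2D convolutional backbone is modulated by a pose-conditioned 3D mapping network. A minimal NumPy sketch of that modulation idea is below; all dimensions, function names, and the averaging "renderer" are placeholder assumptions for illustration, not the paper's actual implementation (which uses a renderable implicit function over a posed 3D human mesh):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
H = W = 8   # feature-map resolution
C = 4       # backbone channels
P = 3       # 3D point coordinate dimension

def pose_mapping_network(points, weights):
    """Toy stand-in for the 3D pose mapping network: an implicit
    function mapping 3D points (sampled near a posed human mesh)
    to per-point modulation features."""
    return np.tanh(points @ weights)  # shape (N, C)

def render_to_feature_map(feats, H, W, C):
    """Toy 'rendering' step: pool per-point features into a per-pixel
    modulation map. The real method renders the implicit function."""
    return np.broadcast_to(feats.mean(axis=0), (H, W, C))

def modulated_backbone(x, mod):
    """Scale the 2D backbone's activations channel-wise by the
    pose-conditioned modulation map (StyleGAN/SPADE-style modulation)."""
    return x * (1.0 + mod)

points = rng.normal(size=(64, P))       # placeholder mesh-surface samples
weights = rng.normal(size=(P, C)) * 0.1
x = rng.normal(size=(H, W, C))          # placeholder backbone feature map

mod = render_to_feature_map(pose_mapping_network(points, weights), H, W, C)
y = modulated_backbone(x, mod)
print(y.shape)  # (8, 8, 4)
```

Because the pose signal enters only as a modulation of 2D features, the backbone retains the image quality of a 2D GAN while the 3D conditioning keeps outputs consistent across view-angles and poses, matching the merits listed in the abstract.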