Conference Paper: DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models
Title | DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models |
---|---|
Authors | Cao, Yukang; Cao, Yan-Pei; Han, Kai; Shan, Ying; Wong, Kwan-Yee K |
Issue Date | 17-Jun-2024 |
Abstract | We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars with controllable poses. While encouraging results have been reported by recent methods on text-guided 3D common object generation, generating high-quality human avatars remains an open challenge due to the complexity of the human body’s shape, pose, and appearance. We propose DreamAvatar to tackle this challenge, which utilizes a trainable NeRF for predicting density and color for 3D points and pretrained text-to-image diffusion models for providing 2D self-supervision. Specifically, we leverage the SMPL model to provide shape and pose guidance for the generation. We introduce a dual-observation-space design that involves the joint optimization of a canonical space and a posed space that are related by a learnable deformation field. This facilitates the generation of more complete textures and geometry faithful to the target pose. We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face “Janus” problem and improve facial details in the generated avatars. Extensive evaluations demonstrate that DreamAvatar significantly outperforms existing methods, establishing a new state-of-the-art for text-and-shape guided 3D human avatar generation. |
Persistent Identifier | http://hdl.handle.net/10722/345733 |
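
The abstract above describes the method at a high level. As a rough illustration only, the following minimal PyTorch sketch shows the dual-observation-space idea: a single trainable field queried both at canonical-space points and at posed-space points mapped through a learnable deformation field, jointly optimized under a guidance loss. Every name here (`RadianceField`, `DeformationField`, `sds_guidance`) is hypothetical; the actual DreamAvatar pipeline uses SMPL-derived shape and pose guidance, full NeRF volume rendering, and Score Distillation Sampling against a pretrained text-to-image diffusion model, none of which are reproduced below.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn

class RadianceField(nn.Module):
    """Tiny MLP standing in for the trainable NeRF: 3D point -> (density, RGB)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
        )

    def forward(self, x: torch.Tensor):
        out = self.net(x)
        density = torch.relu(out[..., :1])   # densities must be non-negative
        color = torch.sigmoid(out[..., 1:])  # RGB in [0, 1]
        return density, color

class DeformationField(nn.Module):
    """Learnable offsets mapping posed-space points into the canonical space."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_posed: torch.Tensor) -> torch.Tensor:
        return x_posed + self.net(x_posed)  # canonical point = posed point + offset

def sds_guidance(rendered: torch.Tensor, prompt: str) -> torch.Tensor:
    """Stand-in for Score Distillation Sampling: a real implementation would
    noise the render, denoise it with a pretrained diffusion model conditioned
    on `prompt`, and backpropagate the residual. Here we return a placeholder
    scalar so the sketch runs end to end."""
    return rendered.var()

field = RadianceField()
deform = DeformationField()
opt = torch.optim.Adam(list(field.parameters()) + list(deform.parameters()), lr=1e-3)

prompt = "a photorealistic 3D human avatar"
for step in range(100):
    # Canonical-space branch: query the shared field at canonical points directly.
    _, color_canon = field(torch.rand(1024, 3) * 2 - 1)
    # Posed-space branch: deform posed points into canonical space, then query.
    _, color_posed = field(deform(torch.rand(1024, 3) * 2 - 1))
    # Joint optimization of both observation spaces; the paper additionally
    # applies a second guidance loss to a zoomed-in render of the head to
    # mitigate the multi-face "Janus" problem.
    loss = sds_guidance(color_canon, prompt) + sds_guidance(color_posed, prompt)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because both branches share one radiance field, supervision in the posed space also constrains the canonical space (and vice versa), which is the mechanism the abstract credits for more complete textures and pose-faithful geometry.
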
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cao, Yukang | - |
dc.contributor.author | Cao, Yan-Pei | - |
dc.contributor.author | Han, Kai | - |
dc.contributor.author | Shan, Ying | - |
dc.contributor.author | Wong, Kwan-Yee K | - |
dc.date.accessioned | 2024-08-27T09:10:49Z | - |
dc.date.available | 2024-08-27T09:10:49Z | - |
dc.date.issued | 2024-06-17 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345733 | - |
dc.description.abstract | We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars with controllable poses. While encouraging results have been reported by recent methods on text-guided 3D common object generation, generating high-quality human avatars remains an open challenge due to the complexity of the human body’s shape, pose, and appearance. We propose DreamAvatar to tackle this challenge, which utilizes a trainable NeRF for predicting density and color for 3D points and pretrained text-to-image diffusion models for providing 2D self-supervision. Specifically, we leverage the SMPL model to provide shape and pose guidance for the generation. We introduce a dual-observation-space design that involves the joint optimization of a canonical space and a posed space that are related by a learnable deformation field. This facilitates the generation of more complete textures and geometry faithful to the target pose. We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face “Janus” problem and improve facial details in the generated avatars. Extensive evaluations demonstrate that DreamAvatar significantly outperforms existing methods, establishing a new state-of-the-art for text-and-shape guided 3D human avatar generation. | -
dc.language | eng | - |
dc.relation.ispartof | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2024-21/06/2024) | - |
dc.title | DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models | - |
dc.type | Conference_Paper | - |
dc.identifier.spage | 958 | - |
dc.identifier.epage | 968 | - |