Conference Paper: DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models

Title: DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models
Authors: Cao, Yukang; Cao, Yan-Pei; Han, Kai; Shan, Ying; Wong, Kwan-Yee K
Issue Date: 17-Jun-2024
Abstract

We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars with controllable poses. While encouraging results have been reported by recent methods on text-guided 3D common object generation, generating high-quality human avatars remains an open challenge due to the complexity of the human body’s shape, pose, and appearance. We propose DreamAvatar to tackle this challenge, which utilizes a trainable NeRF for predicting density and color for 3D points and pretrained text-to-image diffusion models for providing 2D self-supervision. Specifically, we leverage the SMPL model to provide shape and pose guidance for the generation. We introduce a dual-observation-space design that involves the joint optimization of a canonical space and a posed space that are related by a learnable deformation field. This facilitates the generation of more complete textures and geometry faithful to the target pose. We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face “Janus” problem and improve facial details in the generated avatars. Extensive evaluations demonstrate that DreamAvatar significantly outperforms existing methods, establishing a new state-of-the-art for text-and-shape guided 3D human avatar generation.
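A note for readers connecting the abstract to practice: the "2D self-supervision" that a pretrained text-to-image diffusion model provides is, in this line of work, typically realized as score distillation sampling (SDS), introduced by DreamFusion. A render z of the NeRF is noised to z_t and the gradient ∇_θ L_SDS = E_{t,ε}[ w(t) (ε_φ(z_t; y, t) - ε) ∂z/∂θ ] is pushed into the NeRF parameters θ, where y is the text prompt and ε_φ the frozen denoiser. The sketch below is a minimal, hypothetical PyTorch rendering of the dual-observation-space loop the abstract describes, under stated assumptions: every name in it (CanonicalNeRF, DeformationField, sds_placeholder, smpl_unpose) is illustrative rather than the authors' code, the diffusion term is stubbed out, and volume rendering is reduced to a density-weighted color average.

import torch
import torch.nn as nn

class CanonicalNeRF(nn.Module):
    # Trainable NeRF: maps a canonical-space 3D point to (density, RGB).
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x):
        out = self.net(x)
        density = torch.relu(out[..., :1])    # non-negative volume density
        color = torch.sigmoid(out[..., 1:])   # RGB in [0, 1]
        return density, color

class DeformationField(nn.Module):
    # Learnable residual warp relating the posed space to the canonical
    # space; per the abstract, SMPL provides the shape/pose guidance that
    # seeds this mapping.
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_posed, smpl_unpose):
        # Coarse SMPL-based unposing plus a learned residual offset.
        return smpl_unpose(x_posed) + self.net(x_posed)

def sds_placeholder(rendered, prompt):
    # Stand-in for score distillation: a real implementation noises the
    # render, queries a frozen text-to-image denoiser conditioned on
    # `prompt`, and backpropagates the noise residual into the render.
    return rendered.square().mean()

nerf, deform = CanonicalNeRF(), DeformationField()
opt = torch.optim.Adam(list(nerf.parameters()) + list(deform.parameters()), lr=1e-3)
smpl_unpose = lambda x: x   # hypothetical stand-in for SMPL inverse skinning

for step in range(100):
    x_posed = torch.rand(4096, 3) * 2 - 1             # points in posed space
    x_canon = deform(x_posed, smpl_unpose)            # shared canonical space
    density, color = nerf(x_canon)
    # The real pipeline volume-renders both spaces from random cameras;
    # density-weighted colors stand in for a rendered image here.
    w = density / (density.sum() + 1e-8)
    body_render = (w * color).sum(dim=0)              # full-body "view"
    head_render = (w[:512] * color[:512]).sum(dim=0)  # zoomed-in head "view"
    loss = (sds_placeholder(body_render, "a person")
            + sds_placeholder(head_render, "the face of a person"))
    opt.zero_grad(); loss.backward(); opt.step()

Summing a full-body term with a zoomed-in head term, as in the last lines of the loop, mirrors the abstract's stated remedy for the multi-face "Janus" problem: the head crop gives the diffusion prior a consistent, high-resolution view of the face during optimization.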


Persistent Identifier: http://hdl.handle.net/10722/345733

DC Field                   Value
dc.contributor.author      Cao, Yukang
dc.contributor.author      Cao, Yan-Pei
dc.contributor.author      Han, Kai
dc.contributor.author      Shan, Ying
dc.contributor.author      Wong, Kwan-Yee K
dc.date.accessioned        2024-08-27T09:10:49Z
dc.date.available          2024-08-27T09:10:49Z
dc.date.issued             2024-06-17
dc.identifier.uri          http://hdl.handle.net/10722/345733
dc.description.abstract    See Abstract above.
dc.language                eng
dc.relation.ispartof       2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2024-21/06/2024)
dc.title                   DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models
dc.type                    Conference_Paper
dc.identifier.spage        958
dc.identifier.epage        968
