Article: E3DGE: Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion

Title: E3DGE: Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion
Authors: Lan, Yushi; Meng, Xuyi; Yang, Shuai; Loy, Chen Change; Dai, Bo
Issue Date: 14-Jun-2025
Publisher: Springer
Citation: International Journal of Computer Vision, 2025
Abstract: StyleGAN has excelled in 2D face reconstruction and semantic editing, but the extension to 3D lacks a generic inversion framework, limiting its applications in 3D reconstruction. In this paper, we address the challenge of 3D GAN inversion, focusing on predicting a latent code from a single 2D image to faithfully recover 3D shapes and textures. The inherent ill-posed nature of the problem, coupled with the limited capacity of global latent codes, presents significant challenges. To overcome these challenges, we introduce an efficient self-training scheme that does not rely on real-world 2D-3D pairs but instead utilizes proxy samples generated from a 3D GAN. Additionally, our approach goes beyond the global latent code by enhancing the generation network with a local branch. This branch incorporates pixel-aligned features to accurately reconstruct texture details. Furthermore, we introduce a novel pipeline for 3D view-consistent editing. The efficacy of our method is validated on two representative 3D GANs, namely StyleSDF and EG3D. Through extensive experiments, we demonstrate that our approach consistently outperforms state-of-the-art inversion methods, delivering superior quality in both shape and texture reconstruction.
Persistent Identifier: http://hdl.handle.net/10722/358582
ISSN: 0920-5691
2023 Impact Factor: 11.6
2023 SCImago Journal Rankings: 6.668

 

DC Field | Value | Language
dc.contributor.author | Lan, Yushi | -
dc.contributor.author | Meng, Xuyi | -
dc.contributor.author | Yang, Shuai | -
dc.contributor.author | Loy, Chen Change | -
dc.contributor.author | Dai, Bo | -
dc.date.accessioned | 2025-08-07T00:33:13Z | -
dc.date.available | 2025-08-07T00:33:13Z | -
dc.date.issued | 2025-06-14 | -
dc.identifier.citation | International Journal of Computer Vision, 2025 | -
dc.identifier.issn | 0920-5691 | -
dc.identifier.uri | http://hdl.handle.net/10722/358582 | -
dc.description.abstract | StyleGAN has excelled in 2D face reconstruction and semantic editing, but the extension to 3D lacks a generic inversion framework, limiting its applications in 3D reconstruction. In this paper, we address the challenge of 3D GAN inversion, focusing on predicting a latent code from a single 2D image to faithfully recover 3D shapes and textures. The inherent ill-posed nature of the problem, coupled with the limited capacity of global latent codes, presents significant challenges. To overcome these challenges, we introduce an efficient self-training scheme that does not rely on real-world 2D-3D pairs but instead utilizes proxy samples generated from a 3D GAN. Additionally, our approach goes beyond the global latent code by enhancing the generation network with a local branch. This branch incorporates pixel-aligned features to accurately reconstruct texture details. Furthermore, we introduce a novel pipeline for 3D view-consistent editing. The efficacy of our method is validated on two representative 3D GANs, namely StyleSDF and EG3D. Through extensive experiments, we demonstrate that our approach consistently outperforms state-of-the-art inversion methods, delivering superior quality in both shape and texture reconstruction. | -
dc.language | eng | -
dc.publisher | Springer | -
dc.relation.ispartof | International Journal of Computer Vision | -
dc.title | E3DGE: Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion | -
dc.type | Article | -
dc.identifier.doi | 10.1007/s11263-025-02496-2 | -
dc.identifier.scopus | eid_2-s2.0-105007854789 | -
dc.identifier.eissn | 1573-1405 | -
dc.identifier.issnl | 0920-5691 | -
