Article: E3DGE: Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion
| Title | E3DGE: Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion |
|---|---|
| Authors | Lan, Yushi; Meng, Xuyi; Yang, Shuai; Loy, Chen Change; Dai, Bo |
| Issue Date | 14-Jun-2025 |
| Publisher | Springer |
| Citation | International Journal of Computer Vision, 2025 |
| Abstract | StyleGAN has excelled in 2D face reconstruction and semantic editing, but the extension to 3D lacks a generic inversion framework, limiting its applications in 3D reconstruction. In this paper, we address the challenge of 3D GAN inversion, focusing on predicting a latent code from a single 2D image to faithfully recover 3D shapes and textures. The inherent ill-posed nature of the problem, coupled with the limited capacity of global latent codes, presents significant challenges. To overcome these challenges, we introduce an efficient self-training scheme that does not rely on real-world 2D-3D pairs but instead utilizes proxy samples generated from a 3D GAN. Additionally, our approach goes beyond the global latent code by enhancing the generation network with a local branch. This branch incorporates pixel-aligned features to accurately reconstruct texture details. Furthermore, we introduce a novel pipeline for 3D view-consistent editing. The efficacy of our method is validated on two representative 3D GANs, namely StyleSDF and EG3D. Through extensive experiments, we demonstrate that our approach consistently outperforms state-of-the-art inversion methods, delivering superior quality in both shape and texture reconstruction. |
| Persistent Identifier | http://hdl.handle.net/10722/358582 |
| ISSN | 0920-5691 |
| 2023 Impact Factor | 11.6 |
| 2023 SCImago Journal Rankings | 6.668 |
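
The self-training scheme described in the abstract can be illustrated with a minimal sketch. Everything below is a toy assumption rather than the authors' implementation: `ToyGenerator`, `Encoder`, the camera parameterization, and the loss weighting are all placeholders. The point being illustrated is only the core idea from the abstract: a frozen 3D GAN supplies matched (latent, image) proxy pairs, so the inversion encoder can be trained without any real-world 2D-3D supervision.

```python
import torch
import torch.nn as nn

# --- Toy stand-ins; the paper uses pretrained StyleSDF/EG3D generators. ---

class ToyGenerator(nn.Module):
    """Frozen 'generator' mapping (latent, camera) -> image. Illustrative only."""
    def __init__(self, latent_dim=64, img_size=32):
        super().__init__()
        self.fc = nn.Linear(latent_dim + 3, 3 * img_size * img_size)
        self.img_size = img_size

    def render(self, w, cam):
        x = torch.cat([w, cam], dim=1)
        return self.fc(x).view(-1, 3, self.img_size, self.img_size)

class Encoder(nn.Module):
    """Trainable inversion encoder: image -> latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, img):
        return self.net(img)

def self_training_step(gen, enc, opt, batch=4, latent_dim=64):
    # Proxy supervision: the frozen generator provides both the ground-truth
    # latent and the rendered view, so no real 2D-3D pairs are required.
    with torch.no_grad():
        w = torch.randn(batch, latent_dim)   # sampled proxy latent codes
        cam = torch.randn(batch, 3)          # stand-in camera parameters
        img = gen.render(w, cam)             # proxy training images

    w_pred = enc(img)                        # invert: image -> latent
    img_rec = gen.render(w_pred, cam)        # re-render the prediction

    # Losses in latent space and image space; a full system would typically
    # add perceptual terms and more careful weighting.
    loss = nn.functional.mse_loss(w_pred, w) + nn.functional.l1_loss(img_rec, img)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

gen = ToyGenerator().eval()
for p in gen.parameters():
    p.requires_grad_(False)
enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-4)
print(self_training_step(gen, enc, opt))
```

Because the generator stays frozen, gradients flow through its renderer into the predicted latent, and only the encoder is updated.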
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lan, Yushi | - |
| dc.contributor.author | Meng, Xuyi | - |
| dc.contributor.author | Yang, Shuai | - |
| dc.contributor.author | Loy, Chen Change | - |
| dc.contributor.author | Dai, Bo | - |
| dc.date.accessioned | 2025-08-07T00:33:13Z | - |
| dc.date.available | 2025-08-07T00:33:13Z | - |
| dc.date.issued | 2025-06-14 | - |
| dc.identifier.citation | International Journal of Computer Vision, 2025 | - |
| dc.identifier.issn | 0920-5691 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/358582 | - |
| dc.description.abstract | StyleGAN has excelled in 2D face reconstruction and semantic editing, but the extension to 3D lacks a generic inversion framework, limiting its applications in 3D reconstruction. In this paper, we address the challenge of 3D GAN inversion, focusing on predicting a latent code from a single 2D image to faithfully recover 3D shapes and textures. The inherent ill-posed nature of the problem, coupled with the limited capacity of global latent codes, presents significant challenges. To overcome these challenges, we introduce an efficient self-training scheme that does not rely on real-world 2D-3D pairs but instead utilizes proxy samples generated from a 3D GAN. Additionally, our approach goes beyond the global latent code by enhancing the generation network with a local branch. This branch incorporates pixel-aligned features to accurately reconstruct texture details. Furthermore, we introduce a novel pipeline for 3D view-consistent editing. The efficacy of our method is validated on two representative 3D GANs, namely StyleSDF and EG3D. Through extensive experiments, we demonstrate that our approach consistently outperforms state-of-the-art inversion methods, delivering superior quality in both shape and texture reconstruction. | - |
| dc.language | eng | - |
| dc.publisher | Springer | - |
| dc.relation.ispartof | International Journal of Computer Vision | - |
| dc.title | E3DGE: Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1007/s11263-025-02496-2 | - |
| dc.identifier.scopus | eid_2-s2.0-105007854789 | - |
| dc.identifier.eissn | 1573-1405 | - |
| dc.identifier.issnl | 0920-5691 | - |
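
The abstract also states that the local branch "incorporates pixel-aligned features" to recover texture detail. Below is a minimal sketch of the generic project-and-sample pattern such features are commonly built from; it is not the paper's architecture. The `cam_proj` projection is a hypothetical callable assumed to return normalized image-plane coordinates in [-1, 1], and the usage example substitutes a toy orthographic projection that simply drops the z coordinate.

```python
import torch
import torch.nn.functional as F

def pixel_aligned_features(feat_map, points, cam_proj):
    """Sample per-point local features from a 2D encoder feature map.

    Each 3D query point is projected into the input view, and the feature
    map is bilinearly sampled at that pixel location.
    """
    uv = cam_proj(points)                        # (B, N, 2) image-plane coords
    grid = uv.unsqueeze(2)                       # (B, N, 1, 2) for grid_sample
    sampled = F.grid_sample(feat_map, grid, align_corners=True)  # (B, C, N, 1)
    return sampled.squeeze(-1).permute(0, 2, 1)  # (B, N, C)

# Toy usage with an orthographic projection that drops the z coordinate.
B, C, H, W, N = 2, 16, 64, 64, 128
feat_map = torch.randn(B, C, H, W)               # encoder feature map
points = torch.rand(B, N, 3) * 2 - 1             # 3D query points in [-1, 1]^3
local_feats = pixel_aligned_features(feat_map, points, lambda p: p[..., :2])
print(local_feats.shape)                         # torch.Size([2, 128, 16])
```

In a full inversion pipeline, per-point local features of this kind would be fused with the global latent code before decoding, which is what lets fine texture detail survive the bottleneck of a single global code.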
