Conference Paper: LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation
Field | Value |
---|---|
Title | LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation |
Authors | Lan, Yushi; Hong, Fangzhou; Yang, Shuai; Zhou, Shangchen; Meng, Xuyi; Dai, Bo; Pan, Xingang; Loy, Chen Change |
Keywords | Generative Model; Latent Diffusion Model; Reconstruction |
Issue Date | 2025 |
Citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2025, v. 15062 LNCS, p. 112-130 |
Abstract | The field of neural rendering has witnessed significant progress with advancements in generative models and differentiable rendering techniques. Though 2D diffusion has achieved success, a unified 3D diffusion pipeline remains unsettled. This paper introduces a novel framework called LN3Diff to address this gap and enable fast, high-quality, and generic conditional 3D generation. Our approach harnesses a 3D-aware architecture and variational autoencoder (VAE) to encode the input image(s) into a structured, compact, and 3D latent space. The latent is decoded by a transformer-based decoder into a high-capacity 3D neural field. Through training a diffusion model on this 3D-aware latent space, our method achieves superior performance on Objaverse, ShapeNet and FFHQ for conditional 3D generation. Moreover, it surpasses existing 3D diffusion methods in terms of inference speed, requiring no per-instance optimization. Video demos can be found on our project webpage: https://nirvanalan.github.io/projects/ln3diff. |
Persistent Identifier | http://hdl.handle.net/10722/352479 |
ISSN | 0302-9743 (2023 SCImago Journal Rankings: 0.606) |
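The abstract above describes a three-stage design: a 3D-aware VAE encoder compresses the input image(s) into a compact latent, a transformer-based decoder expands that latent into a neural field, and a diffusion model is trained directly in the latent space. The sketch below illustrates this flow in PyTorch as a minimal reading of the abstract; every module name, latent shape, and noise schedule here is an illustrative assumption, not the paper's actual implementation (see the published text and project page for the real architecture).

```python
# Minimal sketch of an LN3Diff-style pipeline as described in the abstract.
# All module names, shapes, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class LatentVAEEncoder(nn.Module):
    """Hypothetical 3D-aware VAE encoder: image(s) -> compact latent (mu, logvar)."""
    def __init__(self, latent_dim: int = 768):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)

    def forward(self, images):
        h = self.backbone(images)
        return self.to_mu(h), self.to_logvar(h)

class TransformerFieldDecoder(nn.Module):
    """Hypothetical transformer decoder: latent -> neural-field features,
    queried at 3D points to produce density and color."""
    def __init__(self, latent_dim: int = 768, n_tokens: int = 64):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, latent_dim))
        layer = nn.TransformerDecoderLayer(latent_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.point_mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.SiLU(),
            nn.Linear(256, 4),  # per-point (density, r, g, b)
        )

    def forward(self, z, points):
        # z: (B, D) latent; points: (B, N, 3) 3D query coordinates.
        memory = z.unsqueeze(1)                                  # (B, 1, D)
        feats = self.decoder(self.tokens.expand(z.size(0), -1, -1), memory)
        cond = feats.mean(dim=1, keepdim=True).expand(-1, points.size(1), -1)
        return self.point_mlp(torch.cat([cond, points], dim=-1))  # (B, N, 4)

def latent_diffusion_loss(denoiser, z0):
    """Generic DDPM-style epsilon-prediction loss on the 3D-aware latent;
    `denoiser` is a hypothetical network taking (noisy latent, time)."""
    t = torch.rand(z0.size(0), device=z0.device)                 # time in [0, 1]
    noise = torch.randn_like(z0)
    alpha = torch.cos(0.5 * torch.pi * t).view(-1, 1)            # cosine schedule
    sigma = torch.sin(0.5 * torch.pi * t).view(-1, 1)
    zt = alpha * z0 + sigma * noise
    return nn.functional.mse_loss(denoiser(zt, t), noise)

if __name__ == "__main__":
    enc, dec = LatentVAEEncoder(), TransformerFieldDecoder()
    mu, logvar = enc(torch.randn(2, 3, 64, 64))
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)      # reparameterize
    field = dec(z, torch.rand(2, 1024, 3))                       # (2, 1024, 4)
```

Because sampling happens in this compact latent space and decoding is a single forward pass of the transformer, generation requires no per-instance optimization, which is consistent with the inference-speed claim in the abstract.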
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lan, Yushi | - |
dc.contributor.author | Hong, Fangzhou | - |
dc.contributor.author | Yang, Shuai | - |
dc.contributor.author | Zhou, Shangchen | - |
dc.contributor.author | Meng, Xuyi | - |
dc.contributor.author | Dai, Bo | - |
dc.contributor.author | Pan, Xingang | - |
dc.contributor.author | Loy, Chen Change | - |
dc.date.accessioned | 2024-12-16T03:59:20Z | - |
dc.date.available | 2024-12-16T03:59:20Z | - |
dc.date.issued | 2025 | - |
dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2025, v. 15062 LNCS, p. 112-130 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352479 | - |
dc.description.abstract | The field of neural rendering has witnessed significant progress with advancements in generative models and differentiable rendering techniques. Though 2D diffusion has achieved success, a unified 3D diffusion pipeline remains unsettled. This paper introduces a novel framework called LN3Diff to address this gap and enable fast, high-quality, and generic conditional 3D generation. Our approach harnesses a 3D-aware architecture and variational autoencoder (VAE) to encode the input image(s) into a structured, compact, and 3D latent space. The latent is decoded by a transformer-based decoder into a high-capacity 3D neural field. Through training a diffusion model on this 3D-aware latent space, our method achieves superior performance on Objaverse, ShapeNet and FFHQ for conditional 3D generation. Moreover, it surpasses existing 3D diffusion methods in terms of inference speed, requiring no per-instance optimization. Video demos can be found on our project webpage: https://nirvanalan.github.io/projects/ln3diff. | - |
dc.language | eng | - |
dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
dc.subject | Generative Model | - |
dc.subject | Latent Diffusion Model | - |
dc.subject | Reconstruction | - |
dc.title | LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/978-3-031-73235-5_7 | - |
dc.identifier.scopus | eid_2-s2.0-85206363881 | - |
dc.identifier.volume | 15062 LNCS | - |
dc.identifier.spage | 112 | - |
dc.identifier.epage | 130 | - |
dc.identifier.eissn | 1611-3349 | - |