Conference Paper: Controllable Mesh Generation Through Sparse Latent Point Diffusion Models

Title: Controllable Mesh Generation Through Sparse Latent Point Diffusion Models
Authors: Lyu, Zhaoyang; Wang, Jinyi; An, Yuwei; Zhang, Ya; Lin, Dahua; Dai, Bo
Keywords: 3D from multi-view and sensors
Issue Date: 2023
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2023, v. 2023-June, p. 271-280
Abstract: Mesh generation is of great value in various applications involving computer graphics and virtual content, yet designing generative models for meshes is challenging due to their irregular data structure and the inconsistent topology of meshes within the same category. In this work, we design a novel sparse latent point diffusion model for mesh generation. Our key insight is to regard point clouds as an intermediate representation of meshes and to model the distribution of point clouds instead. Since meshes can be generated from point clouds via techniques like Shape as Points (SAP), the challenges of directly generating meshes are effectively avoided. To boost the efficiency and controllability of our mesh generation method, we further encode point clouds into a set of sparse latent points with pointwise, semantically meaningful features, and train two DDPMs in the space of sparse latent points to model the distributions of the latent point positions and of the features at those points, respectively. We find that sampling in this latent space is faster than directly sampling dense point clouds. Moreover, the sparse latent points also enable us to explicitly control both the overall structure and the local details of the generated meshes. Extensive experiments are conducted on the ShapeNet dataset, where our proposed sparse latent point diffusion model achieves superior performance in generation quality and controllability compared to existing methods. Project page, code and appendix: https://slide-3d.github.io.
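The pipeline the abstract describes reduces to four steps: sample sparse latent point positions with one DDPM, sample pointwise features with a second DDPM conditioned on those positions, decode the sparse latent points to a dense point cloud, and reconstruct a mesh with SAP. The sketch below illustrates that flow; every object and method name in it (pos_ddpm, feat_ddpm, decoder, sap, and the dimensions) is a hypothetical stand-in, not the authors' actual API, which is linked from https://slide-3d.github.io.

```python
# Illustrative sketch of the two-stage latent sampling pipeline described
# in the abstract. All module/function names below are hypothetical
# stand-ins, not the authors' actual API.
import torch

def sample_mesh(pos_ddpm, feat_ddpm, decoder, sap,
                num_latent_points=16, feat_dim=64, device="cpu"):
    """Sample one mesh: latent positions -> latent features -> dense
    point cloud -> mesh."""
    # Stage 1: one DDPM models the distribution of sparse latent point
    # positions, i.e. num_latent_points xyz coordinates.
    pos_noise = torch.randn(1, num_latent_points, 3, device=device)
    latent_pos = pos_ddpm.sample(pos_noise)

    # Stage 2: a second DDPM models the pointwise features, conditioned
    # on the positions sampled in stage 1.
    feat_noise = torch.randn(1, num_latent_points, feat_dim, device=device)
    latent_feat = feat_ddpm.sample(feat_noise, cond=latent_pos)

    # Decode the sparse latent points back to a dense point cloud, then
    # reconstruct a mesh from it with Shape as Points (SAP).
    dense_points = decoder(latent_pos, latent_feat)
    return sap.reconstruct(dense_points)
```

Because the two DDPMs operate on only a handful of latent points rather than a dense cloud, sampling is cheaper, which matches the speed-up the abstract reports. The explicit control it claims plausibly follows from the same split: editing the sampled positions would alter overall structure, while editing the features at those points would alter local detail.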
Persistent Identifier: http://hdl.handle.net/10722/352379
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331

 

DC Field                    Value
dc.contributor.author       Lyu, Zhaoyang
dc.contributor.author       Wang, Jinyi
dc.contributor.author       An, Yuwei
dc.contributor.author       Zhang, Ya
dc.contributor.author       Lin, Dahua
dc.contributor.author       Dai, Bo
dc.date.accessioned         2024-12-16T03:58:34Z
dc.date.available           2024-12-16T03:58:34Z
dc.date.issued              2023
dc.identifier.citation      Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2023, v. 2023-June, p. 271-280
dc.identifier.issn          1063-6919
dc.identifier.uri           http://hdl.handle.net/10722/352379
dc.description.abstract     Mesh generation is of great value in various applications involving computer graphics and virtual content, yet designing generative models for meshes is challenging due to their irregular data structure and the inconsistent topology of meshes within the same category. In this work, we design a novel sparse latent point diffusion model for mesh generation. Our key insight is to regard point clouds as an intermediate representation of meshes and to model the distribution of point clouds instead. Since meshes can be generated from point clouds via techniques like Shape as Points (SAP), the challenges of directly generating meshes are effectively avoided. To boost the efficiency and controllability of our mesh generation method, we further encode point clouds into a set of sparse latent points with pointwise, semantically meaningful features, and train two DDPMs in the space of sparse latent points to model the distributions of the latent point positions and of the features at those points, respectively. We find that sampling in this latent space is faster than directly sampling dense point clouds. Moreover, the sparse latent points also enable us to explicitly control both the overall structure and the local details of the generated meshes. Extensive experiments are conducted on the ShapeNet dataset, where our proposed sparse latent point diffusion model achieves superior performance in generation quality and controllability compared to existing methods. Project page, code and appendix: https://slide-3d.github.io.
dc.language                 eng
dc.relation.ispartof        Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.subject                  3D from multi-view and sensors
dc.title                    Controllable Mesh Generation Through Sparse Latent Point Diffusion Models
dc.type                     Conference_Paper
dc.description.nature       link_to_subscribed_fulltext
dc.identifier.doi           10.1109/CVPR52729.2023.00034
dc.identifier.scopus        eid_2-s2.0-85168349794
dc.identifier.volume        2023-June
dc.identifier.spage         271
dc.identifier.epage         280
