Article: TEXGen: A Generative Diffusion Model for Mesh Textures

Title: TEXGen: A Generative Diffusion Model for Mesh Textures
Authors: Yu, Xin; Yuan, Ze; Guo, Yuan Chen; Liu, Ying Tian; Liu, Jianhui; Li, Yangguang; Cao, Yan Pei; Liang, Ding; Qi, Xiaojuan
Keywords: generative model; texture generation
Issue Date: 19-Nov-2024
Publisher: Association for Computing Machinery (ACM)
Citation: ACM Transactions on Graphics, 2024, v. 43, n. 6, p. 1-14
Abstract: While high-quality texture maps are essential for realistic 3D asset rendering, few studies have explored learning directly in the texture space, especially on large-scale datasets. In this work, we depart from the conventional approach of relying on pre-trained 2D diffusion models for test-time optimization of 3D textures. Instead, we focus on the fundamental problem of learning in the UV texture space itself. For the first time, we train a large diffusion model capable of directly generating high-resolution texture maps in a feed-forward manner. To facilitate efficient learning in high-resolution UV spaces, we propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds. Leveraging this architectural design, we train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images. Once trained, our model naturally supports various extended applications, including text-guided texture inpainting, sparse-view texture completion, and text-driven texture synthesis. The code is available at https://github.com/CVMI-Lab/TEXGen.
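The abstract's central architectural idea is a block that interleaves local convolution on the UV texture map with global attention on a sampled point cloud. The following is a minimal NumPy sketch of that pattern, not the authors' implementation: the unweighted 3x3 box filter (standing in for a learned convolution), the identity Q/K/V projections in the attention, and the `uv_idx` gather/scatter interface (flat UV indices of sampled surface points) are all simplifying assumptions.

```python
import numpy as np

def uv_conv(feat):
    """3x3 box filter over the UV map (H, W, C) -- a stand-in for a learned conv."""
    H, W, _ = feat.shape
    padded = np.pad(feat, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(feat)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    return out / 9.0

def point_attention(pts):
    """Single-head self-attention over points (N, C) with identity
    Q/K/V projections -- a deliberate simplification."""
    d = pts.shape[-1]
    scores = pts @ pts.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ pts

def hybrid_block(uv_feat, uv_idx):
    """One interleaved block: convolution captures local UV structure,
    then attention on gathered point features exchanges global information,
    and the result is scattered back to the points' UV locations."""
    H, W, C = uv_feat.shape
    uv_feat = uv_feat + uv_conv(uv_feat)           # local branch (residual)
    flat = uv_feat.reshape(H * W, C)
    pts = flat[uv_idx]                             # gather point features
    flat[uv_idx] = pts + point_attention(pts)      # global branch (residual)
    return flat.reshape(H, W, C)
```

In a real diffusion backbone such a block would be stacked repeatedly, with learned weights and the points sampled from the mesh surface; here it only illustrates how the two feature domains can alternate within one layer.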
Persistent Identifier: http://hdl.handle.net/10722/367157
ISSN: 0730-0301
2023 Impact Factor: 7.8
2023 SCImago Journal Rankings: 7.766

DC Field: Value
dc.contributor.author: Yu, Xin
dc.contributor.author: Yuan, Ze
dc.contributor.author: Guo, Yuan Chen
dc.contributor.author: Liu, Ying Tian
dc.contributor.author: Liu, Jianhui
dc.contributor.author: Li, Yangguang
dc.contributor.author: Cao, Yan Pei
dc.contributor.author: Liang, Ding
dc.contributor.author: Qi, Xiaojuan
dc.date.accessioned: 2025-12-05T00:45:19Z
dc.date.available: 2025-12-05T00:45:19Z
dc.date.issued: 2024-11-19
dc.identifier.citation: ACM Transactions on Graphics, 2024, v. 43, n. 6, p. 1-14
dc.identifier.issn: 0730-0301
dc.identifier.uri: http://hdl.handle.net/10722/367157
dc.description.abstract: While high-quality texture maps are essential for realistic 3D asset rendering, few studies have explored learning directly in the texture space, especially on large-scale datasets. In this work, we depart from the conventional approach of relying on pre-trained 2D diffusion models for test-time optimization of 3D textures. Instead, we focus on the fundamental problem of learning in the UV texture space itself. For the first time, we train a large diffusion model capable of directly generating high-resolution texture maps in a feed-forward manner. To facilitate efficient learning in high-resolution UV spaces, we propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds. Leveraging this architectural design, we train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images. Once trained, our model naturally supports various extended applications, including text-guided texture inpainting, sparse-view texture completion, and text-driven texture synthesis. The code is available at https://github.com/CVMI-Lab/TEXGen.
dc.language: eng
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.ispartof: ACM Transactions on Graphics
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: generative model
dc.subject: texture generation
dc.title: TEXGen: A Generative Diffusion Model for Mesh Textures
dc.type: Article
dc.identifier.doi: 10.1145/3687909
dc.identifier.scopus: eid_2-s2.0-85210081918
dc.identifier.volume: 43
dc.identifier.issue: 6
dc.identifier.spage: 1
dc.identifier.epage: 14
dc.identifier.eissn: 1557-7368
dc.identifier.issnl: 0730-0301
