Links for fulltext (may require subscription):
- Publisher Website: https://doi.org/10.1145/3687909
- Scopus: eid_2-s2.0-85210081918
Citations (Scopus): 0
Article: TEXGen: A Generative Diffusion Model for Mesh Textures
| Title | TEXGen: A Generative Diffusion Model for Mesh Textures |
|---|---|
| Authors | Yu, Xin; Yuan, Ze; Guo, Yuan-Chen; Liu, Ying-Tian; Liu, Jianhui; Li, Yangguang; Cao, Yan-Pei; Liang, Ding; Qi, Xiaojuan |
| Keywords | generative model; texture generation |
| Issue Date | 19-Nov-2024 |
| Publisher | Association for Computing Machinery (ACM) |
| Citation | ACM Transactions on Graphics, 2024, v. 43, n. 6, p. 1-14 |
| Abstract | While high-quality texture maps are essential for realistic 3D asset rendering, few studies have explored learning directly in the texture space, especially on large-scale datasets. In this work, we depart from the conventional approach of relying on pre-trained 2D diffusion models for test-time optimization of 3D textures. Instead, we focus on the fundamental problem of learning in the UV texture space itself. For the first time, we train a large diffusion model capable of directly generating high-resolution texture maps in a feed-forward manner. To facilitate efficient learning in high-resolution UV spaces, we propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds. Leveraging this architectural design, we train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images. Once trained, our model naturally supports various extended applications, including text-guided texture inpainting, sparse-view texture completion, and text-driven texture synthesis. The code is available at https://github.com/CVMI-Lab/TEXGen. |
| Persistent Identifier | http://hdl.handle.net/10722/367157 |
| ISSN | 0730-0301 (2023 Impact Factor: 7.8; 2023 SCImago Journal Rankings: 7.766) |
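The abstract describes a hybrid backbone that interleaves convolutions on UV texture maps with attention layers on point clouds: convolutions capture local structure within a UV chart, while attention over 3D surface points lets texels that are adjacent on the mesh (but far apart in UV space) exchange information. A minimal, self-contained sketch of that data flow, in plain Python with toy scalar features and hypothetical helper names (not the authors' implementation), might look like:

```python
import math

def uv_conv(grid):
    # 3x3 average convolution over a scalar UV feature grid (zero padding).
    # Stands in for the conv branch that models local structure in UV space.
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += grid[ii][jj]
            out[i][j] = acc / 9.0
    return out

def point_attention(feats, xyz):
    # Softmax attention over surface points; logits are negative squared
    # 3D distances, so points close on the mesh exchange information even
    # when their UV islands are far apart.
    out = []
    for i in range(len(feats)):
        logits = [-sum((a - b) ** 2 for a, b in zip(xyz[i], xyz[j]))
                  for j in range(len(feats))]
        m = max(logits)
        weights = [math.exp(l - m) for l in logits]
        z = sum(weights)
        out.append(sum(wk * fk for wk, fk in zip(weights, feats)) / z)
    return out

def hybrid_block(grid, xyz):
    # One interleaved block: local UV convolution, then global attention
    # over the flattened texels (as points), then back onto the UV grid.
    h, w = len(grid), len(grid[0])
    g = uv_conv(grid)
    flat = [g[i][j] for i in range(h) for j in range(w)]
    mixed = point_attention(flat, xyz)
    return [[mixed[i * w + j] for j in range(w)] for i in range(h)]
```

In the real model, `xyz` would come from rasterizing mesh surface positions into the UV map, and the features would be high-dimensional; this toy version only conveys why alternating local UV operations with global point-cloud attention is a plausible way to make learning tractable at high UV resolutions.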
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yu, Xin | - |
| dc.contributor.author | Yuan, Ze | - |
| dc.contributor.author | Guo, Yuan-Chen | - |
| dc.contributor.author | Liu, Ying-Tian | - |
| dc.contributor.author | Liu, Jianhui | - |
| dc.contributor.author | Li, Yangguang | - |
| dc.contributor.author | Cao, Yan-Pei | - |
| dc.contributor.author | Liang, Ding | - |
| dc.contributor.author | Qi, Xiaojuan | - |
| dc.date.accessioned | 2025-12-05T00:45:19Z | - |
| dc.date.available | 2025-12-05T00:45:19Z | - |
| dc.date.issued | 2024-11-19 | - |
| dc.identifier.citation | ACM Transactions on Graphics, 2024, v. 43, n. 6, p. 1-14 | - |
| dc.identifier.issn | 0730-0301 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/367157 | - |
| dc.description.abstract | While high-quality texture maps are essential for realistic 3D asset rendering, few studies have explored learning directly in the texture space, especially on large-scale datasets. In this work, we depart from the conventional approach of relying on pre-trained 2D diffusion models for test-time optimization of 3D textures. Instead, we focus on the fundamental problem of learning in the UV texture space itself. For the first time, we train a large diffusion model capable of directly generating high-resolution texture maps in a feed-forward manner. To facilitate efficient learning in high-resolution UV spaces, we propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds. Leveraging this architectural design, we train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images. Once trained, our model naturally supports various extended applications, including text-guided texture inpainting, sparse-view texture completion, and text-driven texture synthesis. The code is available at https://github.com/CVMI-Lab/TEXGen. | - |
| dc.language | eng | - |
| dc.publisher | Association for Computing Machinery (ACM) | - |
| dc.relation.ispartof | ACM Transactions on Graphics | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | generative model | - |
| dc.subject | texture generation | - |
| dc.title | TEXGen: A Generative Diffusion Model for Mesh Textures | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1145/3687909 | - |
| dc.identifier.scopus | eid_2-s2.0-85210081918 | - |
| dc.identifier.volume | 43 | - |
| dc.identifier.issue | 6 | - |
| dc.identifier.spage | 1 | - |
| dc.identifier.epage | 14 | - |
| dc.identifier.eissn | 1557-7368 | - |
| dc.identifier.issnl | 0730-0301 | - |
