
Conference Paper: ArtiFade: Learning to Generate High-quality Subject from Blemished Image

Title: ArtiFade: Learning to Generate High-quality Subject from Blemished Image
Authors: Yang, Shuya; Hao, Shaozhe; Cao, Yukang; Wong, Kwan-Yee K.
Issue Date: 11-Jun-2025
Abstract

Subject-driven text-to-image generation has demonstrated remarkable advancements in its ability to learn and capture characteristics of a subject using only a limited number of images. However, existing methods commonly rely on high-quality images for training and often struggle to generate reasonable images when the input images are blemished by artifacts. This is primarily attributed to the inadequate capability of current techniques in distinguishing subject-related features from disruptive artifacts. In this paper, we introduce ArtiFade to tackle this issue and successfully generate high-quality artifact-free images from blemished datasets. Specifically, ArtiFade exploits fine-tuning of a pre-trained text-to-image model, aiming to remove artifacts. The elimination of artifacts is achieved by utilizing a specialized dataset that encompasses both unblemished images and their corresponding blemished counterparts during fine-tuning. ArtiFade also ensures the preservation of the original generative capabilities inherent within the diffusion model, thereby enhancing the overall performance of subject-driven methods in generating high-quality and artifact-free images. We further devise evaluation benchmarks tailored for this task. Through extensive qualitative and quantitative experiments, we demonstrate the generalizability of ArtiFade in effective artifact removal under both in-distribution and out-of-distribution scenarios.
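The abstract describes the training recipe only at a high level: fine-tune a pre-trained text-to-image diffusion model on a dataset that pairs blemished images with their unblemished counterparts, so that subject conditioning learned from blemished inputs is supervised against clean targets. Since the paper itself is not reproduced in this record, the sketch below is just one plausible reading of that recipe, not the authors' implementation. It assumes Stable Diffusion weights loaded through the Hugging Face diffusers library; the data loader load_blemished_clean_pairs, the base checkpoint, and the pairing convention are hypothetical placeholders.

    import torch
    import torch.nn.functional as F
    from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
    from transformers import CLIPTextModel, CLIPTokenizer

    model_id = "runwayml/stable-diffusion-v1-5"  # assumed base checkpoint
    tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
    text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
    vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
    unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
    scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

    # Only the denoising UNet is updated; the VAE and text encoder stay frozen,
    # one simple way to preserve the model's original generative capabilities.
    vae.requires_grad_(False)
    text_encoder.requires_grad_(False)
    optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

    # Hypothetical loader: yields (clean_images, prompts), where each prompt
    # contains a subject token learned beforehand from the *blemished*
    # counterparts (e.g., via a subject-driven method such as Textual Inversion).
    for clean_images, prompts in load_blemished_clean_pairs():
        with torch.no_grad():
            # Reconstruction targets come from the unblemished images.
            latents = vae.encode(clean_images).latent_dist.sample()
            latents = latents * vae.config.scaling_factor
            tokens = tokenizer(prompts, padding="max_length", truncation=True,
                               max_length=tokenizer.model_max_length,
                               return_tensors="pt")
            text_embeds = text_encoder(tokens.input_ids)[0]

        # Standard epsilon-prediction objective on the clean latents.
        noise = torch.randn_like(latents)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (latents.shape[0],), device=latents.device)
        noisy_latents = scheduler.add_noise(latents, noise, t)
        noise_pred = unet(noisy_latents, t, encoder_hidden_states=text_embeds).sample

        loss = F.mse_loss(noise_pred, noise)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

Supervising against unblemished latents while the subject representation is derived from blemished inputs is what would push the model to separate subject-related features from artifacts, the failure mode the abstract attributes to existing methods.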


Persistent Identifier: http://hdl.handle.net/10722/359554

 

DC Field                  Value                                                                                     Language
dc.contributor.author     Yang, Shuya                                                                               -
dc.contributor.author     Hao, Shaozhe                                                                              -
dc.contributor.author     Cao, Yukang                                                                               -
dc.contributor.author     Wong, Kwan-Yee K.                                                                         -
dc.date.accessioned       2025-09-07T00:31:03Z                                                                      -
dc.date.available         2025-09-07T00:31:03Z                                                                      -
dc.date.issued            2025-06-11                                                                                -
dc.identifier.uri         http://hdl.handle.net/10722/359554                                                        -
dc.description.abstract   Subject-driven text-to-image generation has demonstrated remarkable advancements in
                          its ability to learn and capture characteristics of a subject using only a limited
                          number of images. However, existing methods commonly rely on high-quality images for
                          training and often struggle to generate reasonable images when the input images are
                          blemished by artifacts. This is primarily attributed to the inadequate capability of
                          current techniques in distinguishing subject-related features from disruptive
                          artifacts. In this paper, we introduce ArtiFade to tackle this issue and successfully
                          generate high-quality artifact-free images from blemished datasets. Specifically,
                          ArtiFade exploits fine-tuning of a pre-trained text-to-image model, aiming to remove
                          artifacts. The elimination of artifacts is achieved by utilizing a specialized dataset
                          that encompasses both unblemished images and their corresponding blemished
                          counterparts during fine-tuning. ArtiFade also ensures the preservation of the
                          original generative capabilities inherent within the diffusion model, thereby
                          enhancing the overall performance of subject-driven methods in generating
                          high-quality and artifact-free images. We further devise evaluation benchmarks
                          tailored for this task. Through extensive qualitative and quantitative experiments,
                          we demonstrate the generalizability of ArtiFade in effective artifact removal under
                          both in-distribution and out-of-distribution scenarios.                                   -
dc.language               eng                                                                                       -
dc.relation.ispartof      Computer Vision and Pattern Recognition (CVPR) 2025 (11/06/2025-15/06/2025, Nashville)    -
dc.title                  ArtiFade: Learning to Generate High-quality Subject from Blemished Image                  -
dc.type                   Conference_Paper                                                                          -
dc.identifier.spage       13167                                                                                     -
dc.identifier.epage       13177                                                                                     -
