Article: Diffusion Model is Secretly a Training-Free Open Vocabulary Semantic Segmenter

Title: Diffusion Model is Secretly a Training-Free Open Vocabulary Semantic Segmenter
Authors: Wang, Jinglong; Li, Xiawei; Zhang, Jing; Xu, Qingyuan; Zhou, Qin; Yu, Qian; Sheng, Lu; Xu, Dong
Keywords: open-vocabulary; semantic segmentation; Stable diffusion
Issue Date: 1-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Image Processing, 2025, v. 34, p. 1895-1907
Abstract

Pre-trained text-image discriminative models, such as CLIP, have been explored for open-vocabulary semantic segmentation with unsatisfactory results due to the loss of crucial localization information and awareness of object shapes. Recently, there has been a growing interest in expanding the application of generative models from generation tasks to semantic segmentation. These approaches utilize generative models either for generating annotated data or for extracting features to facilitate semantic segmentation, which typically involves generating a considerable amount of synthetic data or requiring additional mask annotations. To this end, we uncover the potential of generative text-to-image diffusion models (e.g., Stable Diffusion) as highly efficient open-vocabulary semantic segmenters, and introduce a novel training-free approach named DiffSegmenter. The insight is that, to generate realistic objects that are semantically faithful to the input text, diffusion models implicitly learn both the complete object shapes and the corresponding semantics. We discover that the object shapes are characterized by the self-attention maps while the semantics are indicated through the cross-attention maps produced by the denoising U-Net, forming the basis of our segmentation results. Additionally, we carefully design effective textual prompts and a category filtering mechanism to further enhance the segmentation results. Extensive experiments on three benchmark datasets show that the proposed DiffSegmenter achieves impressive results for open-vocabulary semantic segmentation.
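The abstract describes combining two attention signals from the denoising U-Net: cross-attention maps (per-pixel relevance to a text token, i.e. semantics) and self-attention maps (pixel-to-pixel affinity, i.e. object shape). The following is a minimal toy sketch of that combination idea, not the paper's actual implementation: the attention maps here are random placeholders, and the simple matrix-product refinement and 0.5 threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 8          # toy spatial resolution of the attention maps
n = h * w          # number of spatial positions

# Toy cross-attention: relevance of each pixel to one class token (semantics).
cross_attn = rng.random(n)

# Toy self-attention: row-stochastic pixel-to-pixel affinity matrix (shape).
self_attn = rng.random((n, n))
self_attn /= self_attn.sum(axis=1, keepdims=True)

# Shape-aware refinement: propagate token relevance along pixel affinities,
# so pixels belonging to the same object receive similar scores.
refined = self_attn @ cross_attn

# Min-max normalize to [0, 1] and threshold into a binary segmentation mask.
refined = (refined - refined.min()) / (refined.max() - refined.min())
mask = (refined.reshape(h, w) > 0.5).astype(np.uint8)
```

In a real setup these maps would be read out of the U-Net's attention layers during a denoising pass on the input image, and the paper additionally applies prompt design and category filtering, which this sketch omits.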


Persistent Identifier: http://hdl.handle.net/10722/361934
ISSN: 1057-7149
2023 Impact Factor: 10.8
2023 SCImago Journal Rankings: 3.556


DC Field | Value | Language
dc.contributor.author | Wang, Jinglong | -
dc.contributor.author | Li, Xiawei | -
dc.contributor.author | Zhang, Jing | -
dc.contributor.author | Xu, Qingyuan | -
dc.contributor.author | Zhou, Qin | -
dc.contributor.author | Yu, Qian | -
dc.contributor.author | Sheng, Lu | -
dc.contributor.author | Xu, Dong | -
dc.date.accessioned | 2025-09-17T00:32:09Z | -
dc.date.available | 2025-09-17T00:32:09Z | -
dc.date.issued | 2025-01-01 | -
dc.identifier.citation | IEEE Transactions on Image Processing, 2025, v. 34, p. 1895-1907 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10722/361934 | -
dc.description.abstract | <p>Pre-trained text-image discriminative models, such as CLIP, have been explored for open-vocabulary semantic segmentation with unsatisfactory results due to the loss of crucial localization information and awareness of object shapes. Recently, there has been a growing interest in expanding the application of generative models from generation tasks to semantic segmentation. These approaches utilize generative models either for generating annotated data or for extracting features to facilitate semantic segmentation, which typically involves generating a considerable amount of synthetic data or requiring additional mask annotations. To this end, we uncover the potential of generative text-to-image diffusion models (e.g., Stable Diffusion) as highly efficient open-vocabulary semantic segmenters, and introduce a novel training-free approach named DiffSegmenter. The insight is that, to generate realistic objects that are semantically faithful to the input text, diffusion models implicitly learn both the complete object shapes and the corresponding semantics. We discover that the object shapes are characterized by the self-attention maps while the semantics are indicated through the cross-attention maps produced by the denoising U-Net, forming the basis of our segmentation results. Additionally, we carefully design effective textual prompts and a category filtering mechanism to further enhance the segmentation results. Extensive experiments on three benchmark datasets show that the proposed DiffSegmenter achieves impressive results for open-vocabulary semantic segmentation.</p> | -
dc.language | eng | -
dc.publisher | Institute of Electrical and Electronics Engineers | -
dc.relation.ispartof | IEEE Transactions on Image Processing | -
dc.subject | open-vocabulary | -
dc.subject | semantic segmentation | -
dc.subject | Stable diffusion | -
dc.title | Diffusion Model is Secretly a Training-Free Open Vocabulary Semantic Segmenter | -
dc.type | Article | -
dc.identifier.doi | 10.1109/TIP.2025.3551648 | -
dc.identifier.pmid | 40126966 | -
dc.identifier.scopus | eid_2-s2.0-105002269723 | -
dc.identifier.volume | 34 | -
dc.identifier.spage | 1895 | -
dc.identifier.epage | 1907 | -
dc.identifier.eissn | 1941-0042 | -
dc.identifier.issnl | 1057-7149 | -
