
Conference Paper: ConceptExpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction

Title: ConceptExpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction
Authors: Hao, Shaozhe; Han, Kai; Lv, Zhengyao; Zhao, Shihao; Wong, Kwan-Yee K.
Issue Date: 29-Sep-2024
Abstract

While personalized text-to-image generation has enabled the learning of a single concept from multiple images, a more practical yet challenging scenario involves learning multiple concepts within a single image. However, existing works tackling this scenario heavily rely on extensive human annotations. In this paper, we introduce a novel task named Unsupervised Concept Extraction (UCE) that considers an unsupervised setting without any human knowledge of the concepts. Given an image that contains multiple concepts, the task aims to extract and recreate individual concepts solely relying on the existing knowledge from pretrained diffusion models. To achieve this, we present ConceptExpress that tackles UCE by unleashing the inherent capabilities of pretrained diffusion models in two aspects. Specifically, a concept localization approach automatically locates and disentangles salient concepts by leveraging spatial correspondence from diffusion self-attention; and based on the lookup association between a concept and a conceptual token, a concept-wise optimization process learns discriminative tokens that represent each individual concept. Finally, we establish an evaluation protocol tailored for the UCE task. Extensive experiments demonstrate that ConceptExpress is a promising solution to the UCE task. Our code and data are available at: https://github.com/haoosz/ConceptExpress
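The concept-localization step described in the abstract (disentangling salient concepts by clustering spatial correspondence from diffusion self-attention) can be pictured with the minimal sketch below. This is not the authors' implementation (see the linked GitHub repository for that): the attention tensor, its shape, and the fixed number of clusters are assumptions for illustration only. In the full method, the resulting masks would then drive a concept-wise, textual-inversion-style optimization that learns one discriminative conceptual token per discovered concept.

```python
# Minimal sketch (assumed setup, not the authors' code): cluster spatial positions
# of a diffusion self-attention map to obtain per-concept masks. In practice the
# attention map would be hooked out of a pretrained Stable Diffusion UNet; here a
# random tensor stands in for it.
import torch
from sklearn.cluster import KMeans

def localize_concepts(self_attn: torch.Tensor, num_concepts: int) -> torch.Tensor:
    """Cluster spatial positions by their self-attention rows.

    self_attn: (HW, HW) self-attention map, e.g. averaged over heads and layers.
    Returns a boolean mask of shape (num_concepts, HW), one row per concept.
    """
    # Positions belonging to the same salient concept tend to attend to the same
    # regions, so their attention rows are similar and fall into the same cluster.
    labels = KMeans(n_clusters=num_concepts, n_init=10, random_state=0) \
        .fit_predict(self_attn.detach().cpu().numpy())
    masks = torch.zeros(num_concepts, self_attn.shape[0], dtype=torch.bool)
    for k in range(num_concepts):
        masks[k] = torch.from_numpy(labels == k)
    return masks

# Toy usage with a random "attention map" (hypothetical stand-in for UNet attention).
hw = 32 * 32
attn = torch.rand(hw, hw).softmax(dim=-1)
masks = localize_concepts(attn, num_concepts=3)
print([int(m.sum()) for m in masks])  # spatial positions assigned to each concept
```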


Persistent Identifier: http://hdl.handle.net/10722/354538

 

DC Field / Value
dc.contributor.author: Hao, Shaozhe
dc.contributor.author: Han, Kai
dc.contributor.author: Lv, Zhengyao
dc.contributor.author: Zhao, Shihao
dc.contributor.author: Wong, Kwan-Yee K.
dc.date.accessioned: 2025-02-13T00:35:11Z
dc.date.available: 2025-02-13T00:35:11Z
dc.date.issued: 2024-09-29
dc.identifier.uri: http://hdl.handle.net/10722/354538
dc.description.abstract: While personalized text-to-image generation has enabled the learning of a single concept from multiple images, a more practical yet challenging scenario involves learning multiple concepts within a single image. However, existing works tackling this scenario heavily rely on extensive human annotations. In this paper, we introduce a novel task named Unsupervised Concept Extraction (UCE) that considers an unsupervised setting without any human knowledge of the concepts. Given an image that contains multiple concepts, the task aims to extract and recreate individual concepts solely relying on the existing knowledge from pretrained diffusion models. To achieve this, we present ConceptExpress that tackles UCE by unleashing the inherent capabilities of pretrained diffusion models in two aspects. Specifically, a concept localization approach automatically locates and disentangles salient concepts by leveraging spatial correspondence from diffusion self-attention; and based on the lookup association between a concept and a conceptual token, a concept-wise optimization process learns discriminative tokens that represent each individual concept. Finally, we establish an evaluation protocol tailored for the UCE task. Extensive experiments demonstrate that ConceptExpress is a promising solution to the UCE task. Our code and data are available at: https://github.com/haoosz/ConceptExpress
dc.language: eng
dc.relation.ispartof: The 18th European Conference on Computer Vision - ECCV 2024 (29/09/2024-04/10/2024, Milan)
dc.title: ConceptExpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction
dc.type: Conference_Paper
dc.identifier.volume: 59
dc.identifier.spage: 215
dc.identifier.epage: 233
