Conference Paper: PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis
Title | PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis |
---|---|
Authors | Lv, Zhengyao; Wei, Yuxiang; Zuo, Wangmeng; Wong, Kwan-Yee K |
Issue Date | 17-Jun-2024 |
Abstract | Recent advancements in large-scale pre-trained text-to-image models have led to remarkable progress in semantic image synthesis. Nevertheless, synthesizing high-quality images with consistent semantics and layout remains a challenge. In this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE) that harnesses pre-trained models to alleviate the aforementioned issues. Specifically, we first employ the layout control map to faithfully represent layouts in the feature space. Subsequently, we combine the layout and semantic features in a timestep-adaptive manner to synthesize images with realistic details. During fine-tuning, we propose the Semantic Alignment (SA) loss to further enhance layout alignment. Additionally, we introduce the Layout-Free Prior Preservation (LFP) loss, which leverages unlabeled data to maintain the priors of pre-trained models, thereby improving the visual quality and semantic consistency of synthesized images. Extensive experiments demonstrate that our approach performs favorably in terms of visual quality, semantic consistency, and layout alignment. The source code and model are available at https://github.com/cszy98/PLACE. |
Persistent Identifier | http://hdl.handle.net/10722/345750 |
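As a supplement to the abstract above, the snippet below is a minimal, purely illustrative sketch of what a timestep-adaptive fusion of layout and semantic features could look like. All names, shapes, and the weighting scheme (`ToyAdaptiveFusion`, `to_weight`, the sigmoid mixing) are assumptions made here for illustration only; the authors' actual PLACE module is defined in the paper and the linked repository (https://github.com/cszy98/PLACE).

```python
# Illustrative sketch only: a hypothetical timestep-adaptive fusion of layout
# and semantic features, loosely inspired by the abstract's description.
# It does NOT reproduce the authors' PLACE implementation.
import torch
import torch.nn as nn


class ToyAdaptiveFusion(nn.Module):
    """Blend layout and semantic features with a timestep-dependent weight."""

    def __init__(self, channels: int, max_timestep: int = 1000):
        super().__init__()
        self.max_timestep = max_timestep
        # Map the normalized timestep to a per-channel mixing weight in [0, 1].
        self.to_weight = nn.Sequential(
            nn.Linear(1, channels),
            nn.Sigmoid(),
        )

    def forward(self, layout_feat: torch.Tensor,
                semantic_feat: torch.Tensor,
                timestep: torch.Tensor) -> torch.Tensor:
        # layout_feat, semantic_feat: (B, C, H, W); timestep: (B,)
        t = timestep.float().unsqueeze(-1) / self.max_timestep   # (B, 1)
        w = self.to_weight(t).unsqueeze(-1).unsqueeze(-1)        # (B, C, 1, 1)
        # The learned weight decides, per channel and per timestep, how much
        # the fused feature leans on layout structure vs. semantic detail.
        return w * layout_feat + (1.0 - w) * semantic_feat


if __name__ == "__main__":
    fuse = ToyAdaptiveFusion(channels=64)
    layout = torch.randn(2, 64, 32, 32)
    semantic = torch.randn(2, 64, 32, 32)
    t = torch.randint(0, 1000, (2,))
    print(fuse(layout, semantic, t).shape)  # torch.Size([2, 64, 32, 32])
```

The only point this toy example tries to convey is that the mixing weight depends on the diffusion timestep, so the balance between layout structure and semantic detail can shift across the denoising process, as described in the abstract.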
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lv, Zhengyao | - |
dc.contributor.author | Wei, Yuxiang | - |
dc.contributor.author | Zuo, Wangmeng | - |
dc.contributor.author | Wong, Kwan-Yee K | - |
dc.date.accessioned | 2024-08-27T09:10:56Z | - |
dc.date.available | 2024-08-27T09:10:56Z | - |
dc.date.issued | 2024-06-17 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345750 | - |
dc.description.abstract | Recent advancements in large-scale pre-trained text-to-image models have led to remarkable progress in semantic image synthesis. Nevertheless, synthesizing high-quality images with consistent semantics and layout remains a challenge. In this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE) that harnesses pre-trained models to alleviate the aforementioned issues. Specifically, we first employ the layout control map to faithfully represent layouts in the feature space. Subsequently, we combine the layout and semantic features in a timestep-adaptive manner to synthesize images with realistic details. During fine-tuning, we propose the Semantic Alignment (SA) loss to further enhance layout alignment. Additionally, we introduce the Layout-Free Prior Preservation (LFP) loss, which leverages unlabeled data to maintain the priors of pre-trained models, thereby improving the visual quality and semantic consistency of synthesized images. Extensive experiments demonstrate that our approach performs favorably in terms of visual quality, semantic consistency, and layout alignment. The source code and model are available at https://github.com/cszy98/PLACE. | - |
dc.language | eng | - |
dc.relation.ispartof | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2024-21/06/2024) | - |
dc.title | PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis | - |
dc.type | Conference_Paper | - |
dc.identifier.spage | 9264 | - |
dc.identifier.epage | 9274 | - |