Conference Paper: TransEditor: Transformer-Based Dual-Space GAN for Highly Controllable Facial Editing
Field | Value |
---|---|
Title | TransEditor: Transformer-Based Dual-Space GAN for Highly Controllable Facial Editing |
Authors | Xu, Yanbo; Yin, Yueqin; Jiang, Liming; Wu, Qianyi; Zheng, Chengyao; Loy, Chen Change; Dai, Bo; Wu, Wayne |
Keywords | Deep learning architectures and techniques; Face and gestures; Image and video synthesis and generation |
Issue Date | 2022 |
Citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 7673-7682 |
Abstract | Recent advances like StyleGAN have promoted the growth of controllable facial editing. To address its core challenge of attribute decoupling in a single latent space, attempts have been made to adopt dual-space GANs for better disentanglement of style and content representations. Nonetheless, these methods still struggle to produce plausible editing results with high controllability, especially for complicated attributes. In this study, we highlight the importance of interaction in a dual-space GAN for more controllable editing. We propose TransEditor, a novel Transformer-based framework to enhance such interaction. In addition, we develop a new dual-space editing and inversion strategy to provide additional editing flexibility. Extensive experiments demonstrate the superiority of the proposed framework in image quality and editing capability, suggesting the effectiveness of TransEditor for highly controllable facial editing. Code and models are publicly available at https://github.com/BillyXYB/TransEditor. |
Persistent Identifier | http://hdl.handle.net/10722/352318 |
ISSN | 1063-6919 |
2023 SCImago Journal Rankings | 10.331 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xu, Yanbo | - |
dc.contributor.author | Yin, Yueqin | - |
dc.contributor.author | Jiang, Liming | - |
dc.contributor.author | Wu, Qianyi | - |
dc.contributor.author | Zheng, Chengyao | - |
dc.contributor.author | Loy, Chen Change | - |
dc.contributor.author | Dai, Bo | - |
dc.contributor.author | Wu, Wayne | - |
dc.date.accessioned | 2024-12-16T03:58:13Z | - |
dc.date.available | 2024-12-16T03:58:13Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 7673-7682 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352318 | - |
dc.description.abstract | Recent advances like StyleGAN have promoted the growth of controllable facial editing. To address its core challenge of attribute decoupling in a single latent space, attempts have been made to adopt dual-space GANs for better disentanglement of style and content representations. Nonetheless, these methods still struggle to produce plausible editing results with high controllability, especially for complicated attributes. In this study, we highlight the importance of interaction in a dual-space GAN for more controllable editing. We propose TransEditor, a novel Transformer-based framework to enhance such interaction. In addition, we develop a new dual-space editing and inversion strategy to provide additional editing flexibility. Extensive experiments demonstrate the superiority of the proposed framework in image quality and editing capability, suggesting the effectiveness of TransEditor for highly controllable facial editing. Code and models are publicly available at https://github.com/BillyXYB/TransEditor. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | - |
dc.subject | Deep learning architectures and techniques | - |
dc.subject | Face and gestures | - |
dc.subject | Image and video synthesis and generation | - |
dc.title | TransEditor: Transformer-Based Dual-Space GAN for Highly Controllable Facial Editing | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR52688.2022.00753 | - |
dc.identifier.scopus | eid_2-s2.0-85140197551 | - |
dc.identifier.volume | 2022-June | - |
dc.identifier.spage | 7673 | - |
dc.identifier.epage | 7682 | - |