
Conference Paper: Disentangled Cycle Consistency for Highly-Realistic Virtual Try-On

Title: Disentangled Cycle Consistency for Highly-Realistic Virtual Try-On
Authors: Ge, C; Song, Y; Ge, Y; Yang, H; Liu, W; Luo, P
Issue Date: 2021
Citation: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Conference, 19-25 June 2021, p. 16928-16937
Abstract: Image virtual try-on replaces the clothes on a person image with a desired in-shop clothes image. It is challenging because the person and the in-shop clothes are unpaired. Existing methods formulate virtual try-on as either in-painting or cycle consistency. Both formulations encourage the generation networks to reconstruct the input image in a self-supervised manner. However, existing methods do not differentiate clothing and non-clothing regions. A straightforward generation impedes the virtual try-on quality because of the heavily coupled image contents. In this paper, we propose a Disentangled Cycle-consistency Try-On Network (DCTON). DCTON is able to produce highly-realistic try-on images by disentangling important components of virtual try-on, including clothes warping, skin synthesis, and image composition. Moreover, DCTON can be naturally trained in a self-supervised manner following cycle consistency learning. Extensive experiments on challenging benchmarks show that DCTON outperforms state-of-the-art approaches favorably.
Description: Paper Session Twelve: Paper ID 6703
Persistent Identifier: http://hdl.handle.net/10722/301432
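
The abstract describes training a try-on generator with cycle-consistency self-supervision. The sketch below is only an illustration of that generic idea, not the authors' DCTON implementation: `TryOnGenerator` is a hypothetical placeholder standing in for the paper's disentangled modules (clothes warping, skin synthesis, image composition), and the loss simply translates a person image to target clothes, translates back to the original clothes, and penalises the difference from the input.

```python
# Illustrative sketch of cycle-consistency self-supervision for virtual try-on.
# NOT the DCTON architecture; TryOnGenerator is a hypothetical stand-in.
import torch
import torch.nn as nn

class TryOnGenerator(nn.Module):
    """Placeholder generator: (person image, clothes image) -> try-on image."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, person, clothes):
        return self.net(torch.cat([person, clothes], dim=1))

def cycle_consistency_loss(G, person, clothes_orig, clothes_target):
    """Forward pass to the target clothes, backward pass to the original
    clothes, then an L1 penalty against the input image."""
    try_on = G(person, clothes_target)        # person wearing target clothes
    reconstructed = G(try_on, clothes_orig)   # map back to original clothes
    return nn.functional.l1_loss(reconstructed, person)

# Toy usage with random tensors standing in for image batches.
G = TryOnGenerator()
person = torch.rand(2, 3, 64, 64)
clothes_a = torch.rand(2, 3, 64, 64)   # clothes currently worn
clothes_b = torch.rand(2, 3, 64, 64)   # desired in-shop clothes
loss = cycle_consistency_loss(G, person, clothes_a, clothes_b)
loss.backward()
```

Because the reconstruction target is the input image itself, no paired (person, in-shop clothes) ground truth is needed, which is the point of the self-supervised formulation summarised in the abstract.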

 

DC Field / Value
dc.contributor.author: Ge, C
dc.contributor.author: Song, Y
dc.contributor.author: Ge, Y
dc.contributor.author: Yang, H
dc.contributor.author: Liu, W
dc.contributor.author: Luo, P
dc.date.accessioned: 2021-07-27T08:10:59Z
dc.date.available: 2021-07-27T08:10:59Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Conference, 19-25 June 2021, p. 16928-16937
dc.identifier.uri: http://hdl.handle.net/10722/301432
dc.description: Paper Session Twelve: Paper ID 6703
dc.description.abstract: Image virtual try-on replaces the clothes on a person image with a desired in-shop clothes image. It is challenging because the person and the in-shop clothes are unpaired. Existing methods formulate virtual try-on as either in-painting or cycle consistency. Both formulations encourage the generation networks to reconstruct the input image in a self-supervised manner. However, existing methods do not differentiate clothing and non-clothing regions. A straightforward generation impedes the virtual try-on quality because of the heavily coupled image contents. In this paper, we propose a Disentangled Cycle-consistency Try-On Network (DCTON). DCTON is able to produce highly-realistic try-on images by disentangling important components of virtual try-on, including clothes warping, skin synthesis, and image composition. Moreover, DCTON can be naturally trained in a self-supervised manner following cycle consistency learning. Extensive experiments on challenging benchmarks show that DCTON outperforms state-of-the-art approaches favorably.
dc.language: eng
dc.relation.ispartof: IEEE Computer Vision and Pattern Recognition (CVPR) Proceedings
dc.title: Disentangled Cycle Consistency for Highly-Realistic Virtual Try-On
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.identifier.hkuros: 323758
dc.identifier.spage: 16928
dc.identifier.epage: 16937
