Conference Paper: Disentangled Cycle Consistency for Highly-Realistic Virtual Try-On
Title | Disentangled Cycle Consistency for Highly-Realistic Virtual Try-On |
---|---|
Authors | Ge, C; Song, Y; Ge, Y; Yang, H; Liu, W; Luo, P |
Issue Date | 2021 |
Citation | Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Conference, 19-25 June 2021, p. 16928-16937 |
Abstract | Image virtual try-on replaces the clothes on a person image with a desired in-shop clothes image. It is challenging because the person and the in-shop clothes are unpaired. Existing methods formulate virtual try-on as either in-painting or cycle consistency. Both of these formulations encourage the generation networks to reconstruct the input image in a self-supervised manner. However, existing methods do not differentiate clothing and non-clothing regions. A straightforward generation impedes the virtual try-on quality because of the heavily coupled image contents. In this paper, we propose a Disentangled Cycle-consistency Try-On Network (DCTON). DCTON is able to produce highly-realistic try-on images by disentangling important components of virtual try-on, including clothes warping, skin synthesis, and image composition. Moreover, DCTON can be naturally trained in a self-supervised manner following cycle-consistency learning. Extensive experiments on challenging benchmarks show that DCTON outperforms state-of-the-art approaches favorably. |
Description | Paper Session Twelve: Paper ID 6703 |
Persistent Identifier | http://hdl.handle.net/10722/301432 |
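The self-supervised cycle-consistency objective mentioned in the abstract can be illustrated with a minimal sketch. The `try_on` and `try_off` generators below are hypothetical stand-ins for the paper's networks, not DCTON's actual implementation: the idea is only that translating a person image to the target clothing and back should reconstruct the input.

```python
import numpy as np

def cycle_consistency_loss(x, try_on, try_off):
    """L1 cycle loss: map image x forward through try_on,
    back through try_off, and penalize reconstruction error."""
    x_hat = try_off(try_on(x))
    return float(np.abs(x_hat - x).mean())

# Toy "generators": identity mappings stand in for the real
# networks, so the reconstruction is perfect and the loss is 0.
x = np.random.rand(3, 64, 64)
loss = cycle_consistency_loss(x, lambda im: im, lambda im: im)
print(loss)  # 0.0 for identity generators
```

In training, this loss would be minimized jointly with the disentangled components (clothes warping, skin synthesis, image composition) rather than on the raw image alone.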
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ge, C | - |
dc.contributor.author | Song, Y | - |
dc.contributor.author | Ge, Y | - |
dc.contributor.author | Yang, H | - |
dc.contributor.author | Liu, W | - |
dc.contributor.author | Luo, P | - |
dc.date.accessioned | 2021-07-27T08:10:59Z | - |
dc.date.available | 2021-07-27T08:10:59Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Conference, 19-25 June 2021, p. 16928-16937 | - |
dc.identifier.uri | http://hdl.handle.net/10722/301432 | - |
dc.description | Paper Session Twelve: Paper ID 6703 | - |
dc.description.abstract | Image virtual try-on replaces the clothes on a person image with a desired in-shop clothes image. It is challenging because the person and the in-shop clothes are unpaired. Existing methods formulate virtual try-on as either in-painting or cycle consistency. Both of these formulations encourage the generation networks to reconstruct the input image in a self-supervised manner. However, existing methods do not differentiate clothing and non-clothing regions. A straightforward generation impedes the virtual try-on quality because of the heavily coupled image contents. In this paper, we propose a Disentangled Cycle-consistency Try-On Network (DCTON). DCTON is able to produce highly-realistic try-on images by disentangling important components of virtual try-on, including clothes warping, skin synthesis, and image composition. Moreover, DCTON can be naturally trained in a self-supervised manner following cycle-consistency learning. Extensive experiments on challenging benchmarks show that DCTON outperforms state-of-the-art approaches favorably. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Computer Vision and Pattern Recognition (CVPR) Proceedings | - |
dc.title | Disentangled Cycle Consistency for Highly-Realistic Virtual Try-On | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.identifier.hkuros | 323758 | - |
dc.identifier.spage | 16928 | - |
dc.identifier.epage | 16937 | - |