Conference Paper: Progressive semantic-aware style transformation for blind face restoration
Title | Progressive semantic-aware style transformation for blind face restoration |
---|---|
Authors | Chen, C; Li, X; Yang, L; Lin, X; Zhang, L; Wong, KKY |
Issue Date | 2021 |
Publisher | IEEE. |
Citation | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021, Virtual Conference, 19-25 June 2021, p. 11896-11905 |
Abstract | Face restoration is important in face image processing and has been widely studied in recent years. However, previous works often fail to generate plausible high-quality (HQ) results for real-world low-quality (LQ) face images. In this paper, we propose a new progressive semantic-aware style transformation framework, named PSFR-GAN, for face restoration. Specifically, instead of using an encoder-decoder framework as previous methods do, we formulate the restoration of LQ face images as a multi-scale progressive restoration procedure through semantic-aware style transformation. Given an LQ face image and its corresponding parsing map, we first generate a multi-scale pyramid of the inputs, and then progressively modulate features at different scales from coarse to fine in a semantic-aware style-transfer manner. Compared with previous networks, the proposed PSFR-GAN makes full use of the semantic (parsing-map) and pixel (LQ-image) space information from input pairs at different scales. In addition, we introduce a semantic-aware style loss which calculates the feature style loss for each semantic region individually to improve the details of face textures. Finally, we pretrain a face parsing network which can generate decent parsing maps from real-world LQ face images. Experimental results show that our model trained with synthetic data can not only produce more realistic high-resolution results for synthetic LQ inputs but also generalize better to natural LQ face images than state-of-the-art methods. |
Description | Paper Session Nine: Paper ID 2234 |
Persistent Identifier | http://hdl.handle.net/10722/301146 |
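The abstract's two key ingredients — coarse-to-fine feature modulation driven by a multi-scale input pyramid, and a style loss computed per semantic region — can be sketched in a few lines. The NumPy sketch below is illustrative only: the function names, weight shapes, and average-pooling scheme are assumptions for exposition, not the authors' released PSFR-GAN implementation.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Form a coarse-to-fine pyramid by repeated 2x average pooling (assumed scheme)."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w, c = pyr[-1].shape
        cropped = pyr[-1][: h // 2 * 2, : w // 2 * 2]
        pyr.append(cropped.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3)))
    return pyr[::-1]  # coarsest scale first

def sft_modulate(feat, parsing, w_gamma, w_beta):
    """Semantic-aware modulation: per-pixel scale/shift predicted from the parsing map.

    feat:    (H, W, C) feature map
    parsing: (H, W, K) one-hot parsing map with K semantic classes
    w_gamma, w_beta: (K, C) hypothetical 1x1-conv weights mapping classes to styles
    """
    gamma = parsing @ w_gamma          # (H, W, C) spatial scale
    beta = parsing @ w_beta            # (H, W, C) spatial shift
    return feat * (1.0 + gamma) + beta

def region_style_loss(feat_a, feat_b, mask):
    """Gram-matrix style loss restricted to one semantic region (mask in {0, 1})."""
    def gram(f, m):
        fm = (f * m[..., None]).reshape(-1, f.shape[-1])
        return fm.T @ fm / max(m.sum(), 1.0)
    diff = gram(feat_a, mask) - gram(feat_b, mask)
    return float((diff ** 2).mean())
```

In the paper's framework the modulation weights are learned and applied progressively at each pyramid scale, and the per-region style losses are summed over all semantic classes of the parsing map; the sketch only shows the shape of each operation.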
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, C | - |
dc.contributor.author | Li, X | - |
dc.contributor.author | Yang, L | - |
dc.contributor.author | Lin, X | - |
dc.contributor.author | Zhang, L | - |
dc.contributor.author | Wong, KKY | - |
dc.date.accessioned | 2021-07-27T08:06:48Z | - |
dc.date.available | 2021-07-27T08:06:48Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021, Virtual Conference, 19-25 June 2021, p. 11896-11905 | - |
dc.identifier.uri | http://hdl.handle.net/10722/301146 | - |
dc.description | Paper Session Nine: Paper ID 2234 | - |
dc.description.abstract | Face restoration is important in face image processing and has been widely studied in recent years. However, previous works often fail to generate plausible high-quality (HQ) results for real-world low-quality (LQ) face images. In this paper, we propose a new progressive semantic-aware style transformation framework, named PSFR-GAN, for face restoration. Specifically, instead of using an encoder-decoder framework as previous methods do, we formulate the restoration of LQ face images as a multi-scale progressive restoration procedure through semantic-aware style transformation. Given an LQ face image and its corresponding parsing map, we first generate a multi-scale pyramid of the inputs, and then progressively modulate features at different scales from coarse to fine in a semantic-aware style-transfer manner. Compared with previous networks, the proposed PSFR-GAN makes full use of the semantic (parsing-map) and pixel (LQ-image) space information from input pairs at different scales. In addition, we introduce a semantic-aware style loss which calculates the feature style loss for each semantic region individually to improve the details of face textures. Finally, we pretrain a face parsing network which can generate decent parsing maps from real-world LQ face images. Experimental results show that our model trained with synthetic data can not only produce more realistic high-resolution results for synthetic LQ inputs but also generalize better to natural LQ face images than state-of-the-art methods. | - |
dc.language | eng | - |
dc.publisher | IEEE. | - |
dc.relation.ispartof | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | - |
dc.rights | IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Copyright © IEEE. | - |
dc.title | Progressive semantic-aware style transformation for blind face restoration | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Wong, KKY: kykwong@cs.hku.hk | - |
dc.identifier.authority | Wong, KKY=rp01393 | - |
dc.identifier.hkuros | 323469 | - |
dc.identifier.spage | 11896 | - |
dc.identifier.epage | 11905 | - |