Article: Learning Spatial Attention for Face Super-Resolution

Title: Learning Spatial Attention for Face Super-Resolution
Authors: Chen, C; Gong, D; Wang, H; Li, Z; Wong, KYK
Keywords: Face super-resolution; spatial attention; generative adversarial networks
Issue Date: 2021
Publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=83
Citation: IEEE Transactions on Image Processing, 2021, v. 30, p. 1219-1231
Abstract: General image super-resolution techniques have difficulty recovering detailed face structures when applied to low-resolution face images. Recent deep-learning-based methods tailored for face images have achieved improved performance by being jointly trained with additional tasks such as face parsing and landmark prediction. However, multi-task learning requires extra manually labeled data. Moreover, most existing works can only generate relatively low-resolution face images (e.g., 128 × 128), which limits their applications. In this paper, we introduce a novel SPatial Attention Residual Network (SPARNet), built on our newly proposed Face Attention Units (FAUs), for face super-resolution. Specifically, we introduce a spatial attention mechanism to the vanilla residual blocks. This enables the convolutional layers to adaptively bootstrap features related to the key face structures and pay less attention to less feature-rich regions. This makes training more effective and efficient, as the key face structures account for only a very small portion of the face image. Visualization of the attention maps shows that our spatial attention network captures the key face structures well even for very low-resolution faces (e.g., 16×16). Quantitative comparisons on various metrics (including PSNR, SSIM, identity similarity, and landmark detection) demonstrate the superiority of our method over the current state of the art. We further extend SPARNet with multi-scale discriminators, naming the result SPARNetHD, to produce high-resolution results (i.e., 512×512). We show that SPARNetHD trained with synthetic data not only produces high-quality, high-resolution outputs for synthetically degraded face images, but also generalizes well to real-world low-quality face images. Code is available at https://github.com/chaofengc/Face-SPARNet.
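
The key mechanism the abstract describes, gating a vanilla residual block with a learned spatial attention map, can be sketched in a few lines of PyTorch. This is a minimal illustration under assumptions, not the paper's implementation: the class name, channel widths, and the plain convolutional attention branch are invented here for brevity, and the actual FAU in the linked repository is more elaborate.

```python
import torch
import torch.nn as nn


class SpatialAttentionResBlock(nn.Module):
    """Illustrative sketch of a spatial-attention residual block.

    The feature branch is a vanilla residual block; the attention
    branch predicts a single-channel map in [0, 1] that reweights
    the residual features before the skip connection, so locations
    around key face structures can be emphasized. Sizes here are
    assumptions, not the paper's exact FAU.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Feature branch: two 3x3 convolutions, as in a plain residual block.
        self.features = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Attention branch: predicts one attention value per spatial location.
        self.attention = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels // 2, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.features(x)
        attn = self.attention(feat)  # (N, 1, H, W), broadcast over channels
        return x + feat * attn       # attention-gated residual connection


if __name__ == "__main__":
    block = SpatialAttentionResBlock(64)
    out = block(torch.randn(1, 64, 16, 16))  # e.g., features of a 16x16 face
    print(out.shape)  # torch.Size([1, 64, 16, 16])
```

Because the sigmoid map carries one value per spatial location, such a block can upweight regions around eyes, nose, and mouth while damping flat skin and background, which matches the behavior the abstract attributes to the visualized attention maps.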
Persistent Identifier: http://hdl.handle.net/10722/301191
ISSN: 1057-7149
2021 Impact Factor: 11.041
2020 SCImago Journal Rankings: 1.778
ISI Accession Number ID: WOS:000603026100002

Dublin Core Metadata
dc.contributor.author: Chen, C
dc.contributor.author: Gong, D
dc.contributor.author: Wang, H
dc.contributor.author: Li, Z
dc.contributor.author: Wong, KYK
dc.date.accessioned: 2021-07-27T08:07:28Z
dc.date.available: 2021-07-27T08:07:28Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Image Processing, 2021, v. 30, p. 1219-1231
dc.identifier.issn: 1057-7149
dc.identifier.uri: http://hdl.handle.net/10722/301191
dc.description.abstract: (same as the Abstract above)
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=83
dc.relation.ispartof: IEEE Transactions on Image Processing
dc.rights: IEEE Transactions on Image Processing. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Face super-resolution
dc.subject: spatial attention
dc.subject: generative adversarial networks
dc.title: Learning Spatial Attention for Face Super-Resolution
dc.type: Article
dc.identifier.email: Wong, KYK: kykwong@cs.hku.hk
dc.identifier.authority: Wong, KYK=rp01393
dc.description.nature: postprint
dc.identifier.doi: 10.1109/TIP.2020.3043093
dc.identifier.pmid: 33315560
dc.identifier.scopus: eid_2-s2.0-85098119837
dc.identifier.hkuros: 323623
dc.identifier.hkuros: 323470
dc.identifier.volume: 30
dc.identifier.spage: 1219
dc.identifier.epage: 1231
dc.identifier.isi: WOS:000603026100002
dc.publisher.place: United States
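
The abstract above also notes that SPARNetHD extends SPARNet with multi-scale discriminators to reach 512×512 outputs. The sketch below illustrates the general multi-scale discriminator pattern (several identical discriminators, each judging a progressively downsampled copy of the image); the discriminator depth, channel widths, and number of scales here are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn


class PatchDiscriminator(nn.Module):
    """A small patch-based discriminator (illustrative sizes)."""

    def __init__(self, in_ch: int = 3, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MultiScaleDiscriminator(nn.Module):
    """Applies the same discriminator architecture at several scales.

    Each copy sees the input downsampled 2x relative to the previous
    one, so coarse scales judge global face structure while fine
    scales judge local texture.
    """

    def __init__(self, num_scales: int = 3):
        super().__init__()
        self.discs = nn.ModuleList(PatchDiscriminator() for _ in range(num_scales))
        self.down = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        outputs = []
        for disc in self.discs:
            outputs.append(disc(x))
            x = self.down(x)
        return outputs


if __name__ == "__main__":
    d = MultiScaleDiscriminator()
    logits = d(torch.randn(1, 3, 512, 512))  # e.g., a 512x512 SPARNetHD output
    print([t.shape for t in logits])
```

In adversarial training, each scale contributes its own real/fake loss, so the generator is pushed to be consistent from coarse facial layout down to fine texture.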
