Article: Context-Aware Semantic Inpainting

Title: Context-Aware Semantic Inpainting
Authors: Li, H; Li, G; Lin, L; Yu, H; Yu, Y
Keywords: Convolutional neural network; generative adversarial network (GAN); image inpainting
Issue Date: 2018
Publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036
Citation: IEEE Transactions on Cybernetics, 2018, v. 49, n. 12, p. 4398-4411
Abstract: In recent years, image inpainting has witnessed rapid progress thanks to generative adversarial networks (GANs), which are able to synthesize realistic content. However, most existing GAN-based methods for semantic inpainting use an auto-encoder architecture with a fully connected layer, which cannot accurately maintain spatial information. In addition, the discriminator in existing GANs struggles to comprehend high-level semantics within the image context and to yield semantically consistent content. Existing evaluation criteria are biased toward blurry results and cannot adequately characterize edge preservation and visual authenticity in the inpainting results. In this paper, we propose an improved GAN to overcome the aforementioned limitations. Our proposed GAN-based framework consists of a fully convolutional design for the generator, which helps to better preserve spatial structures, and a joint loss function with a revised perceptual loss to capture high-level semantics in the context. Furthermore, we introduce two novel measures to better assess the quality of image inpainting results. The experimental results demonstrate that our method outperforms the state of the art under a wide range of criteria.
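
To make the described design concrete, the sketch below illustrates, in PyTorch, a fully convolutional encoder-decoder generator (no fully connected bottleneck, so feature maps keep their spatial layout end to end) and a joint loss combining pixel reconstruction, adversarial, and perceptual terms. This is a minimal, hedged illustration, not the authors' released code: the layer configuration, the loss weights lambda_adv and lambda_perc, and the use of VGG-16 relu3_3 features for the perceptual term are assumptions standing in for the paper's revised perceptual loss.

    # Illustrative sketch only -- not the paper's released implementation.
    # Assumptions: layer sizes, loss weights, and the VGG-16 relu3_3
    # perceptual feature layer are placeholders for the paper's design.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class FullyConvGenerator(nn.Module):
        """Encoder-decoder with no fully connected bottleneck; spatial
        information is carried through feature maps end to end."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(True),
                nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(True),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(True),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, masked_image):
            return self.decoder(self.encoder(masked_image))

    class JointLoss(nn.Module):
        """Joint objective: L1 reconstruction + adversarial + perceptual.
        The perceptual term compares frozen VGG-16 features of the
        completed image against those of the ground truth."""
        def __init__(self, lambda_adv=0.001, lambda_perc=0.05):
            super().__init__()
            vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
            self.vgg_features = vgg.features[:16].eval()  # up to relu3_3
            for p in self.vgg_features.parameters():
                p.requires_grad = False                   # frozen feature extractor
            self.l1 = nn.L1Loss()
            self.bce = nn.BCEWithLogitsLoss()
            self.lambda_adv = lambda_adv
            self.lambda_perc = lambda_perc

        def forward(self, completed, ground_truth, d_logits_on_completed):
            rec = self.l1(completed, ground_truth)        # pixel reconstruction
            adv = self.bce(d_logits_on_completed,         # fool the discriminator
                           torch.ones_like(d_logits_on_completed))
            # High-level semantics via deep features
            # (ImageNet input normalization omitted for brevity).
            perc = self.l1(self.vgg_features(completed),
                           self.vgg_features(ground_truth))
            return rec + self.lambda_adv * adv + self.lambda_perc * perc

In a training loop, the discriminator would be updated separately on real and completed images; the generator is then updated against JointLoss, using the discriminator's logits on the completed image for the adversarial term.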
Persistent Identifier: http://hdl.handle.net/10722/278768
ISSN: 2168-2267
2021 Impact Factor: 19.118
2020 SCImago Journal Rankings: 3.109
ISI Accession Number ID: WOS:000485687200029

 

DC Fields

dc.contributor.author: Li, H
dc.contributor.author: Li, G
dc.contributor.author: Lin, L
dc.contributor.author: Yu, H
dc.contributor.author: Yu, Y
dc.date.accessioned: 2019-10-21T02:13:43Z
dc.date.available: 2019-10-21T02:13:43Z
dc.date.issued: 2018
dc.identifier.citation: IEEE Transactions on Cybernetics, 2018, v. 49, n. 12, p. 4398-4411
dc.identifier.issn: 2168-2267
dc.identifier.uri: http://hdl.handle.net/10722/278768
dc.description.abstract: (see Abstract above)
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036
dc.relation.ispartof: IEEE Transactions on Cybernetics
dc.rights: IEEE Transactions on Cybernetics. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Convolutional neural network
dc.subject: generative adversarial network (GAN)
dc.subject: image inpainting
dc.title: Context-Aware Semantic Inpainting
dc.type: Article
dc.identifier.email: Yu, Y: yzyu@cs.hku.hk
dc.identifier.authority: Yu, Y=rp01415
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TCYB.2018.2865036
dc.identifier.pmid: 30334809
dc.identifier.scopus: eid_2-s2.0-85055021587
dc.identifier.hkuros: 307666
dc.identifier.volume: 49
dc.identifier.issue: 12
dc.identifier.spage: 4398
dc.identifier.epage: 4411
dc.identifier.isi: WOS:000485687200029
dc.publisher.place: United States
dc.identifier.issnl: 2168-2267
