Article: Context-Aware Semantic Inpainting
Title | Context-Aware Semantic Inpainting |
---|---|
Authors | Li, H; Li, G; Lin, L; Yu, H; Yu, Y |
Keywords | Convolutional neural network; generative adversarial network (GAN); image inpainting |
Issue Date | 2018 |
Publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036 |
Citation | IEEE Transactions on Cybernetics, 2018, v. 49 n. 12, p. 4398-4411 |
Abstract | In recent times, image inpainting has witnessed rapid progress due to generative adversarial networks (GANs), which are able to synthesize realistic content. However, most existing GAN-based methods for semantic inpainting apply an auto-encoder architecture with a fully connected layer, which cannot accurately maintain spatial information. In addition, the discriminator in existing GANs struggles to comprehend high-level semantics within the image context and to yield semantically consistent content. Existing evaluation criteria are biased toward blurry results and cannot adequately characterize edge preservation and visual authenticity in the inpainting results. In this paper, we propose an improved GAN to overcome the aforementioned limitations. Our proposed GAN-based framework consists of a fully convolutional design for the generator, which helps to better preserve spatial structures, and a joint loss function with a revised perceptual loss to capture high-level semantics in the context. Furthermore, we introduce two novel measures to better assess the quality of image inpainting results. Experimental results demonstrate that our method outperforms the state of the art under a wide range of criteria. |
Persistent Identifier | http://hdl.handle.net/10722/278768 |
ISSN | 2168-2267 (2023 Impact Factor: 9.4; 2023 SCImago Journal Rankings: 5.641) |
ISI Accession Number ID | WOS:000485687200029 |
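The abstract describes a joint loss that combines pixel-level reconstruction with adversarial and revised perceptual terms. The following is a minimal sketch of how such a joint loss could be assembled; the function name, loss weights, and inputs are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def joint_inpainting_loss(pred, target, feat_pred, feat_target, d_score,
                          w_rec=1.0, w_adv=0.01, w_perc=0.1):
    """Hypothetical joint loss for GAN-based inpainting (illustrative only).

    pred / target     : inpainted and ground-truth image regions
    feat_pred / feat_target : feature maps from a fixed network (perceptual term)
    d_score           : discriminator output for the inpainted image, in (0, 1]
    """
    # Pixel-wise L2 reconstruction loss over the inpainted region.
    l_rec = np.mean((pred - target) ** 2)
    # Non-saturating adversarial loss: push the discriminator score toward 1.
    l_adv = -np.mean(np.log(d_score + 1e-8))
    # Perceptual loss: L2 distance between high-level feature maps,
    # standing in for the "revised perceptual loss" the abstract mentions.
    l_perc = np.mean((feat_pred - feat_target) ** 2)
    return w_rec * l_rec + w_adv * l_adv + w_perc * l_perc
```

In this kind of formulation, the reconstruction term anchors the output to the known context, while the adversarial and perceptual terms push the generator toward sharp, semantically plausible content rather than the blurry averages that pure L2 training tends to produce.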
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, H | - |
dc.contributor.author | Li, G | - |
dc.contributor.author | Lin, L | - |
dc.contributor.author | Yu, H | - |
dc.contributor.author | Yu, Y | - |
dc.date.accessioned | 2019-10-21T02:13:43Z | - |
dc.date.available | 2019-10-21T02:13:43Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | IEEE Transactions on Cybernetics, 2018, v. 49 n. 12, p. 4398-4411 | - |
dc.identifier.issn | 2168-2267 | - |
dc.identifier.uri | http://hdl.handle.net/10722/278768 | - |
dc.description.abstract | In recent times, image inpainting has witnessed rapid progress due to generative adversarial networks (GANs), which are able to synthesize realistic content. However, most existing GAN-based methods for semantic inpainting apply an auto-encoder architecture with a fully connected layer, which cannot accurately maintain spatial information. In addition, the discriminator in existing GANs struggles to comprehend high-level semantics within the image context and to yield semantically consistent content. Existing evaluation criteria are biased toward blurry results and cannot adequately characterize edge preservation and visual authenticity in the inpainting results. In this paper, we propose an improved GAN to overcome the aforementioned limitations. Our proposed GAN-based framework consists of a fully convolutional design for the generator, which helps to better preserve spatial structures, and a joint loss function with a revised perceptual loss to capture high-level semantics in the context. Furthermore, we introduce two novel measures to better assess the quality of image inpainting results. Experimental results demonstrate that our method outperforms the state of the art under a wide range of criteria. | -
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036 | - |
dc.relation.ispartof | IEEE Transactions on Cybernetics | - |
dc.rights | IEEE Transactions on Cybernetics. Copyright © Institute of Electrical and Electronics Engineers. | - |
dc.rights | ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | Convolutional neural network | - |
dc.subject | generative adversarial network (GAN) | - |
dc.subject | image inpainting | - |
dc.title | Context-Aware Semantic Inpainting | - |
dc.type | Article | - |
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | - |
dc.identifier.authority | Yu, Y=rp01415 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TCYB.2018.2865036 | - |
dc.identifier.pmid | 30334809 | - |
dc.identifier.scopus | eid_2-s2.0-85055021587 | - |
dc.identifier.hkuros | 307666 | - |
dc.identifier.volume | 49 | - |
dc.identifier.issue | 12 | - |
dc.identifier.spage | 4398 | - |
dc.identifier.epage | 4411 | - |
dc.identifier.isi | WOS:000485687200029 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 2168-2267 | - |