Conference Paper: Focal Frequency Loss for Image Reconstruction and Synthesis

Title: Focal Frequency Loss for Image Reconstruction and Synthesis
Authors: Jiang, Liming; Dai, Bo; Wu, Wayne; Loy, Chen Change
Issue Date: 2021
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2021, p. 13899-13909
Abstract: Image reconstruction and synthesis have witnessed remarkable progress thanks to the development of generative models. Nonetheless, gaps could still exist between the real and generated images, especially in the frequency domain. In this study, we show that narrowing gaps in the frequency domain can ameliorate image reconstruction and synthesis quality further. We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize by down-weighting the easy ones. This objective function is complementary to existing spatial losses, offering great impedance against the loss of important frequency information due to the inherent bias of neural networks. We demonstrate the versatility and effectiveness of focal frequency loss to improve popular models, such as VAE, pix2pix, and SPADE, in both perceptual quality and quantitative performance. We further show its potential on StyleGAN2.
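The abstract describes the core idea of the loss: compare images in the frequency domain and re-weight each frequency component so that poorly synthesized (hard) frequencies contribute more than well-matched (easy) ones. A minimal NumPy sketch of that idea follows; the function name, the `alpha` sharpening exponent, and the max-normalization of the weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def focal_frequency_loss(real, fake, alpha=1.0):
    """Sketch of a focal frequency loss between two grayscale images.

    Both inputs are 2-D float arrays. The loss compares their 2-D DFT
    spectra and re-weights each frequency by how badly it is matched,
    so hard frequencies dominate the objective.
    """
    # Frequency representations of both images.
    f_real = np.fft.fft2(real, norm="ortho")
    f_fake = np.fft.fft2(fake, norm="ortho")

    # Per-frequency squared distance between the two spectra.
    dist = np.abs(f_real - f_fake) ** 2

    # Focal weight: larger error -> larger weight; alpha sharpens the focus.
    weight = dist ** alpha
    weight = weight / (weight.max() + 1e-12)  # normalize weights to [0, 1]

    # Easy (small-error) frequencies are down-weighted toward zero.
    return float(np.mean(weight * dist))
```

As a spectral penalty, this is meant to be added to an existing spatial objective (e.g. an L1 or adversarial loss), which matches the abstract's claim that the loss is complementary to spatial losses rather than a replacement for them.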
Persistent Identifier: http://hdl.handle.net/10722/352255
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263
ISI Accession Number ID: WOS:000798743204010

 

DC Field / Value
dc.contributor.author: Jiang, Liming
dc.contributor.author: Dai, Bo
dc.contributor.author: Wu, Wayne
dc.contributor.author: Loy, Chen Change
dc.date.accessioned: 2024-12-16T03:57:37Z
dc.date.available: 2024-12-16T03:57:37Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings of the IEEE International Conference on Computer Vision, 2021, p. 13899-13909
dc.identifier.issn: 1550-5499
dc.identifier.uri: http://hdl.handle.net/10722/352255
dc.description.abstract: Image reconstruction and synthesis have witnessed remarkable progress thanks to the development of generative models. Nonetheless, gaps could still exist between the real and generated images, especially in the frequency domain. In this study, we show that narrowing gaps in the frequency domain can ameliorate image reconstruction and synthesis quality further. We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize by down-weighting the easy ones. This objective function is complementary to existing spatial losses, offering great impedance against the loss of important frequency information due to the inherent bias of neural networks. We demonstrate the versatility and effectiveness of focal frequency loss to improve popular models, such as VAE, pix2pix, and SPADE, in both perceptual quality and quantitative performance. We further show its potential on StyleGAN2.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE International Conference on Computer Vision
dc.title: Focal Frequency Loss for Image Reconstruction and Synthesis
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICCV48922.2021.01366
dc.identifier.scopus: eid_2-s2.0-85119648162
dc.identifier.spage: 13899
dc.identifier.epage: 13909
dc.identifier.isi: WOS:000798743204010
