Conference Paper: Learning Modulated Transformation in GANs

Title: Learning Modulated Transformation in GANs
Authors: Yang, Ceyuan; Zhang, Qihang; Xu, Yinghao; Zhu, Jiapeng; Shen, Yujun; Dai, Bo
Issue Date: 2023
Citation: Advances in Neural Information Processing Systems, 2023, v. 36
Abstract: The success of style-based generators largely benefits from style modulation, which helps account for cross-instance variation within the data. However, the instance-wise stochasticity is typically introduced via regular convolution, where kernels interact with features at fixed locations, limiting its capacity for modeling geometric variation. To alleviate this problem, we equip the generator in generative adversarial networks (GANs) with a plug-and-play module, termed the modulated transformation module (MTM). This module predicts spatial offsets under the control of latent codes, based on which the convolution operation can be applied at variable locations for different instances, and hence offers the model an additional degree of freedom to handle geometry deformation. Extensive experiments suggest that our approach generalizes faithfully to various generative tasks, including image generation, 3D-aware image synthesis, and video generation, and is compatible with state-of-the-art frameworks without any hyper-parameter tuning. Notably, for human generation on the challenging TaiChi dataset, we improve the FID of StyleGAN3 from 21.36 to 13.60, demonstrating the efficacy of learning modulated geometry transformation. Code and models are available at https://github.com/limbo0000/mtm.
Persistent Identifier: http://hdl.handle.net/10722/352429
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
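
The abstract describes MTM as predicting spatial offsets from the latent code so that convolution can be applied at instance-specific locations. The following is a minimal PyTorch sketch of that idea, assuming a grid_sample-based resampling formulation; the class name ModulatedTransformation, the offset head, and all hyper-parameters here are illustrative assumptions rather than the official implementation at https://github.com/limbo0000/mtm.

# Illustrative sketch only: a latent-conditioned offset predictor followed by
# feature resampling and a regular convolution. Names and shapes are assumptions,
# not taken from the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedTransformation(nn.Module):
    """Predict per-pixel (dx, dy) offsets from the latent code and resample the
    feature map at the shifted locations before a standard convolution."""

    def __init__(self, channels: int, latent_dim: int):
        super().__init__()
        # Affine mapping from the latent code to per-channel modulation scales.
        self.affine = nn.Linear(latent_dim, channels)
        # Lightweight head that turns modulated features into 2-D offsets (in pixels).
        self.to_offset = nn.Conv2d(channels, 2, kernel_size=3, padding=1)
        # Zero-init so the module starts as an identity transformation.
        nn.init.zeros_(self.to_offset.weight)
        nn.init.zeros_(self.to_offset.bias)
        # Regular convolution applied to the transformed features.
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        b, c, h, wd = x.shape
        # Modulate features per instance with the latent code, then predict offsets.
        scale = self.affine(w).view(b, c, 1, 1)
        offset = self.to_offset(x * scale)        # (B, 2, H, W)
        offset = offset.permute(0, 2, 3, 1)       # (B, H, W, 2), ordered (dx, dy)

        # Build a normalized sampling grid in [-1, 1] and add the predicted offsets.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, wd, device=x.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).unsqueeze(0)              # (1, H, W, 2)
        norm = offset / torch.tensor([wd / 2.0, h / 2.0], device=x.device)
        grid = base + norm

        # Sample features at the shifted, instance-specific locations, then convolve.
        warped = F.grid_sample(x, grid, align_corners=True, padding_mode="border")
        return self.conv(warped)

A usage sketch: mtm = ModulatedTransformation(channels=256, latent_dim=512) followed by y = mtm(features, w) inside a synthesis block. Zero-initializing the offset head keeps the module close to an identity mapping at the start of training, a common design choice for offset-predicting layers.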


DC Field: Value
dc.contributor.author: Yang, Ceyuan
dc.contributor.author: Zhang, Qihang
dc.contributor.author: Xu, Yinghao
dc.contributor.author: Zhu, Jiapeng
dc.contributor.author: Shen, Yujun
dc.contributor.author: Dai, Bo
dc.date.accessioned: 2024-12-16T03:58:53Z
dc.date.available: 2024-12-16T03:58:53Z
dc.date.issued: 2023
dc.identifier.citation: Advances in Neural Information Processing Systems, 2023, v. 36
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/352429
dc.description.abstract: The success of style-based generators largely benefits from style modulation, which helps account for cross-instance variation within the data. However, the instance-wise stochasticity is typically introduced via regular convolution, where kernels interact with features at fixed locations, limiting its capacity for modeling geometric variation. To alleviate this problem, we equip the generator in generative adversarial networks (GANs) with a plug-and-play module, termed the modulated transformation module (MTM). This module predicts spatial offsets under the control of latent codes, based on which the convolution operation can be applied at variable locations for different instances, and hence offers the model an additional degree of freedom to handle geometry deformation. Extensive experiments suggest that our approach generalizes faithfully to various generative tasks, including image generation, 3D-aware image synthesis, and video generation, and is compatible with state-of-the-art frameworks without any hyper-parameter tuning. Notably, for human generation on the challenging TaiChi dataset, we improve the FID of StyleGAN3 from 21.36 to 13.60, demonstrating the efficacy of learning modulated geometry transformation. Code and models are available at https://github.com/limbo0000/mtm.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Learning Modulated Transformation in GANs
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85191164604
dc.identifier.volume: 36
