Conference Paper: Generalization and equilibrium in generative adversarial nets (GANs)

Title: Generalization and equilibrium in generative adversarial nets (GANs)
Authors: Arora, Sanjeev; Ge, Rong; Liang, Yingyu; Ma, Tengyu; Zhang, Yi
Issue Date: 2017
Citation: 34th International Conference on Machine Learning, ICML 2017, 2017, v. 1, p. 322-349
Abstract: Generalization is defined for the training of generative adversarial networks (GANs), and it is shown that generalization is not guaranteed for popular distances between distributions such as Jensen-Shannon or Wasserstein. In particular, training may appear to be successful and yet the trained distribution may be arbitrarily far from the target distribution in standard metrics. It is shown that generalization does occur for a much weaker metric called the neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator/generator game for a natural training objective (Wasserstein) when generator capacity and training set sizes are moderate. Finally, the above theoretical ideas suggest a new training protocol, mix+GAN, which can be combined with any existing method and is empirically found to improve some existing GAN protocols out of the box.
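In the Wasserstein-style case (measuring function φ(t) = t), the neural net distance mentioned in the abstract reduces to an integral probability metric over the discriminator class F; a minimal sketch of that special case follows, with the paper's general definition allowing other measuring functions:

$$ d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{D \in \mathcal{F}} \Bigl|\, \mathbb{E}_{x \sim \mu}[D(x)] \;-\; \mathbb{E}_{x \sim \nu}[D(x)] \,\Bigr| $$

Because a bounded-capacity discriminator class F is far smaller than the set of all 1-Lipschitz functions, an empirical estimate of d_F concentrates with a moderate number of samples, which is the sense in which generalization holds for this metric but not for Jensen-Shannon or Wasserstein distance.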
Persistent Identifier: http://hdl.handle.net/10722/341227
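The mix+GAN protocol mentioned in the abstract maintains a small mixture of generators and discriminators with trainable mixture weights, optimizing a weighted sum of the pairwise payoffs plus an entropy regularizer on the weights. Below is a hypothetical minimal sketch in PyTorch on toy 2-D data; the architectures, learning rates, and regularizer weight are illustrative assumptions, not the paper's settings.

```python
# Hypothetical mix+GAN-style sketch: k generators and k discriminators with
# trainable mixture weights, trained on a toy 2-D Gaussian target.
import torch
import torch.nn as nn

k, z_dim, x_dim, batch = 3, 4, 2, 64

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_out))

gens = nn.ModuleList(mlp(z_dim, x_dim) for _ in range(k))
discs = nn.ModuleList(mlp(x_dim, 1) for _ in range(k))
g_logits = nn.Parameter(torch.zeros(k))  # mixture weights over generators
d_logits = nn.Parameter(torch.zeros(k))  # mixture weights over discriminators

opt_g = torch.optim.Adam(list(gens.parameters()) + [g_logits], lr=1e-3)
opt_d = torch.optim.Adam(list(discs.parameters()) + [d_logits], lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(batch, x_dim) + 2.0  # toy target distribution
    z = torch.randn(batch, z_dim)

    # Discriminator step: the payoff is a weighted sum over all
    # generator/discriminator pairs, weighted by the mixture probabilities.
    w_g = torch.softmax(g_logits, dim=0)
    w_d = torch.softmax(d_logits, dim=0)
    d_loss = 0.0
    for j, D in enumerate(discs):
        for i, G in enumerate(gens):
            fake = G(z).detach()
            d_loss = d_loss + w_d[j] * w_g[i].detach() * (
                bce(D(real), torch.ones(batch, 1))
                + bce(D(fake), torch.zeros(batch, 1)))
    # Negative-entropy penalty keeps the mixture from collapsing onto one
    # component (the 0.1 coefficient is an illustrative assumption).
    d_loss = d_loss + 0.1 * (w_d * torch.log(w_d + 1e-8)).sum()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: non-saturating loss against the discriminator mixture.
    w_g = torch.softmax(g_logits, dim=0)
    w_d = torch.softmax(d_logits, dim=0).detach()
    g_loss = 0.0
    for i, G in enumerate(gens):
        fake = G(torch.randn(batch, z_dim))
        for j, D in enumerate(discs):
            g_loss = g_loss + w_g[i] * w_d[j] * bce(D(fake), torch.ones(batch, 1))
    g_loss = g_loss + 0.1 * (w_g * torch.log(w_g + 1e-8)).sum()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The mixture is the mechanism behind the equilibrium result: a mixture of moderately sized generators can approximate an equilibrium of the game even when no single pure generator can, and the trainable weights let the protocol wrap around any existing GAN objective.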


DC Field | Value | Language
dc.contributor.author | Arora, Sanjeev | -
dc.contributor.author | Ge, Rong | -
dc.contributor.author | Liang, Yingyu | -
dc.contributor.author | Ma, Tengyu | -
dc.contributor.author | Zhang, Yi | -
dc.date.accessioned | 2024-03-13T08:41:10Z | -
dc.date.available | 2024-03-13T08:41:10Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | 34th International Conference on Machine Learning, ICML 2017, 2017, v. 1, p. 322-349 | -
dc.identifier.uri | http://hdl.handle.net/10722/341227 | -
dc.description.abstract | Generalization is defined for the training of generative adversarial networks (GANs), and it is shown that generalization is not guaranteed for popular distances between distributions such as Jensen-Shannon or Wasserstein. In particular, training may appear to be successful and yet the trained distribution may be arbitrarily far from the target distribution in standard metrics. It is shown that generalization does occur for a much weaker metric called the neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator/generator game for a natural training objective (Wasserstein) when generator capacity and training set sizes are moderate. Finally, the above theoretical ideas suggest a new training protocol, mix+GAN, which can be combined with any existing method and is empirically found to improve some existing GAN protocols out of the box. | -
dc.language | eng | -
dc.relation.ispartof | 34th International Conference on Machine Learning, ICML 2017 | -
dc.title | Generalization and equilibrium in generative adversarial nets (GANs) | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85048679392 | -
dc.identifier.volume | 1 | -
dc.identifier.spage | 322 | -
dc.identifier.epage | 349 | -
