Conference Paper: IntersectGAN: Learning domain intersection for generating images with multiple attributes

Title: IntersectGAN: Learning domain intersection for generating images with multiple attributes
Authors: Yao, Zehui; Zhang, Boyan; Wang, Zhiyong; Ouyang, Wanli; Xu, Dong; Feng, Dagan
Keywords: Deep learning; Generative adversarial networks; Image generation
Issue Date: 2019
Citation: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia, 2019, p. 1842-1850
Abstract: Generative adversarial networks (GANs) have demonstrated great success in generating diverse visual content. However, images generated by existing GANs typically exhibit attributes (e.g., a smiling expression) learned from a single image domain. As a result, generating images with multiple attributes requires many real samples that possess all of those attributes simultaneously, which are expensive to collect. In this paper, we propose a novel GAN, namely IntersectGAN, that learns multiple attributes from different image domains through an intersecting architecture. For example, given two image domains X1 and X2, each with a certain attribute, the intersection X1 ∩ X2 denotes a new domain whose images possess the attributes of both X1 and X2. The proposed IntersectGAN consists of two discriminators, D1 and D2, which distinguish between generated and real samples of the respective domains, and three generators, of which the intersection generator is trained against both discriminators; an overall adversarial loss function is defined over the three generators. Consequently, IntersectGAN can be trained on multiple domains, each presenting one specific attribute, eliminating the need for real sample images that simultaneously possess multiple attributes. Using the CelebFaces Attributes dataset, IntersectGAN produces high-quality face images possessing multiple attributes (e.g., a face with black hair and a smiling expression). Both qualitative and quantitative evaluations are conducted to compare IntersectGAN with baseline methods. In addition, several applications of IntersectGAN have been explored with promising results.
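Based only on the architecture described in the abstract (generators G1, G2, and an intersection generator G∩; discriminators D1, D2, one per domain), one plausible reading of the overall adversarial objective is sketched below. The per-pair loss L_adv and the additive decomposition are assumptions for illustration; the exact objective is defined in the full paper.

```latex
% Hedged sketch: each term is a standard GAN minimax loss. The
% intersection generator G_\cap is trained against BOTH discriminators,
% so its samples must look real in both domains X_1 and X_2.
\mathcal{L}_{\mathrm{adv}}(G_k, D_i)
  = \mathbb{E}_{x \sim p_{X_i}}\bigl[\log D_i(x)\bigr]
  + \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D_i(G_k(z))\bigr)\bigr]

\min_{G_1,\, G_2,\, G_\cap}\; \max_{D_1,\, D_2}\;
    \mathcal{L}_{\mathrm{adv}}(G_1, D_1)
  + \mathcal{L}_{\mathrm{adv}}(G_2, D_2)
  + \mathcal{L}_{\mathrm{adv}}(G_\cap, D_1)
  + \mathcal{L}_{\mathrm{adv}}(G_\cap, D_2)
```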
Persistent Identifier: http://hdl.handle.net/10722/321858
ISI Accession Number ID: WOS:000509743400227

 

DC Field: Value
dc.contributor.author: Yao, Zehui
dc.contributor.author: Zhang, Boyan
dc.contributor.author: Wang, Zhiyong
dc.contributor.author: Ouyang, Wanli
dc.contributor.author: Xu, Dong
dc.contributor.author: Feng, Dagan
dc.date.accessioned: 2022-11-03T02:21:55Z
dc.date.available: 2022-11-03T02:21:55Z
dc.date.issued: 2019
dc.identifier.citation: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia, 2019, p. 1842-1850
dc.identifier.uri: http://hdl.handle.net/10722/321858
dc.language: eng
dc.relation.ispartof: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
dc.subject: Deep learning
dc.subject: Generative adversarial networks
dc.subject: Image generation
dc.title: IntersectGAN: Learning domain intersection for generating images with multiple attributes
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3343031.3350908
dc.identifier.scopus: eid_2-s2.0-85074827754
dc.identifier.spage: 1842
dc.identifier.epage: 1850
dc.identifier.isi: WOS:000509743400227
