Article: Talking face generation by adversarially disentangled audio-visual representation

Title: Talking face generation by adversarially disentangled audio-visual representation
Authors: Zhou, H; Liu, Y; Liu, Z; Luo, P; Wang, X
Issue Date: 2019
Publisher: AAAI Press. The Journal's web site is located at https://aaai.org/Library/AAAI/aaai-library.php
Citation: Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI-19), Honolulu, Hawaii, USA, 27 January – 1 February 2019, v. 33 n. 1, p. 9299-9306
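For convenience, the citation data in this record can be assembled into a BibTeX entry (the entry key is an assumption; all field values are taken from the record itself, with author names kept as the initials given here):

```bibtex
@inproceedings{zhou2019talking,
  title     = {Talking Face Generation by Adversarially Disentangled Audio-Visual Representation},
  author    = {Zhou, H. and Liu, Y. and Liu, Z. and Luo, P. and Wang, X.},
  booktitle = {Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI-19)},
  year      = {2019},
  volume    = {33},
  number    = {1},
  pages     = {9299--9306},
  doi       = {10.1609/aaai.v33i01.33019299},
  publisher = {AAAI Press},
  address   = {Honolulu, Hawaii, USA}
}
```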
Abstract: Talking face generation aims to synthesize a sequence of face images that correspond to a clip of speech. This is a challenging task because face appearance variation and the semantics of speech are coupled together in the subtle movements of the talking face regions. Existing works either construct face appearance models for specific subjects or model the transformation between lip motion and speech. In this work, we integrate both aspects and enable arbitrary-subject talking face generation by learning a disentangled audio-visual representation. We find that a talking face sequence is actually a composition of both subject-related information and speech-related information. These two spaces are then explicitly disentangled through a novel associative-and-adversarial training process. This disentangled representation has the advantage that both audio and video can serve as inputs for generation. Extensive experiments show that the proposed approach generates realistic talking face sequences for arbitrary subjects with much clearer lip motion patterns than previous work. We also demonstrate that the learned audio-visual representation is extremely useful for the tasks of automatic lip reading and audio-video retrieval.
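The abstract describes splitting each talking-face frame into a subject-identity code and a speech-content code, so that the two can be recombined freely. A minimal toy sketch of that composition idea (pure NumPy with made-up names and dimensions; the paper's actual encoders are neural networks trained adversarially, not random projections):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's learned encoders: in the actual method these
# are networks trained with an associative-and-adversarial objective so that
# one code carries subject identity and the other carries speech content.
# Here they are fixed random projections, for illustration only.
D_IN, D_ID, D_SP = 16, 4, 4
W_id = rng.normal(size=(D_ID, D_IN))  # "identity" encoder (hypothetical)
W_sp = rng.normal(size=(D_SP, D_IN))  # "speech content" encoder (hypothetical)

def encode(frame):
    """Split a talking-face frame into (identity_code, speech_code)."""
    return W_id @ frame, W_sp @ frame

def generate(identity_code, speech_code):
    """Toy 'decoder': because the two codes are disentangled, any subject's
    identity code can be paired with speech content taken from either audio
    or another video, enabling arbitrary-subject generation."""
    return np.concatenate([identity_code, speech_code])

frame_a = rng.normal(size=D_IN)  # subject A speaking
frame_b = rng.normal(size=D_IN)  # subject B speaking

id_a, _sp_a = encode(frame_a)
_id_b, sp_b = encode(frame_b)

# Swap codes: subject A's face, driven by subject B's speech content.
out = generate(id_a, sp_b)
assert out.shape == (D_ID + D_SP,)
```

In the paper the disentanglement is enforced by adversarial training rather than by construction; the sketch only illustrates why a disentangled representation lets either audio or video drive generation for an arbitrary subject.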
Description: AAAI Technical Track: Vision
Persistent Identifier: http://hdl.handle.net/10722/284260
ISSN: 2159-5399

 

DC Field: Value
dc.contributor.author: Zhou, H
dc.contributor.author: Liu, Y
dc.contributor.author: Liu, Z
dc.contributor.author: Luo, P
dc.contributor.author: Wang, X
dc.date.accessioned: 2020-07-20T05:57:19Z
dc.date.available: 2020-07-20T05:57:19Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI-19), Honolulu, Hawaii, USA, 27 January – 1 February 2019, v. 33 n. 1, p. 9299-9306
dc.identifier.issn: 2159-5399
dc.identifier.uri: http://hdl.handle.net/10722/284260
dc.description: AAAI Technical Track: Vision
dc.description.abstract: Talking face generation aims to synthesize a sequence of face images that correspond to a clip of speech. This is a challenging task because face appearance variation and semantics of speech are coupled together in the subtle movements of the talking face regions. Existing works either construct specific face appearance model on specific subjects or model the transformation between lip motion and speech. In this work, we integrate both aspects and enable arbitrary-subject talking face generation by learning disentangled audio-visual representation. We find that the talking face sequence is actually a composition of both subject-related information and speech-related information. These two spaces are then explicitly disentangled through a novel associative-and-adversarial training process. This disentangled representation has an advantage where both audio and video can serve as inputs for generation. Extensive experiments show that the proposed approach generates realistic talking face sequences on arbitrary subjects with much clearer lip motion patterns than previous work. We also demonstrate the learned audio-visual representation is extremely useful for the tasks of automatic lip reading and audio-video retrieval.
dc.language: eng
dc.publisher: AAAI Press. The Journal's web site is located at https://aaai.org/Library/AAAI/aaai-library.php
dc.relation.ispartof: Proceedings of the AAAI Conference on Artificial Intelligence
dc.rights: Copyright (c) 2019 Association for the Advancement of Artificial Intelligence
dc.title: Talking face generation by adversarially disentangled audio-visual representation
dc.type: Article
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.description.nature: link_to_OA_fulltext
dc.identifier.doi: 10.1609/aaai.v33i01.33019299
dc.identifier.hkuros: 311003
dc.identifier.volume: 33
dc.identifier.issue: 1
dc.identifier.spage: 9299
dc.identifier.epage: 9306
dc.publisher.place: United States
