Conference Paper: Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video

Title: Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video
Authors: Wu, Xiuzhe; Hu, Pengfei; Wu, Yang; Lyu, Xiaoyang; Cao, Yan-Pei; Shan, Ying; Yang, Wenming; Sun, Zhongqian; Qi, Xiaojuan
Issue Date: 2-Oct-2023
Abstract

Synthesizing realistic videos from a given speech signal is still an open challenge. Previous works have been plagued by issues such as inaccurate lip shape generation and poor image quality. The key reason is that only motions and appearances in limited facial areas (e.g., the lip area) are mainly driven by the input speech. Therefore, directly learning a mapping function from speech to the entire head image is prone to ambiguity, particularly when using a short video for training. We thus propose a decomposition-synthesis-composition framework named Speech to Lip (Speech2Lip) that disentangles speech-sensitive and speech-insensitive motion/appearance to facilitate effective learning from limited training data, resulting in the generation of natural-looking videos. First, given a fixed head pose (i.e., the canonical space), we present a speech-driven implicit model for lip image generation, which concentrates on learning speech-sensitive motion and appearance. Next, to model the major speech-insensitive motion (i.e., head movement), we introduce a geometry-aware mutual explicit mapping (GAMEM) module that establishes geometric mappings between different head poses. This allows us to paste lip images generated in the canonical space onto head images with arbitrary poses and to synthesize talking videos with natural head movements. In addition, a Blend-Net and a contrastive sync loss are introduced to enhance the overall synthesis performance. Quantitative and qualitative results on three benchmarks demonstrate that our model can be trained on a video of just a few minutes in length and achieve state-of-the-art performance in both visual quality and speech-visual synchronization. Code: https://github.com/CVMI-Lab/Speech2Lip.
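
The contrastive sync loss mentioned in the abstract is not specified further in this record. As one concrete illustration, it could be realized as a SyncNet-style InfoNCE objective that pulls together embeddings of an audio window and the lip crop rendered for the same time step, and pushes apart mismatched pairs within a batch. The sketch below assumes that formulation; ContrastiveSyncLoss, audio_encoder, and lip_encoder are hypothetical names, not taken from the paper or its repository.

```python
# A minimal sketch of a contrastive (InfoNCE-style) audio-visual sync loss.
# Assumption: the paper's exact loss is not given in this record; this is a
# generic SyncNet-like formulation, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveSyncLoss(nn.Module):
    """Pulls together audio/lip embeddings from the same time step and
    pushes apart off-sync pairs within the batch."""

    def __init__(self, temperature: float = 0.07):
        super().__init__()
        self.temperature = temperature

    def forward(self, audio_emb: torch.Tensor, lip_emb: torch.Tensor) -> torch.Tensor:
        # audio_emb, lip_emb: (B, D) features for B temporally aligned pairs.
        a = F.normalize(audio_emb, dim=-1)
        v = F.normalize(lip_emb, dim=-1)
        logits = a @ v.t() / self.temperature  # (B, B) cosine-similarity matrix
        targets = torch.arange(a.size(0), device=a.device)
        # Symmetric cross-entropy: audio-to-lip and lip-to-audio matching.
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))


# Hypothetical usage with audio/lip feature encoders (names assumed):
# loss_sync = ContrastiveSyncLoss()(audio_encoder(mel_window), lip_encoder(lip_crop))
```

In a pipeline like the one summarized above, such a term would be added to the image-reconstruction losses so that generated lip shapes are penalized when they drift out of sync with the driving speech.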


Persistent Identifier: http://hdl.handle.net/10722/340973

 

DC Field | Value | Language
dc.contributor.author | Wu, Xiuzhe | -
dc.contributor.author | Hu, Pengfei | -
dc.contributor.author | Wu, Yang | -
dc.contributor.author | Lyu, Xiaoyang | -
dc.contributor.author | Cao, Yan-Pei | -
dc.contributor.author | Shan, Ying | -
dc.contributor.author | Yang, Wenming | -
dc.contributor.author | Sun, Zhongqian | -
dc.contributor.author | Qi, Xiaojuan | -
dc.date.accessioned | 2024-03-11T10:48:44Z | -
dc.date.available | 2024-03-11T10:48:44Z | -
dc.date.issued | 2023-10-02 | -
dc.identifier.uri | http://hdl.handle.net/10722/340973 | -
dc.description.abstract | Synthesizing realistic videos from a given speech signal is still an open challenge. Previous works have been plagued by issues such as inaccurate lip shape generation and poor image quality. The key reason is that only motions and appearances in limited facial areas (e.g., the lip area) are mainly driven by the input speech. Therefore, directly learning a mapping function from speech to the entire head image is prone to ambiguity, particularly when using a short video for training. We thus propose a decomposition-synthesis-composition framework named Speech to Lip (Speech2Lip) that disentangles speech-sensitive and speech-insensitive motion/appearance to facilitate effective learning from limited training data, resulting in the generation of natural-looking videos. First, given a fixed head pose (i.e., the canonical space), we present a speech-driven implicit model for lip image generation, which concentrates on learning speech-sensitive motion and appearance. Next, to model the major speech-insensitive motion (i.e., head movement), we introduce a geometry-aware mutual explicit mapping (GAMEM) module that establishes geometric mappings between different head poses. This allows us to paste lip images generated in the canonical space onto head images with arbitrary poses and to synthesize talking videos with natural head movements. In addition, a Blend-Net and a contrastive sync loss are introduced to enhance the overall synthesis performance. Quantitative and qualitative results on three benchmarks demonstrate that our model can be trained on a video of just a few minutes in length and achieve state-of-the-art performance in both visual quality and speech-visual synchronization. Code: https://github.com/CVMI-Lab/Speech2Lip. | -
dc.language | eng | -
dc.relation.ispartof | IEEE International Conference on Computer Vision 2023 (02/10/2023-06/10/2023, Paris) | -
dc.title | Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video | -
dc.type | Conference_Paper | -
