
Conference Paper: Multi-identity Human Image Animation with Structural Video Diffusion

Title: Multi-identity Human Image Animation with Structural Video Diffusion
Authors: Wang, Zhenzhi; Li, Yixuan; Zeng, Yanhong; Guo, Yuwei; Lin, Dahua; Xue, Tianfan; Dai, Bo
Issue Date: 26-Jun-2025
Abstract

Generating human videos from a single image while ensuring high visual quality and precise control is a challenging task, especially in complex scenarios involving multiple individuals and interactions with objects. Existing methods, while effective for single-human cases, often fail to handle the intricacies of multi-identity interactions because they struggle to associate the correct pairs of human appearance and pose conditions and to model the distribution of 3D-aware dynamics. To address these limitations, we present Structural Video Diffusion, a novel framework designed for generating realistic multi-human videos. Our approach introduces two core innovations: identity-specific embeddings to maintain consistent appearances across individuals, and a structural learning mechanism that incorporates depth and surface-normal cues to model human-object interactions. Additionally, we expand existing human video datasets with 25K new videos featuring diverse multi-human and object interaction scenarios, providing a robust foundation for training. Experimental results demonstrate that Structural Video Diffusion achieves superior performance in generating lifelike, coherent videos for multiple subjects with dynamic and rich interactions, advancing the state of human-centric video generation.
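As a rough illustration of the identity-specific-embedding idea described in the abstract (not the paper's actual implementation, whose architecture and shapes are unspecified here), each person can be assigned an appearance embedding that is fused with exactly the pose features belonging to that person, so appearance-pose pairs are never mixed up across identities. A minimal NumPy sketch, with all function names and tensor shapes hypothetical:

```python
import numpy as np

def fuse_identity_and_pose(appearance_emb, pose_feats, identity_ids):
    """Associate each pose feature with its identity's appearance embedding.

    appearance_emb: (num_identities, dim) -- one embedding per person
    pose_feats:     (num_pose_tokens, dim) -- pose conditioning features
    identity_ids:   (num_pose_tokens,)     -- which person owns each pose token
    """
    # Gather the correct appearance vector for every pose token,
    # then fuse by simple addition (a common conditioning scheme).
    gathered = appearance_emb[identity_ids]  # (num_pose_tokens, dim)
    return pose_feats + gathered

rng = np.random.default_rng(0)
app = rng.normal(size=(2, 4))    # two identities
poses = rng.normal(size=(3, 4))  # three pose tokens
ids = np.array([0, 1, 0])        # tokens 0 and 2 belong to identity 0
fused = fuse_identity_and_pose(app, poses, ids)
```

The addition here stands in for whatever fusion the model actually uses (e.g. cross-attention); the point is only that the identity index, not feature similarity, determines which appearance conditions which pose stream.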


Persistent Identifier: http://hdl.handle.net/10722/359011

DC Field | Value | Language
dc.contributor.author | Wang, Zhenzhi | -
dc.contributor.author | Li, Yixuan | -
dc.contributor.author | Zeng, Yanhong | -
dc.contributor.author | Guo, Yuwei | -
dc.contributor.author | Lin, Dahua | -
dc.contributor.author | Xue, Tianfan | -
dc.contributor.author | Dai, Bo | -
dc.date.accessioned | 2025-08-19T00:32:05Z | -
dc.date.available | 2025-08-19T00:32:05Z | -
dc.date.issued | 2025-06-26 | -
dc.identifier.uri | http://hdl.handle.net/10722/359011 | -
dc.description.abstract | Generating human videos from a single image while ensuring high visual quality and precise control is a challenging task, especially in complex scenarios involving multiple individuals and interactions with objects. Existing methods, while effective for single-human cases, often fail to handle the intricacies of multi-identity interactions because they struggle to associate the correct pairs of human appearance and pose conditions and to model the distribution of 3D-aware dynamics. To address these limitations, we present Structural Video Diffusion, a novel framework designed for generating realistic multi-human videos. Our approach introduces two core innovations: identity-specific embeddings to maintain consistent appearances across individuals, and a structural learning mechanism that incorporates depth and surface-normal cues to model human-object interactions. Additionally, we expand existing human video datasets with 25K new videos featuring diverse multi-human and object interaction scenarios, providing a robust foundation for training. Experimental results demonstrate that Structural Video Diffusion achieves superior performance in generating lifelike, coherent videos for multiple subjects with dynamic and rich interactions, advancing the state of human-centric video generation. | -
dc.language | eng | -
dc.relation.ispartof | International Conference on Computer Vision (ICCV) (19/10/2025-23/10/2025, Honolulu, Hawai'i) | -
dc.title | Multi-identity Human Image Animation with Structural Video Diffusion | -
dc.type | Conference_Paper | -
