Article: Interactive Character Control with Auto-Regressive Motion Diffusion Models

Title: Interactive Character Control with Auto-Regressive Motion Diffusion Models
Authors: Shi, Yi; Wang, Jingbo; Jiang, Xuekun; Lin, Bingkun; Dai, Bo; Peng, Xue Bin
Keywords: diffusion model; motion synthesis; reinforcement learning
Issue Date: 2024
Citation: ACM Transactions on Graphics, 2024, v. 43, n. 4, article no. 143
Abstract: Real-time character control is an essential component of interactive experiences, with a broad range of applications, including physics simulations, video games, and virtual reality. The success of diffusion models for image synthesis has led to the use of these models for motion synthesis. However, the majority of these motion diffusion models are designed primarily for offline applications, where space-time models synthesize an entire sequence of frames of a pre-specified length simultaneously. To enable real-time motion synthesis with a diffusion model that allows time-varying controls, we propose A-MDM (Auto-regressive Motion Diffusion Model). Our conditional diffusion model takes an initial pose as input and auto-regressively generates successive motion frames conditioned on the previous frame. Despite its streamlined network architecture, which uses simple MLPs, our framework is capable of generating diverse, long-horizon, and high-fidelity motion sequences. Furthermore, we introduce a suite of techniques for incorporating interactive controls into A-MDM, such as task-oriented sampling, in-painting, and hierarchical reinforcement learning (see Figure 1). These techniques enable a pre-trained A-MDM to be efficiently adapted to a variety of new downstream tasks. We conduct a comprehensive suite of experiments to demonstrate the effectiveness of A-MDM and compare its performance against state-of-the-art auto-regressive methods.
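The abstract's per-frame generation scheme, where each new frame is produced by a full reverse-diffusion pass conditioned on the previous frame, can be sketched as follows. This is a toy illustration only, not the paper's implementation: the denoiser here is a hypothetical stand-in for the trained MLP, and the pose dimension, step count, and noise schedule are invented for the sketch.

```python
import numpy as np

T = 10        # diffusion steps per frame (toy value)
POSE_DIM = 6  # toy pose dimension

def denoise_step(x_t, prev_frame, t, rng):
    # Hypothetical stand-in for the trained MLP denoiser: it nudges the noisy
    # frame toward the conditioning (previous) frame, then re-injects noise
    # scaled down as t -> 0, mimicking a reverse-diffusion update.
    pred_x0 = 0.9 * prev_frame + 0.1 * x_t
    noise_scale = t / T
    return pred_x0 + noise_scale * rng.standard_normal(x_t.shape)

def generate(initial_pose, num_frames, seed=0):
    rng = np.random.default_rng(seed)
    frames = [initial_pose]
    for _ in range(num_frames):
        x = rng.standard_normal(POSE_DIM)  # each frame starts from pure noise
        for t in range(T, 0, -1):          # full reverse-diffusion pass
            x = denoise_step(x, frames[-1], t, rng)
        frames.append(x)                   # next frame conditions on this one
    return np.stack(frames)

motion = generate(np.zeros(POSE_DIM), num_frames=5)
print(motion.shape)  # (6, 6): initial pose plus 5 generated frames
```

Because every frame depends only on its predecessor, the loop can run indefinitely and be steered frame-by-frame, which is what enables the time-varying interactive controls described above.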
Persistent Identifier: http://hdl.handle.net/10722/352450
ISSN: 0730-0301
2023 Impact Factor: 7.8
2023 SCImago Journal Rankings: 7.766

 

DC Field | Value | Language
dc.contributor.author | Shi, Yi | -
dc.contributor.author | Wang, Jingbo | -
dc.contributor.author | Jiang, Xuekun | -
dc.contributor.author | Lin, Bingkun | -
dc.contributor.author | Dai, Bo | -
dc.contributor.author | Peng, Xue Bin | -
dc.date.accessioned | 2024-12-16T03:59:04Z | -
dc.date.available | 2024-12-16T03:59:04Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | ACM Transactions on Graphics, 2024, v. 43, n. 4, article no. 143 | -
dc.identifier.issn | 0730-0301 | -
dc.identifier.uri | http://hdl.handle.net/10722/352450 | -
dc.description.abstract | Real-time character control is an essential component of interactive experiences, with a broad range of applications, including physics simulations, video games, and virtual reality. The success of diffusion models for image synthesis has led to the use of these models for motion synthesis. However, the majority of these motion diffusion models are designed primarily for offline applications, where space-time models synthesize an entire sequence of frames of a pre-specified length simultaneously. To enable real-time motion synthesis with a diffusion model that allows time-varying controls, we propose A-MDM (Auto-regressive Motion Diffusion Model). Our conditional diffusion model takes an initial pose as input and auto-regressively generates successive motion frames conditioned on the previous frame. Despite its streamlined network architecture, which uses simple MLPs, our framework is capable of generating diverse, long-horizon, and high-fidelity motion sequences. Furthermore, we introduce a suite of techniques for incorporating interactive controls into A-MDM, such as task-oriented sampling, in-painting, and hierarchical reinforcement learning (see Figure 1). These techniques enable a pre-trained A-MDM to be efficiently adapted to a variety of new downstream tasks. We conduct a comprehensive suite of experiments to demonstrate the effectiveness of A-MDM and compare its performance against state-of-the-art auto-regressive methods. | -
dc.language | eng | -
dc.relation.ispartof | ACM Transactions on Graphics | -
dc.subject | diffusion model | -
dc.subject | motion synthesis | -
dc.subject | reinforcement learning | -
dc.title | Interactive Character Control with Auto-Regressive Motion Diffusion Models | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1145/3658140 | -
dc.identifier.scopus | eid_2-s2.0-85199193893 | -
dc.identifier.volume | 43 | -
dc.identifier.issue | 4 | -
dc.identifier.spage | article no. 143 | -
dc.identifier.epage | article no. 143 | -
dc.identifier.eissn | 1557-7368 | -
