File Download

There are no files associated with this item.

Links for fulltext (may require subscription):
- Publisher Website: 10.1145/3658140
- Scopus: eid_2-s2.0-85199193893

Citations:
- Scopus: 0
Article: Interactive Character Control with Auto-Regressive Motion Diffusion Models
Field | Value
---|---
Title | Interactive Character Control with Auto-Regressive Motion Diffusion Models
Authors | Shi, Yi; Wang, Jingbo; Jiang, Xuekun; Lin, Bingkun; Dai, Bo; Peng, Xue Bin
Keywords | diffusion model; motion synthesis; reinforcement learning
Issue Date | 2024
Citation | ACM Transactions on Graphics, 2024, v. 43, n. 4, article no. 143
Abstract | Real-time character control is an essential component for interactive experiences, with a broad range of applications, including physics simulations, video games, and virtual reality. The success of diffusion models for image synthesis has led to the use of these models for motion synthesis. However, the majority of these motion diffusion models are primarily designed for offline applications, where space-time models are used to synthesize an entire sequence of frames simultaneously with a pre-specified length. To enable real-time motion synthesis with a diffusion model that allows time-varying controls, we propose A-MDM (Auto-regressive Motion Diffusion Model). Our conditional diffusion model takes an initial pose as input, and auto-regressively generates successive motion frames conditioned on the previous frame. Despite its streamlined network architecture, which uses simple MLPs, our framework is capable of generating diverse, long-horizon, and high-fidelity motion sequences. Furthermore, we introduce a suite of techniques for incorporating interactive controls into A-MDM, such as task-oriented sampling, in-painting, and hierarchical reinforcement learning (see Figure 1). These techniques enable a pre-trained A-MDM to be efficiently adapted for a variety of new downstream tasks. We conduct a comprehensive suite of experiments to demonstrate the effectiveness of A-MDM, and compare its performance against state-of-the-art auto-regressive methods.
Persistent Identifier | http://hdl.handle.net/10722/352450
ISSN | 0730-0301 (2023 Impact Factor: 7.8; 2023 SCImago Journal Rankings: 7.766)
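The abstract describes generating motion one frame at a time: each step runs a reverse diffusion chain, conditioned on the previous pose, to sample the next pose. The following is a minimal, illustrative sketch of that auto-regressive sampling loop using a DDPM-style reverse process; the `denoiser` here is a toy stand-in (a real A-MDM would use a trained MLP network), and all names and dimensions are hypothetical, not from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x_t, prev_frame, t):
    # Toy stand-in for the paper's trained MLP noise predictor: it is
    # conditioned on the previous frame and simply pulls the noisy
    # sample toward it. A real model would be learned from motion data.
    return x_t - prev_frame

def sample_next_frame(prev_frame, n_steps=50, pose_dim=8):
    """One auto-regressive step: run a DDPM-style reverse diffusion chain
    to draw the next pose, conditioned on the previous pose."""
    betas = np.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(pose_dim)            # start from pure noise
    for t in reversed(range(n_steps)):
        eps = denoiser(x, prev_frame, t)         # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                                # add noise except at the last step
            x += np.sqrt(betas[t]) * rng.standard_normal(pose_dim)
    return x

def rollout(init_pose, n_frames=4):
    """Generate a short motion sequence frame by frame, each frame
    conditioned on the one before it."""
    frames = [init_pose]
    for _ in range(n_frames):
        frames.append(sample_next_frame(frames[-1]))
    return np.stack(frames)

motion = rollout(np.zeros(8))
print(motion.shape)  # (5, 8): the initial pose plus 4 generated frames
```

Because each frame depends only on its predecessor, sequences of arbitrary length can be generated online, which is what makes this formulation suitable for the real-time, time-varying control the paper targets.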
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shi, Yi | - |
dc.contributor.author | Wang, Jingbo | - |
dc.contributor.author | Jiang, Xuekun | - |
dc.contributor.author | Lin, Bingkun | - |
dc.contributor.author | Dai, Bo | - |
dc.contributor.author | Peng, Xue Bin | - |
dc.date.accessioned | 2024-12-16T03:59:04Z | - |
dc.date.available | 2024-12-16T03:59:04Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | ACM Transactions on Graphics, 2024, v. 43, n. 4, article no. 143 | - |
dc.identifier.issn | 0730-0301 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352450 | - |
dc.description.abstract | Real-time character control is an essential component for interactive experiences, with a broad range of applications, including physics simulations, video games, and virtual reality. The success of diffusion models for image synthesis has led to the use of these models for motion synthesis. However, the majority of these motion diffusion models are primarily designed for offline applications, where space-time models are used to synthesize an entire sequence of frames simultaneously with a pre-specified length. To enable real-time motion synthesis with a diffusion model that allows time-varying controls, we propose A-MDM (Auto-regressive Motion Diffusion Model). Our conditional diffusion model takes an initial pose as input, and auto-regressively generates successive motion frames conditioned on the previous frame. Despite its streamlined network architecture, which uses simple MLPs, our framework is capable of generating diverse, long-horizon, and high-fidelity motion sequences. Furthermore, we introduce a suite of techniques for incorporating interactive controls into A-MDM, such as task-oriented sampling, in-painting, and hierarchical reinforcement learning (see Figure 1). These techniques enable a pre-trained A-MDM to be efficiently adapted for a variety of new downstream tasks. We conduct a comprehensive suite of experiments to demonstrate the effectiveness of A-MDM, and compare its performance against state-of-the-art auto-regressive methods. | -
dc.language | eng | - |
dc.relation.ispartof | ACM Transactions on Graphics | - |
dc.subject | diffusion model | - |
dc.subject | motion synthesis | - |
dc.subject | reinforcement learning | - |
dc.title | Interactive Character Control with Auto-Regressive Motion Diffusion Models | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3658140 | - |
dc.identifier.scopus | eid_2-s2.0-85199193893 | - |
dc.identifier.volume | 43 | - |
dc.identifier.issue | 4 | - |
dc.identifier.spage | article no. 143 | - |
dc.identifier.epage | article no. 143 | - |
dc.identifier.eissn | 1557-7368 | - |