File Download
There are no files associated with this item.

Links for fulltext (may require subscription):
- Publisher Website: 10.1007/978-3-031-72640-8_22
- Scopus: eid_2-s2.0-85209773532

Citations:
- Scopus: 0
Conference Paper: MotionLCM: Real-Time Controllable Motion Generation via Latent Consistency Model
Title | MotionLCM: Real-Time Controllable Motion Generation via Latent Consistency Model |
---|---|
Authors | Dai, Wenxun; Chen, Ling Hao; Wang, Jingbo; Liu, Jinpeng; Dai, Bo; Tang, Yansong |
Keywords | Consistency Model; Real-time Control; Text-to-Motion |
Issue Date | 2025 |
Citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2025, v. 15074 LNCS, p. 390-408 |
Abstract | This work introduces MotionLCM, extending controllable motion generation to a real-time level. Existing methods for spatial-temporal control in text-conditioned motion generation suffer from significant runtime inefficiency. To address this issue, we first propose the motion latent consistency model (MotionLCM) for motion generation, building upon the latent diffusion model [9]. By adopting one-step (or few-step) inference, we further improve the runtime efficiency of the motion latent diffusion model for motion generation. To ensure effective controllability, we incorporate a motion ControlNet within the latent space of MotionLCM and enable explicit control signals (e.g., initial poses) in the vanilla motion space to control the generation process directly, similar to controlling other latent-free diffusion models [29, 73] for motion generation. By employing these techniques, our approach can generate human motions with text and control signals in real-time. Experimental results demonstrate the remarkable generation and controlling capabilities of MotionLCM while maintaining real-time runtime efficiency. |
Persistent Identifier | http://hdl.handle.net/10722/352486 |
ISSN | 0302-9743 |
2023 SCImago Journal Rankings | 0.606 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Dai, Wenxun | - |
dc.contributor.author | Chen, Ling Hao | - |
dc.contributor.author | Wang, Jingbo | - |
dc.contributor.author | Liu, Jinpeng | - |
dc.contributor.author | Dai, Bo | - |
dc.contributor.author | Tang, Yansong | - |
dc.date.accessioned | 2024-12-16T03:59:24Z | - |
dc.date.available | 2024-12-16T03:59:24Z | - |
dc.date.issued | 2025 | - |
dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2025, v. 15074 LNCS, p. 390-408 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352486 | - |
dc.description.abstract | This work introduces MotionLCM, extending controllable motion generation to a real-time level. Existing methods for spatial-temporal control in text-conditioned motion generation suffer from significant runtime inefficiency. To address this issue, we first propose the motion latent consistency model (MotionLCM) for motion generation, building upon the latent diffusion model [9]. By adopting one-step (or few-step) inference, we further improve the runtime efficiency of the motion latent diffusion model for motion generation. To ensure effective controllability, we incorporate a motion ControlNet within the latent space of MotionLCM and enable explicit control signals (e.g., initial poses) in the vanilla motion space to control the generation process directly, similar to controlling other latent-free diffusion models [29, 73] for motion generation. By employing these techniques, our approach can generate human motions with text and control signals in real-time. Experimental results demonstrate the remarkable generation and controlling capabilities of MotionLCM while maintaining real-time runtime efficiency. | - |
dc.language | eng | - |
dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
dc.subject | Consistency Model | - |
dc.subject | Real-time Control | - |
dc.subject | Text-to-Motion | - |
dc.title | MotionLCM: Real-Time Controllable Motion Generation via Latent Consistency Model | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/978-3-031-72640-8_22 | - |
dc.identifier.scopus | eid_2-s2.0-85209773532 | - |
dc.identifier.volume | 15074 LNCS | - |
dc.identifier.spage | 390 | - |
dc.identifier.epage | 408 | - |
dc.identifier.eissn | 1611-3349 | - |