Article: Categorical Codebook Matching for Embodied Character Controllers
| Title | Categorical Codebook Matching for Embodied Character Controllers |
|---|---|
| Authors | Starke, Sebastian; Starke, Paul; He, Nicky; Komura, Taku; Ye, Yuting |
| Keywords | character animation; character control; character interactions; deep learning; human motion; neural networks |
| Issue Date | 19-Jul-2024 |
| Publisher | Association for Computing Machinery (ACM) |
| Citation | ACM Transactions on Graphics, 2024, v. 43, n. 4 |
| Abstract | Translating motions from a real user onto a virtual embodied avatar is a key challenge for character animation in the metaverse. In this work, we present a novel generative framework that enables mapping from a set of sparse sensor signals to a full body avatar motion in real-time while faithfully preserving the motion context of the user. In contrast to existing techniques that require training a motion prior and its mapping from control to motion separately, our framework is able to learn the motion manifold as well as how to sample from it at the same time in an end-to-end manner. To achieve that, we introduce a technique called codebook matching which matches the probability distribution between two categorical codebooks for the inputs and outputs for synthesizing the character motions. We demonstrate this technique can successfully handle ambiguity in motion generation and produce high quality character controllers from unstructured motion capture data. Our method is especially useful for interactive applications like virtual reality or video games where high accuracy and responsiveness are needed. |
| Persistent Identifier | http://hdl.handle.net/10722/362417 |
| ISSN | 0730-0301 (2023 Impact Factor: 7.8; 2023 SCImago Journal Rankings: 7.766) |
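The abstract's central idea is matching the probability distributions of two categorical codebooks, one scored from the control (sensor) inputs and one from the motion outputs, so that the control side alone can index plausible motion codes at runtime. The following is a minimal, illustrative sketch of that distribution-matching idea, not the authors' implementation; the codebook size, the KL matching term, and all variable names here are assumptions for the toy example.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a categorical probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q) between two categorical distributions; zero when they match.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical toy setup: K shared codebook entries, scored independently by
# a control-side and a motion-side network (random logits stand in for both).
# "Codebook matching" is sketched here as a KL term that would drive the two
# categorical distributions toward each other during end-to-end training.
K = 8
random.seed(0)
control_logits = [random.gauss(0.0, 1.0) for _ in range(K)]
motion_logits = [random.gauss(0.0, 1.0) for _ in range(K)]

p_control = softmax(control_logits)
p_motion = softmax(motion_logits)
matching_loss = kl_divergence(p_control, p_motion)

# At inference time only the control distribution is available, and it
# selects a codebook entry that indexes the synthesized motion.
selected = max(range(K), key=lambda k: p_control[k])
```

In this sketch, minimizing `matching_loss` would align the two distributions, which is what lets the sparse control signal stand in for the full motion when sampling from the codebook.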
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Starke, Sebastian | - |
| dc.contributor.author | Starke, Paul | - |
| dc.contributor.author | He, Nicky | - |
| dc.contributor.author | Komura, Taku | - |
| dc.contributor.author | Ye, Yuting | - |
| dc.date.accessioned | 2025-09-24T00:51:23Z | - |
| dc.date.available | 2025-09-24T00:51:23Z | - |
| dc.date.issued | 2024-07-19 | - |
| dc.identifier.citation | ACM Transactions on Graphics, 2024, v. 43, n. 4 | - |
| dc.identifier.issn | 0730-0301 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362417 | - |
| dc.description.abstract | Translating motions from a real user onto a virtual embodied avatar is a key challenge for character animation in the metaverse. In this work, we present a novel generative framework that enables mapping from a set of sparse sensor signals to a full body avatar motion in real-time while faithfully preserving the motion context of the user. In contrast to existing techniques that require training a motion prior and its mapping from control to motion separately, our framework is able to learn the motion manifold as well as how to sample from it at the same time in an end-to-end manner. To achieve that, we introduce a technique called codebook matching which matches the probability distribution between two categorical codebooks for the inputs and outputs for synthesizing the character motions. We demonstrate this technique can successfully handle ambiguity in motion generation and produce high quality character controllers from unstructured motion capture data. Our method is especially useful for interactive applications like virtual reality or video games where high accuracy and responsiveness are needed. | - |
| dc.language | eng | - |
| dc.publisher | Association for Computing Machinery (ACM) | - |
| dc.relation.ispartof | ACM Transactions on Graphics | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | character animation | - |
| dc.subject | character control | - |
| dc.subject | character interactions | - |
| dc.subject | deep learning | - |
| dc.subject | human motion | - |
| dc.subject | neural networks | - |
| dc.title | Categorical Codebook Matching for Embodied Character Controllers | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1145/3658209 | - |
| dc.identifier.scopus | eid_2-s2.0-85199366679 | - |
| dc.identifier.volume | 43 | - |
| dc.identifier.issue | 4 | - |
| dc.identifier.eissn | 1557-7368 | - |
| dc.identifier.issnl | 0730-0301 | - |
