Article: Categorical Codebook Matching for Embodied Character Controllers

Title: Categorical Codebook Matching for Embodied Character Controllers
Authors: Starke, Sebastian; Starke, Paul; He, Nicky; Komura, Taku; Ye, Yuting
Keywords: character animation; character control; character interactions; deep learning; human motion; neural networks
Issue Date: 19-Jul-2024
Publisher: Association for Computing Machinery (ACM)
Citation: ACM Transactions on Graphics, 2024, v. 43, n. 4
Abstract: Translating motions from a real user onto a virtual embodied avatar is a key challenge for character animation in the metaverse. In this work, we present a novel generative framework that enables mapping from a set of sparse sensor signals to a full body avatar motion in real-time while faithfully preserving the motion context of the user. In contrast to existing techniques that require training a motion prior and its mapping from control to motion separately, our framework is able to learn the motion manifold as well as how to sample from it at the same time in an end-to-end manner. To achieve that, we introduce a technique called codebook matching which matches the probability distribution between two categorical codebooks for the inputs and outputs for synthesizing the character motions. We demonstrate this technique can successfully handle ambiguity in motion generation and produce high quality character controllers from unstructured motion capture data. Our method is especially useful for interactive applications like virtual reality or video games where high accuracy and responsiveness are needed.
Persistent Identifier: http://hdl.handle.net/10722/362417
ISSN: 0730-0301
2023 Impact Factor: 7.8
2023 SCImago Journal Rankings: 7.766
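
The codebook-matching idea summarized in the abstract above can be pictured with a small training sketch: one encoder turns the sparse sensor signals into a categorical distribution over a shared codebook, a second encoder does the same for the full-body motion, a decoder reconstructs motion from the probability-weighted codebook vectors, and a matching loss pulls the two distributions together so that the motion manifold and the way it is sampled are learned at the same time. The Python/PyTorch code below is a minimal hypothetical sketch under these assumptions; every module name, layer size, and loss term is an illustrative guess, not the authors' implementation.

# Hypothetical sketch of categorical codebook matching (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoricalCodebookMatcher(nn.Module):
    def __init__(self, sensor_dim, motion_dim, codebook_size=128, code_dim=64):
        super().__init__()
        # Shared codebook of learnable latent vectors.
        self.codebook = nn.Parameter(torch.randn(codebook_size, code_dim))
        # Encoder for sparse sensor signals (e.g., headset/controller features).
        self.sensor_enc = nn.Sequential(
            nn.Linear(sensor_dim, 256), nn.ELU(), nn.Linear(256, codebook_size))
        # Encoder for the full-body motion target (used at training time only).
        self.motion_enc = nn.Sequential(
            nn.Linear(motion_dim, 256), nn.ELU(), nn.Linear(256, codebook_size))
        # Decoder maps a codebook mixture back to a full-body motion vector.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ELU(), nn.Linear(256, motion_dim))

    def decode(self, logits):
        # Soft categorical selection: probability-weighted sum of codebook vectors.
        probs = F.softmax(logits, dim=-1)
        latent = probs @ self.codebook
        return self.decoder(latent), probs

    def forward(self, sensors, motion):
        sensor_logits = self.sensor_enc(sensors)
        motion_logits = self.motion_enc(motion)
        recon, motion_probs = self.decode(motion_logits)   # motion branch
        pred, sensor_probs = self.decode(sensor_logits)    # control branch
        # Reconstruction terms plus a KL term that matches the categorical
        # distribution of the control branch to that of the motion branch.
        matching = F.kl_div(sensor_probs.clamp_min(1e-8).log(), motion_probs,
                            reduction="batchmean")
        return F.mse_loss(recon, motion) + F.mse_loss(pred, motion) + matching

# Illustrative usage with random data and made-up dimensionalities:
model = CategoricalCodebookMatcher(sensor_dim=54, motion_dim=264)
loss = model(torch.randn(8, 54), torch.randn(8, 264))
loss.backward()

At inference time only the control branch would be needed: decoding the sensor encoder's distribution yields the predicted full-body motion, which is consistent with the paper's goal of real-time control from sparse signals.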

 

DC Field / Value
dc.contributor.author: Starke, Sebastian
dc.contributor.author: Starke, Paul
dc.contributor.author: He, Nicky
dc.contributor.author: Komura, Taku
dc.contributor.author: Ye, Yuting
dc.date.accessioned: 2025-09-24T00:51:23Z
dc.date.available: 2025-09-24T00:51:23Z
dc.date.issued: 2024-07-19
dc.identifier.citation: ACM Transactions on Graphics, 2024, v. 43, n. 4
dc.identifier.issn: 0730-0301
dc.identifier.uri: http://hdl.handle.net/10722/362417
dc.description.abstract: Translating motions from a real user onto a virtual embodied avatar is a key challenge for character animation in the metaverse. In this work, we present a novel generative framework that enables mapping from a set of sparse sensor signals to a full body avatar motion in real-time while faithfully preserving the motion context of the user. In contrast to existing techniques that require training a motion prior and its mapping from control to motion separately, our framework is able to learn the motion manifold as well as how to sample from it at the same time in an end-to-end manner. To achieve that, we introduce a technique called codebook matching which matches the probability distribution between two categorical codebooks for the inputs and outputs for synthesizing the character motions. We demonstrate this technique can successfully handle ambiguity in motion generation and produce high quality character controllers from unstructured motion capture data. Our method is especially useful for interactive applications like virtual reality or video games where high accuracy and responsiveness are needed.
dc.language: eng
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.ispartof: ACM Transactions on Graphics
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: character animation
dc.subject: character control
dc.subject: character interactions
dc.subject: deep learning
dc.subject: human motion
dc.subject: neural networks
dc.title: Categorical Codebook Matching for Embodied Character Controllers
dc.type: Article
dc.identifier.doi: 10.1145/3658209
dc.identifier.scopus: eid_2-s2.0-85199366679
dc.identifier.volume: 43
dc.identifier.issue: 4
dc.identifier.eissn: 1557-7368
dc.identifier.issnl: 0730-0301
