Conference Paper: Unified Human-Scene Interaction via Prompted Chain-of-Contacts

Title: Unified Human-Scene Interaction via Prompted Chain-of-Contacts
Authors: Xiao, Zeqi; Wang, Tai; Wang, Jingbo; Cao, Jinkun; Zhang, Wenwei; Dai, Bo; Lin, Dahua; Pang, Jiangmiao
Issue Date: 2024
Citation: 12th International Conference on Learning Representations, ICLR 2024, 2024
Abstract: Human-Scene Interaction (HSI) is a vital component of fields such as embodied AI and virtual reality. Despite advances in motion quality and physical plausibility, two factors pivotal to the practical application of HSI, versatile interaction control and user-friendly interfaces, remain underexplored. This paper presents a unified HSI framework, named UniHSI, that supports unified control of diverse interactions through language commands. The framework defines interaction as a “Chain of Contacts (CoC)”: a sequence of steps, each involving human joint-object part pairs. This concept is inspired by the strong correlation between interaction types and their corresponding contact regions. Building on this definition, UniHSI comprises a Large Language Model (LLM) Planner, which translates language prompts into task plans in the form of CoC, and a Unified Controller, which turns CoC into uniform task execution. To support training and evaluation, we collect a new dataset, named ScenePlan, that encompasses thousands of task plans generated by LLMs based on diverse scenarios. Comprehensive experiments demonstrate the effectiveness of our framework in versatile task execution and its generalizability to real scanned scenes.
Persistent Identifier: http://hdl.handle.net/10722/352500

 

DC Field: Value
dc.contributor.author: Xiao, Zeqi
dc.contributor.author: Wang, Tai
dc.contributor.author: Wang, Jingbo
dc.contributor.author: Cao, Jinkun
dc.contributor.author: Zhang, Wenwei
dc.contributor.author: Dai, Bo
dc.contributor.author: Lin, Dahua
dc.contributor.author: Pang, Jiangmiao
dc.date.accessioned: 2024-12-16T03:59:29Z
dc.date.available: 2024-12-16T03:59:29Z
dc.date.issued: 2024
dc.identifier.citation: 12th International Conference on Learning Representations, ICLR 2024, 2024
dc.identifier.uri: http://hdl.handle.net/10722/352500
dc.language: eng
dc.relation.ispartof: 12th International Conference on Learning Representations, ICLR 2024
dc.title: UNIFIED HUMAN-SCENE INTERACTION VIA PROMPTED CHAIN-OF-CONTACTS
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85189112121
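Records like the one above are exposed in Dublin Core (`oai_dc`) XML through repository OAI-PMH interfaces. A minimal sketch of parsing such a record with the standard library, using an inline sample built from the field values above (the sample XML is illustrative; a real OAI-PMH `GetRecord` response wraps the `oai_dc:dc` element in additional envelope elements):

```python
import xml.etree.ElementTree as ET

# Illustrative oai_dc fragment populated from the Dublin Core fields of this record.
SAMPLE = """<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                       xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>UNIFIED HUMAN-SCENE INTERACTION VIA PROMPTED CHAIN-OF-CONTACTS</dc:title>
  <dc:creator>Xiao, Zeqi</dc:creator>
  <dc:creator>Wang, Tai</dc:creator>
  <dc:date>2024</dc:date>
  <dc:identifier>http://hdl.handle.net/10722/352500</dc:identifier>
</oai_dc:dc>"""

DC_NS = "{http://purl.org/dc/elements/1.1/}"

def parse_dc(xml_text):
    """Collect Dublin Core elements into a field -> list-of-values dict.

    Fields are repeatable in Dublin Core (e.g. one dc:creator per author),
    so every field maps to a list.
    """
    root = ET.fromstring(xml_text)
    record = {}
    for child in root:
        field = child.tag.replace(DC_NS, "dc.")  # "{...}title" -> "dc.title"
        record.setdefault(field, []).append(child.text)
    return record

record = parse_dc(SAMPLE)
print(record["dc.title"][0])
print(record["dc.creator"])
```

Because `dc:creator` repeats once per author, the parser accumulates values into lists rather than overwriting, which matches how the repeated `dc.contributor.author` rows appear in the table above.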
