Postgraduate thesis: Towards generalizable embodied AI system

Title: Towards generalizable embodied AI system
Authors: Mu, Yao (穆尧)
Advisors: Luo, P; Wang, WP
Issue Date: 2025
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Mu, Y. [穆尧]. (2025). Towards generalizable embodied AI system. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: Embodied AI represents a crucial frontier in artificial intelligence research, aiming to create systems that can perceive, reason, and act within physical environments. Unlike traditional AI systems that operate purely in digital domains, embodied AI agents must navigate the complexities of real-world interactions, understanding spatial relationships, physical constraints, and multi-modal sensory inputs. In this dissertation, we study the problem of building generalizable embodied AI systems: developing effective embodied perception and cognition, efficient policy learning, and generalization in the real world. We tackle fundamental challenges in embodied AI system design under an integrated framework spanning vision-language pre-training, policy learning, and sim-to-real transfer. The dissertation is divided into three parts.

In Part I, we explore embodied perception and reasoning through three complementary approaches. In Chapter 2, we develop EmbodiedGPT, which decomposes complex instructions into executable atomic skills through vision-language pre-training with embodied chain-of-thought capabilities. In Chapter 3, we introduce RoboCodeX, a multimodal code-generation framework that translates semantic understanding into robotic control code. In Chapter 4, we present Emergent Communication for Embodied Control (EC²), which bridges visual demonstrations and symbolic language via emergent communication, establishing a more natural connection between perceptual experiences and symbolic representations.

In Part II, we focus on efficient and transferable policy learning approaches that enable robots to acquire skills from limited data while transferring knowledge between tasks. In Chapter 5, we introduce IDM (Imagining from Derived Memory), which improves sample efficiency and enhances policy robustness through imagination-based training with derived memory. Unlike previous approaches that rely solely on real experiences, IDM constructs a "memory prosthesis" that enriches the diversity of imagination without requiring additional environment interactions. In Chapter 6, we further develop CtrlFormer, which learns transferable state representations with a transformer-based architecture. By simultaneously learning visual features and policy representations across multiple tasks through an innovative attention mechanism, CtrlFormer enables effective knowledge transfer while preventing catastrophic forgetting.

In Part III, we advance sim-to-real transfer to bridge the gap between simulation and real-world deployment, encompassing both context learning for dynamics generalization and digital-twin frameworks for reliable deployment. In Chapter 7, we propose DOMINO (DecOmposed Mutual INformation Optimization), a novel framework that improves generalization to unseen environments through decomposed mutual information optimization. By learning disentangled context vectors that capture different aspects of environmental variation, DOMINO enables more effective adaptation across diverse scenarios. In Chapter 8, we further introduce RoboTwin, a comprehensive framework that advances sim-to-real transfer through generative digital twins and spatially aware code generation. Starting from 2D images, RoboTwin employs foundation models to generate diverse 3D assets, incorporates spatial annotations for precise manipulation, and leverages large language models for task decomposition and code generation.

Together, these works address the fundamentals of generalizable embodied AI systems from three complementary perspectives, forming an integrated framework for improved perception and cognition, efficient policy learning, and generalizable real-world deployment.
Degree: Doctor of Philosophy
Subject: Artificial intelligence
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/356573

 

dc.contributor.advisor: Luo, P
dc.contributor.advisor: Wang, WP
dc.contributor.author: Mu, Yao
dc.contributor.author: 穆尧
dc.date.accessioned: 2025-06-05T09:31:11Z
dc.date.available: 2025-06-05T09:31:11Z
dc.date.issued: 2025
dc.identifier.citation: Mu, Y. [穆尧]. (2025). Towards generalizable embodied AI system. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/356573
dc.description.abstract: (see Abstract above)
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Artificial intelligence
dc.title: Towards generalizable embodied AI system
dc.type: PG_Thesis
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Computer Science
dc.description.nature: published_or_final_version
dc.date.hkucongregation: 2025
dc.identifier.mmsid: 991044970874803414
