Conference Paper: Multi-task Hierarchical Adversarial Inverse Reinforcement Learning

Title: Multi-task Hierarchical Adversarial Inverse Reinforcement Learning
Authors: Chen, Jiayu; Tamboli, Dipesh; Lan, Tian; Aggarwal, Vaneet
Issue Date: 2023
Citation: Proceedings of Machine Learning Research, 2023, v. 202, p. 4485-4513
Abstract: Multi-task Imitation Learning (MIL) aims to train a policy capable of performing a distribution of tasks based on multi-task expert demonstrations, which is essential for general-purpose robots. Existing MIL algorithms suffer from low data efficiency and poor performance on complex long-horizon tasks. We develop Multi-task Hierarchical Adversarial Inverse Reinforcement Learning (MH-AIRL) to learn hierarchically-structured multi-task policies, which is more beneficial for compositional tasks with long horizons and has higher expert data efficiency through identifying and transferring reusable basic skills across tasks. To realize this, MH-AIRL effectively synthesizes context-based multi-task learning, AIRL (an IL approach), and hierarchical policy learning. Further, MH-AIRL can be applied to demonstrations without task or skill annotations (i.e., state-action pairs only), which are more accessible in practice. Theoretical justifications are provided for each module of MH-AIRL, and evaluations on challenging multi-task settings demonstrate superior performance and transferability of the multi-task policies learned with MH-AIRL as compared to SOTA MIL baselines.
Persistent Identifier: http://hdl.handle.net/10722/361760
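
For orientation only: the abstract names three ingredients that MH-AIRL synthesizes (context-based multi-task learning, AIRL, and hierarchical policy learning). The following is a minimal PyTorch sketch of how such components can be wired together. It is not the authors' implementation; the network sizes, the names `context_dim` and `option_dim`, and the Gaussian-mean action head are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's code): an AIRL-style discriminator
# conditioned on a task context, plus a two-level (option-based) policy.
import torch
import torch.nn as nn


class AIRLDiscriminator(nn.Module):
    """AIRL-style discriminator D(s, a, c) = sigmoid(f(s, a, c) - log pi(a | s, c)).
    Conditioning f on a task context c is an assumption for the multi-task case."""

    def __init__(self, state_dim, action_dim, context_dim, hidden=64):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(state_dim + action_dim + context_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, context, log_pi):
        # f plays the role of a learned reward/advantage term; the discriminator
        # compares it against the policy's log-probability of the action.
        f_val = self.f(torch.cat([state, action, context], dim=-1)).squeeze(-1)
        return torch.sigmoid(f_val - log_pi)


class HierarchicalPolicy(nn.Module):
    """Two-level policy: a high-level head selects a skill/option z from
    (state, context, previous option); a low-level head outputs an action
    mean from (state, context, current option)."""

    def __init__(self, state_dim, action_dim, context_dim, option_dim, hidden=64):
        super().__init__()
        self.high = nn.Sequential(
            nn.Linear(state_dim + context_dim + option_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, option_dim),
        )
        self.low = nn.Sequential(
            nn.Linear(state_dim + context_dim + option_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, context, prev_option_onehot):
        # High level: sample a discrete option given state, context, previous option.
        option_logits = self.high(torch.cat([state, context, prev_option_onehot], dim=-1))
        option = torch.distributions.Categorical(logits=option_logits).sample()
        option_onehot = nn.functional.one_hot(option, option_logits.shape[-1]).float()
        # Low level: produce an action (here just its mean) conditioned on the option.
        action_mean = self.low(torch.cat([state, context, option_onehot], dim=-1))
        return option, action_mean
```

Note that, per the abstract, MH-AIRL works from demonstrations without task or skill annotations, so the option assignments must be inferred during training; this sketch omits that inference step.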

 

DC Field: Value
dc.contributor.author: Chen, Jiayu
dc.contributor.author: Tamboli, Dipesh
dc.contributor.author: Lan, Tian
dc.contributor.author: Aggarwal, Vaneet
dc.date.accessioned: 2025-09-16T04:19:45Z
dc.date.available: 2025-09-16T04:19:45Z
dc.date.issued: 2023
dc.identifier.citation: Proceedings of Machine Learning Research, 2023, v. 202, p. 4485-4513
dc.identifier.uri: http://hdl.handle.net/10722/361760
dc.description.abstract: Multi-task Imitation Learning (MIL) aims to train a policy capable of performing a distribution of tasks based on multi-task expert demonstrations, which is essential for general-purpose robots. Existing MIL algorithms suffer from low data efficiency and poor performance on complex long-horizon tasks. We develop Multi-task Hierarchical Adversarial Inverse Reinforcement Learning (MH-AIRL) to learn hierarchically-structured multi-task policies, which is more beneficial for compositional tasks with long horizons and has higher expert data efficiency through identifying and transferring reusable basic skills across tasks. To realize this, MH-AIRL effectively synthesizes context-based multi-task learning, AIRL (an IL approach), and hierarchical policy learning. Further, MH-AIRL can be applied to demonstrations without task or skill annotations (i.e., state-action pairs only), which are more accessible in practice. Theoretical justifications are provided for each module of MH-AIRL, and evaluations on challenging multi-task settings demonstrate superior performance and transferability of the multi-task policies learned with MH-AIRL as compared to SOTA MIL baselines.
dc.language: eng
dc.relation.ispartof: Proceedings of Machine Learning Research
dc.title: Multi-task Hierarchical Adversarial Inverse Reinforcement Learning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85174390222
dc.identifier.volume: 202
dc.identifier.spage: 4485
dc.identifier.epage: 4513
dc.identifier.eissn: 2640-3498
