Links for fulltext (may require subscription):
- Publisher (DOI): 10.1109/HUMANOIDS.2018.8625045
- Scopus: eid_2-s2.0-85062266617
- Web of Science: WOS:000458689700111
Conference Paper: Learning Whole-Body Motor Skills for Humanoids
| Field | Value |
|---|---|
| Title | Learning Whole-Body Motor Skills for Humanoids |
| Authors | Yang, Chuanyu; Yuan, Kai; Merkt, Wolfgang; Komura, Taku; Vijayakumar, Sethu; Li, Zhibin |
| Issue Date | 2018 |
| Citation | IEEE-RAS International Conference on Humanoid Robots, 2018, p. 776-783 |
| Abstract | © 2018 IEEE. This paper presents a hierarchical framework for Deep Reinforcement Learning that acquires motor skills for a variety of push-recovery and balancing behaviors, i.e., ankle, hip, foot-tilting, and stepping strategies. The policy is trained in a physics simulator with realistic settings of the robot model and low-level impedance control, which makes it easy to transfer the learned skills to real robots. The advantage over traditional methods is the integration of the high-level planner and feedback control into one single coherent policy network, which is generic for learning versatile balancing and recovery motions against unknown perturbations at arbitrary locations (e.g., legs, torso). Furthermore, the proposed framework allows the policy to be learned quickly by many state-of-the-art learning algorithms. Compared to studies of preprogrammed, special-purpose controllers in the literature, the self-learned skills are comparable in terms of disturbance rejection but offer the additional advantage of producing a wide range of adaptive, versatile, and robust behaviors. |
| Persistent Identifier | http://hdl.handle.net/10722/288933 |
| ISSN | 2164-0572 (2020 SCImago Journal Rankings: 0.323) |
| ISI Accession Number ID | WOS:000458689700111 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, Chuanyu | - |
dc.contributor.author | Yuan, Kai | - |
dc.contributor.author | Merkt, Wolfgang | - |
dc.contributor.author | Komura, Taku | - |
dc.contributor.author | Vijayakumar, Sethu | - |
dc.contributor.author | Li, Zhibin | - |
dc.date.accessioned | 2020-10-12T08:06:15Z | - |
dc.date.available | 2020-10-12T08:06:15Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | IEEE-RAS International Conference on Humanoid Robots, 2018, p. 776-783 | - |
dc.identifier.issn | 2164-0572 | - |
dc.identifier.uri | http://hdl.handle.net/10722/288933 | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE-RAS International Conference on Humanoid Robots | - |
dc.title | Learning Whole-Body Motor Skills for Humanoids | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/HUMANOIDS.2018.8625045 | - |
dc.identifier.scopus | eid_2-s2.0-85062266617 | - |
dc.identifier.spage | 776 | - |
dc.identifier.epage | 783 | - |
dc.identifier.eissn | 2164-0580 | - |
dc.identifier.isi | WOS:000458689700111 | - |
dc.identifier.issnl | 2164-0572 | - |