Links for fulltext (may require subscription):
- Publisher Website: 10.1109/HUMANOIDS.2017.8246900
- Scopus: eid_2-s2.0-85044439277
- WOS: WOS:000427350100053
Conference Paper: Emergence of human-comparable balancing behaviours by deep reinforcement learning
Title | Emergence of human-comparable balancing behaviours by deep reinforcement learning |
---|---|
Authors | Yang, Chuanyu; Komura, Taku; Li, Zhibin |
Issue Date | 2017 |
Citation | IEEE-RAS International Conference on Humanoid Robots, 2017, p. 372-377 |
Abstract | © 2017 IEEE. This paper presents a hierarchical framework based on deep reinforcement learning that naturally acquires control policies capable of performing balancing behaviours, such as ankle push-offs, for humanoid robots without explicit human design of controllers. Only the reward for training the neural network is explicitly formulated from physical principles and quantities, and is hence explainable. The successful emergence of human-comparable behaviours through deep reinforcement learning demonstrates the feasibility of using an AI-based approach for humanoid motion control in a unified framework. Moreover, the balance strategies learned by reinforcement learning provide a larger range of disturbance rejection than zero-moment-point-based methods, suggesting a research direction of using learning-based control to explore optimal performance. |
Persistent Identifier | http://hdl.handle.net/10722/288921 |
ISSN | 2164-0572 (2020 SCImago Journal Rankings: 0.323) |
ISI Accession Number ID | WOS:000427350100053 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, Chuanyu | - |
dc.contributor.author | Komura, Taku | - |
dc.contributor.author | Li, Zhibin | - |
dc.date.accessioned | 2020-10-12T08:06:13Z | - |
dc.date.available | 2020-10-12T08:06:13Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | IEEE-RAS International Conference on Humanoid Robots, 2017, p. 372-377 | - |
dc.identifier.issn | 2164-0572 | - |
dc.identifier.uri | http://hdl.handle.net/10722/288921 | - |
dc.description.abstract | © 2017 IEEE. This paper presents a hierarchical framework based on deep reinforcement learning that naturally acquires control policies capable of performing balancing behaviours, such as ankle push-offs, for humanoid robots without explicit human design of controllers. Only the reward for training the neural network is explicitly formulated from physical principles and quantities, and is hence explainable. The successful emergence of human-comparable behaviours through deep reinforcement learning demonstrates the feasibility of using an AI-based approach for humanoid motion control in a unified framework. Moreover, the balance strategies learned by reinforcement learning provide a larger range of disturbance rejection than zero-moment-point-based methods, suggesting a research direction of using learning-based control to explore optimal performance. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE-RAS International Conference on Humanoid Robots | - |
dc.title | Emergence of human-comparable balancing behaviours by deep reinforcement learning | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/HUMANOIDS.2017.8246900 | - |
dc.identifier.scopus | eid_2-s2.0-85044439277 | - |
dc.identifier.spage | 372 | - |
dc.identifier.epage | 377 | - |
dc.identifier.eissn | 2164-0580 | - |
dc.identifier.isi | WOS:000427350100053 | - |
dc.identifier.issnl | 2164-0572 | - |
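The abstract notes that only the reward is hand-formulated, from physical principles and quantities, so it remains explainable. The paper itself is behind a subscription, so the sketch below is not the authors' actual reward; it is a minimal illustrative example of the general idea, assuming hypothetical terms (centre-of-mass error, torso pitch, actuation effort) and made-up weights:

```python
import numpy as np

def balance_reward(com_pos, com_target, torso_pitch, joint_torques,
                   w_com=1.0, w_pitch=0.5, w_effort=1e-3):
    """Illustrative balance reward built only from physical quantities.

    Each term penalises deviation from a physically meaningful target:
    - horizontal centre-of-mass (CoM) distance from the support centre,
    - torso pitch away from upright,
    - actuation effort (sum of squared joint torques).
    All names and weights here are assumptions for illustration.
    """
    com_err = np.linalg.norm(np.asarray(com_pos) - np.asarray(com_target))
    effort = float(np.sum(np.square(joint_torques)))
    # Exponential shaping keeps each positive term bounded and smooth,
    # so the total reward is interpretable term by term.
    return (w_com * np.exp(-com_err ** 2)
            + w_pitch * np.exp(-torso_pitch ** 2)
            - w_effort * effort)
```

Because every term maps to a measurable physical quantity, a policy trained against such a reward can be audited by inspecting which term dominates, which is one reading of the "explainable" claim in the abstract.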