Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1017/S0263574722001230
- Scopus: eid_2-s2.0-85148014080
- WOS: WOS:000850705000001
Article: Zero-shot sim-to-real transfer of reinforcement learning framework for robotics manipulation with demonstration and force feedback
Title | Zero-shot sim-to-real transfer of reinforcement learning framework for robotics manipulation with demonstration and force feedback |
---|---|
Authors | Chen, YP; Zeng, C; Wang, ZP; Lu, P; Yang, CG |
Keywords | digital twin; reinforcement learning; sim-to-real transfer |
Issue Date | 7-Sep-2022 |
Publisher | Cambridge University Press |
Citation | Robotica, 2022, v. 41, n. 3, p. 1015-1024 |
Abstract | In the field of robot reinforcement learning (RL), the reality gap has long restricted the robustness and generalization of algorithms. We propose Simulation Twin (SimTwin): a deep RL framework that transfers a model directly from simulation to reality without any real-world training. SimTwin consists of an RL module and an adaptive correction module. We train the policy with the soft actor-critic algorithm only in a simulator, using demonstrations and domain randomization. In the adaptive correction module, we design and train a neural network that imitates the human error-correction process using force feedback. We then combine the two modules through a digital twin to control real-world robots: the framework automatically corrects the simulator parameters by comparing the simulator with reality, and the trained policy network then generalizes to produce the correct action without additional training. We demonstrate the proposed method on a cabinet-opening task; the experiments show that our framework reduces the reality gap without any real-world training. |
Persistent Identifier | http://hdl.handle.net/10722/337637 |
ISSN | 0263-5747 (2023 Impact Factor: 1.9; 2023 SCImago Journal Rankings: 0.579) |
ISI Accession Number ID | WOS:000850705000001 |
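
The abstract describes a deployment loop in which a policy trained only in simulation drives the real robot while a force-feedback correction model keeps the digital twin's parameters aligned with reality. The Python sketch below illustrates how such a loop could be wired together; it is not the authors' implementation, and every class, function, and parameter name (SimulatorTwin, RealRobot, policy, correction, control_step, the gain value) is a hypothetical placeholder.

```python
# Minimal sketch (not the authors' code) of a SimTwin-style control loop:
# a sim-trained policy acts on the twin's state, the real robot returns a
# measured wrench, and the mismatch is used to correct the twin's parameters.
import numpy as np

class SimulatorTwin:
    """Digital twin whose uncertain parameters are corrected online."""
    def __init__(self, params):
        self.params = dict(params)        # e.g. handle friction, cabinet pose

    def observe(self):
        # Simulated state under the current parameter estimate (placeholder).
        return np.zeros(7)

    def expected_force(self):
        # Wrench the twin predicts for the current step (placeholder).
        return np.zeros(6)

    def update(self, residual, gain=0.1):
        # Nudge the uncertain parameters toward reality using the discrepancy.
        for k in self.params:
            self.params[k] -= gain * residual

class RealRobot:
    """Stub for the physical arm; execute() returns a measured wrench."""
    def execute(self, action):
        return np.random.normal(scale=0.5, size=6)

def policy(state):
    # Stand-in for the SAC policy trained in simulation with demonstrations
    # and domain randomization; here just a damped joint command.
    return -0.05 * state

def correction(force_real, force_sim):
    # Stand-in for the learned error-correction network driven by force feedback.
    return float(np.mean(force_real - force_sim))

def control_step(twin, robot):
    action = policy(twin.observe())          # act from the twin's state
    force_real = robot.execute(action)       # force feedback from reality
    residual = correction(force_real, twin.expected_force())
    twin.update(residual)                    # correct the simulator parameters
    return action

if __name__ == "__main__":
    twin, robot = SimulatorTwin({"handle_friction": 0.8}), RealRobot()
    for _ in range(10):
        control_step(twin, robot)
    print(twin.params)                       # parameters drift toward reality
```

In this reading, no gradient updates happen on the real robot: only the twin's physical parameters are adjusted online, which matches the paper's claim of zero-shot transfer without real-world training.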
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, YP | - |
dc.contributor.author | Zeng, C | - |
dc.contributor.author | Wang, ZP | - |
dc.contributor.author | Lu, P | - |
dc.contributor.author | Yang, CG | - |
dc.date.accessioned | 2024-03-11T10:22:42Z | - |
dc.date.available | 2024-03-11T10:22:42Z | - |
dc.date.issued | 2022-09-07 | - |
dc.identifier.citation | Robotica, 2022, v. 41, n. 3, p. 1015-1024 | - |
dc.identifier.issn | 0263-5747 | - |
dc.identifier.uri | http://hdl.handle.net/10722/337637 | - |
dc.description.abstract | In the field of robot reinforcement learning (RL), the reality gap has long restricted the robustness and generalization of algorithms. We propose Simulation Twin (SimTwin): a deep RL framework that transfers a model directly from simulation to reality without any real-world training. SimTwin consists of an RL module and an adaptive correction module. We train the policy with the soft actor-critic algorithm only in a simulator, using demonstrations and domain randomization. In the adaptive correction module, we design and train a neural network that imitates the human error-correction process using force feedback. We then combine the two modules through a digital twin to control real-world robots: the framework automatically corrects the simulator parameters by comparing the simulator with reality, and the trained policy network then generalizes to produce the correct action without additional training. We demonstrate the proposed method on a cabinet-opening task; the experiments show that our framework reduces the reality gap without any real-world training. | -
dc.language | eng | - |
dc.publisher | Cambridge University Press | - |
dc.relation.ispartof | Robotica | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | digital twin | - |
dc.subject | reinforcement learning | - |
dc.subject | sim-to-real transfer | - |
dc.title | Zero-shot sim-to-real transfer of reinforcement learning framework for robotics manipulation with demonstration and force feedback | - |
dc.type | Article | - |
dc.identifier.doi | 10.1017/S0263574722001230 | - |
dc.identifier.scopus | eid_2-s2.0-85148014080 | - |
dc.identifier.volume | 41 | - |
dc.identifier.issue | 3 | - |
dc.identifier.spage | 1015 | - |
dc.identifier.epage | 1024 | - |
dc.identifier.eissn | 1469-8668 | - |
dc.identifier.isi | WOS:000850705000001 | - |
dc.publisher.place | NEW YORK | - |
dc.identifier.issnl | 0263-5747 | - |