Article: Living Object Grasping Using Two-Stage Graph Reinforcement Learning

Title: Living Object Grasping Using Two-Stage Graph Reinforcement Learning
Authors: Hu, Z; Zheng, Y; Pan, J
Keywords: Deep learning in grasping and manipulation; dexterous manipulation; grasping; in-hand manipulation; reinforcement learning
Issue Date: 2021
Publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=PER481-ELE
Citation: IEEE Robotics and Automation Letters, 2021, v. 6, n. 2, p. 1950-1957
Abstract: Living objects are hard to grasp because they can actively dodge and struggle, by writhing or deforming, while or even before being contacted, and modeling or predicting their responses to grasping is extremely difficult. This letter presents an algorithm based on reinforcement learning (RL) to tackle this challenging problem. Given the complexity of living object grasping, we divide the whole task into a pre-grasp stage and an in-hand stage and let the algorithm switch between them automatically. The pre-grasp stage aims to find a good pose for a robot hand approaching a living object to perform a grasp. Dense reward functions are proposed to facilitate learning correct hand actions from the poses of both the hand and the object. Since an object held in the hand may struggle to escape, the robot hand needs to adjust its configuration and respond correctly to the object's movement. Hence, the goal of the in-hand stage is to determine an appropriate adjustment of the finger configuration so that the robot hand keeps holding the object. At this stage, we treat the robot hand as a graph and use a graph convolutional network (GCN) to determine the hand action. We test our algorithm in both simulated and real experiments, which show its good performance on living object grasping. More results are available on our website: https://sites.google.com/view/graph-rl.
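The abstract sketches a two-stage structure: a pre-grasp policy that learns, from dense pose-based rewards, how to approach the object, and an in-hand stage that models the robot hand as a graph and uses a GCN to choose per-joint finger adjustments. As a rough illustration only, and not the authors' implementation, the minimal PyTorch sketch below shows how such a per-joint GCN policy and an automatic stage switch could be wired together; the toy hand graph, observation layout, `env` interface, and the contact-based switch condition are all hypothetical placeholders.

```python
# A minimal sketch, NOT the paper's released code: a GCN policy that maps
# per-joint observations of a hand (modeled as a graph) to per-joint
# configuration adjustments, plus a pre-grasp -> in-hand stage switch.
import torch
import torch.nn as nn


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric GCN normalization D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    """One graph convolution: aggregate neighbor features, then a linear map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: (num_joints, in_dim); adj_norm: (num_joints, num_joints)
        return torch.relu(self.linear(adj_norm @ x))


class InHandGCNPolicy(nn.Module):
    """Outputs one joint-angle adjustment per graph node (hand joint)."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.g1 = GCNLayer(obs_dim, hidden)
        self.g2 = GCNLayer(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, joint_obs: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = self.g2(self.g1(joint_obs, adj_norm), adj_norm)
        return self.head(h).squeeze(-1)  # (num_joints,) angle deltas


def grasp_episode(env, pregrasp_policy, inhand_policy, adj_norm):
    """Hypothetical control loop: run the pre-grasp policy until contact is
    established, then hand control to the GCN in-hand policy. The env API
    and the switch criterion here are assumptions, not the paper's."""
    obs, done, in_hand = env.reset(), False, False
    while not done:
        if not in_hand:
            action = pregrasp_policy(obs["hand_pose"], obs["object_pose"])
            in_hand = obs["contact"]  # assumed switch condition
        else:
            action = inhand_policy(obs["joint_obs"], adj_norm)
        obs, reward, done = env.step(action)


# Toy hand graph: a palm node (0) with three 2-joint finger chains.
edges = [(0, 1), (1, 2), (0, 3), (3, 4), (0, 5), (5, 6)]
adj = torch.zeros(7, 7)
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
adj_norm = normalize_adj(adj)

policy = InHandGCNPolicy(obs_dim=8)
joint_obs = torch.randn(7, 8)         # e.g. joint angle, velocity, contact...
deltas = policy(joint_obs, adj_norm)  # per-joint adjustments
print(deltas.shape)                   # torch.Size([7])
```

One general attraction of the graph view, consistent with the abstract's description, is that a single set of convolution weights is shared across all joints, so each finger's adjustment is computed from its local neighborhood rather than from one fixed flat state vector.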
Persistent Identifier: http://hdl.handle.net/10722/300572
ISSN: 2377-3766
2021 Impact Factor: 4.321
2020 SCImago Journal Rankings: 1.123
ISI Accession Number ID: WOS:000629028400033

DC Field: Value
dc.contributor.author: Hu, Z
dc.contributor.author: Zheng, Y
dc.contributor.author: Pan, J
dc.date.accessioned: 2021-06-18T14:53:55Z
dc.date.available: 2021-06-18T14:53:55Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Robotics and Automation Letters, 2021, v. 6, n. 2, p. 1950-1957
dc.identifier.issn: 2377-3766
dc.identifier.uri: http://hdl.handle.net/10722/300572
dc.description.abstract: Living objects are hard to grasp because they can actively dodge and struggle, by writhing or deforming, while or even before being contacted, and modeling or predicting their responses to grasping is extremely difficult. This letter presents an algorithm based on reinforcement learning (RL) to tackle this challenging problem. Given the complexity of living object grasping, we divide the whole task into a pre-grasp stage and an in-hand stage and let the algorithm switch between them automatically. The pre-grasp stage aims to find a good pose for a robot hand approaching a living object to perform a grasp. Dense reward functions are proposed to facilitate learning correct hand actions from the poses of both the hand and the object. Since an object held in the hand may struggle to escape, the robot hand needs to adjust its configuration and respond correctly to the object's movement. Hence, the goal of the in-hand stage is to determine an appropriate adjustment of the finger configuration so that the robot hand keeps holding the object. At this stage, we treat the robot hand as a graph and use a graph convolutional network (GCN) to determine the hand action. We test our algorithm in both simulated and real experiments, which show its good performance on living object grasping. More results are available on our website: https://sites.google.com/view/graph-rl.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=PER481-ELE
dc.relation.ispartof: IEEE Robotics and Automation Letters
dc.rights: IEEE Robotics and Automation Letters. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Deep learning in grasping and manipulation
dc.subject: dexterous manipulation
dc.subject: grasping
dc.subject: in-hand manipulation
dc.subject: reinforcement learning
dc.title: Living Object Grasping Using Two-Stage Graph Reinforcement Learning
dc.type: Article
dc.identifier.email: Pan, J: jpan@cs.hku.hk
dc.identifier.authority: Pan, J=rp01984
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/LRA.2021.3060636
dc.identifier.scopus: eid_2-s2.0-85101736179
dc.identifier.hkuros: 323044
dc.identifier.volume: 6
dc.identifier.issue: 2
dc.identifier.spage: 1950
dc.identifier.epage: 1957
dc.identifier.isi: WOS:000629028400033
dc.publisher.place: United States
