Conference Paper: Visual-tactile Sensing for Real-time Liquid Volume Estimation in Grasping

Title: Visual-tactile Sensing for Real-time Liquid Volume Estimation in Grasping
Authors: Zhu, Fan; Jia, Ruixing; Yang, Lei; Yan, Youcan; Wang, Zheng; Pan, Jia; Wang, Wenping
Issue Date: 23-Oct-2022
Abstract

We propose a deep visuo-tactile model for real-time, proprioceptive estimation of the volume of liquid inside a deformable container. We fuse two sensory modalities, the raw visual input from an RGB camera and the tactile cues from our custom tactile sensor, without any extra sensor calibration. The robotic system is controlled and adjusted in real time based on the estimation model. The main contributions and novelties of our work are as follows:

1) We explore a proprioceptive approach to liquid volume estimation by developing an end-to-end predictive model with multi-modal convolutional networks, which achieves high precision with an error of about 2 ml in experimental validation.
2) We propose a multi-task learning architecture that jointly accounts for the losses of both the classification and regression tasks, and we comparatively evaluate the performance of each variant on the collected data and on an actual robotic platform.
3) We use the proprioceptive robotic system to accurately serve and control a requested volume of liquid as it flows continuously into a deformable container in real time.
4) We adaptively adjust the grasping plan according to the real-time liquid volume prediction, achieving more stable grasping and manipulation.
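
The abstract describes a two-branch convolutional network that fuses RGB and tactile inputs and trains classification and regression heads under a combined loss. Below is a minimal PyTorch-style sketch of that idea; the branch architectures, tactile input shape, number of volume classes, and loss weighting are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a visuo-tactile fusion network with multi-task
# (classification + regression) heads, loosely following the abstract.
# All layer sizes, the tactile input shape, and the loss weighting are
# illustrative assumptions, not details from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisuoTactileNet(nn.Module):
    def __init__(self, num_volume_classes: int = 10):
        super().__init__()
        # Visual branch: small CNN over raw RGB frames.
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Tactile branch: CNN over a 2-D tactile map (assumed single-channel).
        self.tactile = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse the two feature vectors by simple concatenation.
        self.fusion = nn.Sequential(nn.Linear(32 + 16, 64), nn.ReLU())
        # Multi-task heads: coarse volume class and continuous volume (ml).
        self.cls_head = nn.Linear(64, num_volume_classes)
        self.reg_head = nn.Linear(64, 1)

    def forward(self, rgb, tactile):
        z = torch.cat([self.visual(rgb), self.tactile(tactile)], dim=1)
        z = self.fusion(z)
        return self.cls_head(z), self.reg_head(z)

def multitask_loss(cls_logits, vol_pred, cls_target, vol_target, alpha=0.5):
    """Combined objective: cross-entropy on the volume class plus an L1
    term on the continuous estimate; alpha is an assumed trade-off weight."""
    ce = F.cross_entropy(cls_logits, cls_target)
    l1 = F.l1_loss(vol_pred.squeeze(-1), vol_target)
    return alpha * ce + (1.0 - alpha) * l1

# Smoke test with random inputs (batch of 4; 64x64 RGB, 16x16 tactile map).
model = VisuoTactileNet()
logits, volume = model(torch.randn(4, 3, 64, 64), torch.randn(4, 1, 16, 16))
loss = multitask_loss(logits, volume, torch.randint(0, 10, (4,)), torch.rand(4) * 100)
```

The joint loss reflects the abstract's point 2: the classification head supplies a coarse, noise-tolerant signal while the regression head provides the continuous estimate used for real-time serving; how the paper actually weights or structures these terms is not specified here.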


Persistent Identifier: http://hdl.handle.net/10722/333848
ISI Accession Number ID: WOS:000909405303133

DC Field                Value
dc.contributor.author   Zhu, Fan
dc.contributor.author   Jia, Ruixing
dc.contributor.author   Yang, Lei
dc.contributor.author   Yan, Youcan
dc.contributor.author   Wang, Zheng
dc.contributor.author   Pan, Jia
dc.contributor.author   Wang, Wenping
dc.date.accessioned     2023-10-06T08:39:35Z
dc.date.available       2023-10-06T08:39:35Z
dc.date.issued          2022-10-23
dc.identifier.uri       http://hdl.handle.net/10722/333848
dc.language             eng
dc.relation.ispartof    2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) (23/10/2022-27/10/2022, Kyoto)
dc.title                Visual-tactile Sensing for Real-time Liquid Volume Estimation in Grasping
dc.type                 Conference_Paper
dc.identifier.doi       10.1109/IROS47612.2022.9981153
dc.identifier.scopus    eid_2-s2.0-85146362738
dc.identifier.volume    2022-October
dc.identifier.spage     12542
dc.identifier.epage     12549
dc.identifier.isi       WOS:000909405303133
