
Article: Intelligent robotic sonographer: Mutual information-based disentangled reward learning from few demonstrations

Title: Intelligent robotic sonographer: Mutual information-based disentangled reward learning from few demonstrations
Authors: Jiang, Zhongliang; Bi, Yuan; Zhou, Mingchuan; Hu, Ying; Burke, Michael; Navab, Nassir
Keywords: latent feature disentanglement; learning from demonstration; medical robotics; robotic ultrasound
Issue Date: 2024
Citation: International Journal of Robotics Research, 2024, v. 43, n. 7, p. 981-1002
Abstract: Ultrasound (US) imaging is widely used for biometric measurement and diagnosis of internal organs because it is real-time and radiation-free. However, due to inter-operator variation, the resulting images depend heavily on the sonographer's experience. This work proposes an intelligent robotic sonographer that autonomously “explores” target anatomies and navigates a US probe to standard planes by learning from experts. The underlying high-level physiological knowledge from experts is inferred by a neural reward function, trained with a ranked pairwise image comparison approach in a self-supervised fashion; this process can be seen as understanding the “language of sonography.” To generalize across inter-patient variation, mutual information is estimated by a network to explicitly disentangle the task-related and domain features in latent space. Robotic localization is carried out in a coarse-to-fine mode based on the reward predicted for B-mode images. To validate the effectiveness of the proposed reward inference network, representative experiments were performed on vascular phantoms (“line” target), two types of ex vivo animal organ phantoms (chicken heart and lamb kidney, representing “point” targets), and in vivo human carotids. To further validate the performance of the autonomous acquisition framework, physical robotic acquisitions were performed on three phantoms (vascular, chicken heart, and lamb kidney). The results demonstrate that the proposed framework works robustly on a variety of seen and unseen phantoms as well as on in vivo human carotid data. Code: https://github.com/yuan-12138/MI-GPSR. Video: https://youtu.be/u4ThAA9onE0.
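The “ranked pairwise image comparison” idea in the abstract can be illustrated with a Bradley-Terry-style preference loss: a reward model is fit so that, for each ranked pair, the preferred sample receives the higher reward. Below is a minimal, self-contained sketch using a linear reward over synthetic feature vectors; all names and data are illustrative, not the authors' MI-GPSR code.

```python
# Sketch: preference-based reward learning from ranked pairs
# (Bradley-Terry logistic loss). A linear reward on synthetic
# features stands in for a neural reward over B-mode images.
import numpy as np

rng = np.random.default_rng(0)

# Hidden "expert" reward defining the ranking (stand-in for a
# sonographer's preference over ultrasound images).
true_w = np.array([2.0, -1.0, 0.5])

# Synthetic stand-in for image feature vectors.
X = rng.normal(size=(200, 3))

# Build ranked pairs; x_hi is the preferred sample of each pair.
pairs = []
for _ in range(500):
    i, j = rng.integers(0, len(X), size=2)
    hi, lo = (i, j) if X[i] @ true_w > X[j] @ true_w else (j, i)
    pairs.append((X[hi], X[lo]))

# Fit a linear reward r(x) = w @ x by maximizing the Bradley-Terry
# log-likelihood: P(x_hi preferred) = sigmoid(r(x_hi) - r(x_lo)).
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    grad = np.zeros(3)
    for x_hi, x_lo in pairs:
        p = 1.0 / (1.0 + np.exp(-(x_hi - x_lo) @ w))
        grad += (1.0 - p) * (x_hi - x_lo)  # d/dw of log sigmoid(margin)
    w += lr * grad / len(pairs)

# The learned reward should rank unseen pairs like the expert does.
test = rng.normal(size=(100, 3))
ti = rng.integers(0, 100, size=300)
tj = rng.integers(0, 100, size=300)
diff = test[ti] - test[tj]
agree = np.mean(((diff @ w) > 0) == ((diff @ true_w) > 0))
```

The same logistic preference objective carries over to a convolutional reward network on images; the paper additionally constrains the latent space with a mutual-information estimate so that task-related and domain features are disentangled, which this sketch does not attempt.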
Persistent Identifier: http://hdl.handle.net/10722/365355
ISSN: 0278-3649
2023 Impact Factor: 7.5
2023 SCImago Journal Rankings: 4.346


DC Field: Value
dc.contributor.author: Jiang, Zhongliang
dc.contributor.author: Bi, Yuan
dc.contributor.author: Zhou, Mingchuan
dc.contributor.author: Hu, Ying
dc.contributor.author: Burke, Michael
dc.contributor.author: Navab, Nassir
dc.date.accessioned: 2025-11-05T06:55:36Z
dc.date.available: 2025-11-05T06:55:36Z
dc.date.issued: 2024
dc.identifier.citation: International Journal of Robotics Research, 2024, v. 43, n. 7, p. 981-1002
dc.identifier.issn: 0278-3649
dc.identifier.uri: http://hdl.handle.net/10722/365355
dc.description.abstract: Ultrasound (US) imaging is widely used for biometric measurement and diagnosis of internal organs due to the advantages of being real-time and radiation-free. However, due to inter-operator variations, resulting images highly depend on the experience of sonographers. This work proposes an intelligent robotic sonographer to autonomously “explore” target anatomies and navigate a US probe to standard planes by learning from the expert. The underlying high-level physiological knowledge from experts is inferred by a neural reward function, using a ranked pairwise image comparison approach in a self-supervised fashion. This process can be referred to as understanding the “language of sonography.” Considering the generalization capability to overcome inter-patient variations, mutual information is estimated by a network to explicitly disentangle the task-related and domain features in latent space. The robotic localization is carried out in coarse-to-fine mode based on the predicted reward associated with B-mode images. To validate the effectiveness of the proposed reward inference network, representative experiments were performed on vascular phantoms (“line” target), two types of ex vivo animal organ phantoms (chicken heart and lamb kidney representing “point” target), and in vivo human carotids. To further validate the performance of the autonomous acquisition framework, physical robotic acquisitions were performed on three phantoms (vascular, chicken heart, and lamb kidney). The results demonstrated that the proposed advanced framework can robustly work on a variety of seen and unseen phantoms as well as in vivo human carotid data. Code: https://github.com/yuan-12138/MI-GPSR. Video: https://youtu.be/u4ThAA9onE0.
dc.language: eng
dc.relation.ispartof: International Journal of Robotics Research
dc.subject: latent feature disentanglement
dc.subject: learning from demonstration
dc.subject: medical robotics
dc.subject: Robotic ultrasound
dc.title: Intelligent robotic sonographer: Mutual information-based disentangled reward learning from few demonstrations
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1177/02783649231223547
dc.identifier.scopus: eid_2-s2.0-85182864384
dc.identifier.volume: 43
dc.identifier.issue: 7
dc.identifier.spage: 981
dc.identifier.epage: 1002
dc.identifier.eissn: 1741-3176
