Conference Paper: Dual-teacher: Integrating intra-domain and inter-domain teachers for annotation-efficient cardiac segmentation

Title: Dual-teacher: Integrating intra-domain and inter-domain teachers for annotation-efficient cardiac segmentation
Authors: Li, Kang; Wang, Shujun; Yu, Lequan; Heng, Pheng Ann
Keywords: Cardiac segmentation; Cross-modality segmentation; Semi-supervised domain adaptation
Issue Date: 2020
Publisher: Springer
Citation: 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2020), Lima, Peru, 4-8 October 2020. In Martel, AL, Abolmaesumi, P, Stoyanov, D, et al. (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part I, pp. 418-427. Cham, Switzerland: Springer, 2020.
Abstract: Medical image annotations are prohibitively time-consuming and expensive to obtain. To alleviate annotation scarcity, many approaches have been developed to efficiently use extra information, e.g., semi-supervised learning, which further exploits plentiful unlabeled data, and domain adaptation, including multi-modality learning and unsupervised domain adaptation, which draws on prior knowledge from an additional modality. In this paper, we investigate the feasibility of simultaneously leveraging abundant unlabeled data and well-established cross-modality data for annotation-efficient medical image segmentation. To this end, we propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher, in which the student model not only learns from labeled target data (e.g., CT), but also explores unlabeled target data and labeled source data (e.g., MR) through two teacher models. Specifically, the student model learns the knowledge of unlabeled target data from the intra-domain teacher by encouraging prediction consistency, as well as the shape priors embedded in labeled source data from the inter-domain teacher via knowledge distillation. Consequently, the student model can effectively exploit the information from all three data resources and comprehensively integrate them to achieve improved performance. We conduct extensive experiments on the MM-WHS 2017 dataset and demonstrate that our approach can concurrently utilize unlabeled data and cross-modality data with superior performance, outperforming semi-supervised learning and domain adaptation methods by a large margin.
Persistent Identifier: http://hdl.handle.net/10722/299473
ISBN: 9783030597092
ISSN: 0302-9743 (print); 1611-3349 (electronic)
2023 SCImago Journal Rankings: 0.606
Series/Report no.: Lecture Notes in Computer Science; 12261

 

DC Field: Value
dc.contributor.author: Li, Kang
dc.contributor.author: Wang, Shujun
dc.contributor.author: Yu, Lequan
dc.contributor.author: Heng, Pheng Ann
dc.date.accessioned: 2021-05-21T03:34:29Z
dc.date.available: 2021-05-21T03:34:29Z
dc.date.issued: 2020
dc.identifier.isbn: 9783030597092
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/299473
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part I
dc.relation.ispartofseries: Lecture Notes in Computer Science; 12261
dc.subject: Cardiac segmentation
dc.subject: Cross-modality segmentation
dc.subject: Semi-supervised domain adaptation
dc.title: Dual-teacher: Integrating intra-domain and inter-domain teachers for annotation-efficient cardiac segmentation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-030-59710-8_41
dc.identifier.scopus: eid_2-s2.0-85093068342
dc.identifier.spage: 418
dc.identifier.epage: 427
dc.identifier.eissn: 1611-3349
dc.publisher.place: Cham, Switzerland
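
As a concrete illustration of the training scheme described in the abstract, the following is a minimal, hypothetical PyTorch sketch of one Dual-Teacher update step: the student learns from labeled target (CT) data via a supervised loss, from unlabeled target data via a consistency loss against an intra-domain teacher, and from labeled source (MR) data via knowledge distillation from an inter-domain teacher. The tiny stand-in network, loss weights, the exponential-moving-average (EMA) update for the intra-domain teacher, and the assumption that source images are already aligned to target appearance are illustrative choices, not the authors' released implementation.

```python
# Hypothetical sketch of one Dual-Teacher training step (not the authors' code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Stand-in segmentation network (a real setup would use a U-Net or similar)."""
    def __init__(self, in_ch=1, n_classes=5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.body(x)  # logits, shape (B, n_classes, H, W)

student = TinySegNet()
intra_teacher = copy.deepcopy(student)   # intra-domain teacher, kept as an EMA of the student (assumption)
inter_teacher = TinySegNet()             # inter-domain teacher, assumed pretrained on labeled source (MR) data
for p in list(intra_teacher.parameters()) + list(inter_teacher.parameters()):
    p.requires_grad_(False)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
lambda_con, lambda_kd = 0.1, 0.1         # loss weights: illustrative values only

def ema_update(teacher, student, alpha=0.99):
    """Refresh the intra-domain teacher as an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def train_step(ct_img, ct_lbl, ct_unl, src_img):
    """ct_img/ct_lbl: labeled target (CT); ct_unl: unlabeled CT; src_img: labeled source (MR),
    assumed here to be already aligned to target appearance."""
    # 1) Supervised loss on labeled target data.
    sup_loss = F.cross_entropy(student(ct_img), ct_lbl)

    # 2) Prediction consistency with the intra-domain teacher on unlabeled target data.
    with torch.no_grad():
        intra_prob = F.softmax(intra_teacher(ct_unl), dim=1)
    con_loss = F.mse_loss(F.softmax(student(ct_unl), dim=1), intra_prob)

    # 3) Knowledge distillation from the inter-domain teacher's soft predictions.
    with torch.no_grad():
        inter_prob = F.softmax(inter_teacher(src_img), dim=1)
    kd_loss = F.kl_div(F.log_softmax(student(src_img), dim=1), inter_prob,
                       reduction='batchmean')

    loss = sup_loss + lambda_con * con_loss + lambda_kd * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(intra_teacher, student)   # update intra-domain teacher after each step
    return loss.item()

# Example call with random tensors (batch of 2, single-channel 64x64 slices, 5 classes):
loss_val = train_step(torch.randn(2, 1, 64, 64),
                      torch.randint(0, 5, (2, 64, 64)),
                      torch.randn(2, 1, 64, 64),
                      torch.randn(2, 1, 64, 64))
```

In this sketch the three loss terms mirror the three data resources named in the abstract; in practice the teacher architectures, loss weighting, and any appearance alignment between MR and CT would follow the paper rather than the placeholder values used here.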
