Article: Semi-Supervised Medical Image Classification With Relation-Driven Self-Ensembling Model

Title: Semi-Supervised Medical Image Classification With Relation-Driven Self-Ensembling Model
Authors: Liu, Quande; Yu, Lequan; Luo, Luyang; Dou, Qi; Heng, Pheng Ann
Issue Date: 2020
Citation: IEEE Transactions on Medical Imaging, 2020, v. 39, n. 11, p. 3429-3440
Abstract: Training deep neural networks usually requires a large amount of labeled data to achieve good performance. However, in medical image analysis, obtaining high-quality labels is laborious and expensive, as accurately annotating medical images demands expert knowledge from clinicians. In this paper, we present a novel relation-driven semi-supervised framework for medical image classification. It is a consistency-based method that exploits unlabeled data by encouraging consistent predictions for a given input under perturbations, and it leverages a self-ensembling model to produce high-quality consistency targets for the unlabeled data. Considering that human diagnosis often refers to previous analogous cases to make reliable decisions, we introduce a novel sample relation consistency (SRC) paradigm that effectively exploits unlabeled data by modeling the relationship information among different samples. In contrast to existing consistency-based methods, which simply enforce consistency of individual predictions, our framework explicitly enforces consistency of the semantic relations among different samples under perturbations, encouraging the model to explore extra semantic information from unlabeled data. We conducted extensive experiments to evaluate our method on two public benchmark medical image classification datasets: skin lesion diagnosis on the ISIC 2018 challenge and thorax disease classification on ChestX-ray14. Our method outperforms many state-of-the-art semi-supervised learning methods in both single-label and multi-label image classification scenarios.
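The abstract describes a consistency-based framework in which a student model and a self-ensembled (EMA) teacher are encouraged to agree both on per-sample predictions and on the pairwise relations within a mini-batch (SRC). The sketch below, assuming a PyTorch mean-teacher setup, illustrates one way these pieces can fit together; the helper names (relation_matrix, src_loss, ema_update, train_step), the Gram-matrix formulation of sample relations, the model interface returning (logits, features), and the ramp-up weight w_t are illustrative assumptions rather than the authors' released code.

```python
# Minimal, illustrative sketch of a mean-teacher model with a sample relation
# consistency (SRC) term, under the assumptions stated above.
import torch
import torch.nn.functional as F


def relation_matrix(features: torch.Tensor) -> torch.Tensor:
    """Pairwise sample relations: L2-normalize each sample's features, then take the Gram (cosine) matrix."""
    flat = F.normalize(features.flatten(start_dim=1), dim=1)  # (batch, feat_dim)
    return flat @ flat.t()                                    # (batch, batch)


def src_loss(student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between the relation matrices of the student and the (detached) teacher."""
    return F.mse_loss(relation_matrix(student_feats),
                      relation_matrix(teacher_feats).detach())


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99) -> None:
    """Self-ensembling: teacher weights track an exponential moving average of the student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)


def train_step(student, teacher, optimizer, labeled_x, labels, unlabeled_x, w_t):
    """One hypothetical training step: supervised loss plus weighted prediction- and relation-consistency."""
    # Assumed model interface: forward() returns (logits, intermediate features).
    logits_l, _ = student(labeled_x)
    sup = F.cross_entropy(logits_l, labels)

    s_logits, s_feats = student(unlabeled_x)   # perturbations (augmentation, dropout noise)
    with torch.no_grad():                      # are assumed to be applied upstream
        t_logits, t_feats = teacher(unlabeled_x)

    pred_cons = F.mse_loss(torch.softmax(s_logits, dim=1),
                           torch.softmax(t_logits, dim=1))
    loss = sup + w_t * (pred_cons + src_loss(s_feats, t_feats))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```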
Persistent Identifier: http://hdl.handle.net/10722/299472
ISI Accession Number ID: WOS:000586352000016


DC Fields
dc.contributor.author: Liu, Quande
dc.contributor.author: Yu, Lequan
dc.contributor.author: Luo, Luyang
dc.contributor.author: Dou, Qi
dc.contributor.author: Heng, Pheng Ann
dc.date.accessioned: 2021-05-21T03:34:29Z
dc.date.available: 2021-05-21T03:34:29Z
dc.date.issued: 2020
dc.identifier.citation: IEEE transactions on medical imaging, 2020, v. 39, n. 11, p. 3429-3440
dc.identifier.uri: http://hdl.handle.net/10722/299472
dc.language: eng
dc.relation.ispartof: IEEE transactions on medical imaging
dc.title: Semi-Supervised Medical Image Classification With Relation-Driven Self-Ensembling Model
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TMI.2020.2995518
dc.identifier.pmid: 32746096
dc.identifier.scopus: eid_2-s2.0-85092925209
dc.identifier.volume: 39
dc.identifier.issue: 11
dc.identifier.spage: 3429
dc.identifier.epage: 3440
dc.identifier.eissn: 1558-254X
dc.identifier.isi: WOS:000586352000016
