Article: Self-Supervised Feature Learning via Exploiting Multi-Modal Data for Retinal Disease Diagnosis

Title: Self-Supervised Feature Learning via Exploiting Multi-Modal Data for Retinal Disease Diagnosis
Authors: Li, Xiaomeng; Jia, Mengyu; Islam, Md Tauhidul; Yu, Lequan; Xing, Lei
Keywords: self-supervised learning; multi-modal data; retinal disease diagnosis
Issue Date: 2020
Citation: IEEE Transactions on Medical Imaging, 2020, v. 39, n. 12, p. 4023-4033
Abstract: The automatic diagnosis of various retinal diseases from fundus images is important for supporting clinical decision-making. However, developing such automatic solutions is challenging because a large amount of human-annotated data is required. Recently, unsupervised/self-supervised feature learning techniques have received considerable attention, as they do not need massive annotations. Most current self-supervised methods are analyzed with a single imaging modality, and no current method utilizes multi-modal images for better results. Considering that the diagnosis of various vitreoretinal diseases can greatly benefit from another imaging modality, e.g., FFA, this paper presents a novel self-supervised feature learning method that effectively exploits multi-modal data for retinal disease diagnosis. To achieve this, we first synthesize the corresponding FFA modality and then formulate a patient feature-based softmax embedding objective. Our objective learns both modality-invariant features and patient-similarity features. Through this mechanism, the neural network captures the semantically shared information across different modalities and the apparent visual similarity between patients. We evaluate our method on two public benchmark datasets for retinal disease diagnosis. The experimental results demonstrate that our method clearly outperforms other self-supervised feature learning methods and is comparable to the supervised baseline. Our code is available on GitHub.
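The cross-modal softmax embedding objective described in the abstract can be illustrated with a minimal NumPy sketch of an InfoNCE-style loss: each patient's fundus feature is pulled toward the feature of that patient's (synthesized) FFA image, with other patients' FFA features serving as negatives. The function name, temperature parameter, and exact formulation here are illustrative assumptions, not the authors' precise objective.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize feature vectors to unit length."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def softmax_embedding_loss(fundus_feats, ffa_feats, tau=0.1):
    """InfoNCE-style cross-modal softmax embedding loss (sketch).

    For each patient i, the fundus feature f_i should match the
    synthesized-FFA feature g_i of the same patient (the positive),
    against the FFA features of all other patients (the negatives).

    fundus_feats, ffa_feats: arrays of shape (N, D), one row per patient.
    tau: temperature scaling the cosine similarities.
    """
    f = l2_normalize(fundus_feats)            # (N, D)
    g = l2_normalize(ffa_feats)               # (N, D)
    logits = f @ g.T / tau                    # (N, N) pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Diagonal entries are the same-patient (positive) pairs.
    return -np.mean(np.diag(log_prob))
```

When the two modalities' features for the same patient align, the diagonal similarities dominate each softmax row and the loss approaches zero; mismatched features yield a loss near log N. Minimizing such an objective encourages modality-invariant, patient-discriminative representations.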
Persistent Identifier: http://hdl.handle.net/10722/299481
ISSN: 0278-0062
2021 Impact Factor: 11.037
2020 SCImago Journal Rankings: 2.322
ISI Accession Number ID: WOS:000595547500024

 

DC Field: Value
dc.contributor.author: Li, Xiaomeng
dc.contributor.author: Jia, Mengyu
dc.contributor.author: Islam, Md Tauhidul
dc.contributor.author: Yu, Lequan
dc.contributor.author: Xing, Lei
dc.date.accessioned: 2021-05-21T03:34:30Z
dc.date.available: 2021-05-21T03:34:30Z
dc.date.issued: 2020
dc.identifier.citation: IEEE Transactions on Medical Imaging, 2020, v. 39, n. 12, p. 4023-4033
dc.identifier.issn: 0278-0062
dc.identifier.uri: http://hdl.handle.net/10722/299481
dc.description.abstract: The automatic diagnosis of various retinal diseases from fundus images is important for supporting clinical decision-making. However, developing such automatic solutions is challenging because a large amount of human-annotated data is required. Recently, unsupervised/self-supervised feature learning techniques have received considerable attention, as they do not need massive annotations. Most current self-supervised methods are analyzed with a single imaging modality, and no current method utilizes multi-modal images for better results. Considering that the diagnosis of various vitreoretinal diseases can greatly benefit from another imaging modality, e.g., FFA, this paper presents a novel self-supervised feature learning method that effectively exploits multi-modal data for retinal disease diagnosis. To achieve this, we first synthesize the corresponding FFA modality and then formulate a patient feature-based softmax embedding objective. Our objective learns both modality-invariant features and patient-similarity features. Through this mechanism, the neural network captures the semantically shared information across different modalities and the apparent visual similarity between patients. We evaluate our method on two public benchmark datasets for retinal disease diagnosis. The experimental results demonstrate that our method clearly outperforms other self-supervised feature learning methods and is comparable to the supervised baseline. Our code is available on GitHub.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Medical Imaging
dc.subject: self-supervised learning
dc.subject: multi-modal data
dc.subject: retinal disease diagnosis
dc.title: Self-Supervised Feature Learning via Exploiting Multi-Modal Data for Retinal Disease Diagnosis
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TMI.2020.3008871
dc.identifier.pmid: 32746140
dc.identifier.scopus: eid_2-s2.0-85097004086
dc.identifier.volume: 39
dc.identifier.issue: 12
dc.identifier.spage: 4023
dc.identifier.epage: 4033
dc.identifier.eissn: 1558-254X
dc.identifier.isi: WOS:000595547500024
