Article: Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging

Title: Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging
Authors: Yan, Rui; Qu, Liangqiong; Wei, Qingyue; Huang, Shih Cheng; Shen, Liyue; Rubin, Daniel; Xing, Lei; Zhou, Yuyin
Keywords: Biomedical imaging; Data Efficiency; Data models; Distributed databases; Federated Learning; Self-supervised Learning; Task analysis; Training; Transformers; Vision Transformers
Issue Date: 2022
Citation: IEEE Transactions on Medical Imaging, 2022
Abstract: The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
Persistent Identifier: http://hdl.handle.net/10722/325597
ISSN: 0278-0062
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 3.703
ISI Accession Number ID: WOS:001022138900003
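The scheme the abstract describes — self-supervised masked image modeling run locally at each client, with a server periodically averaging the resulting weights, FedAvg-style — can be sketched in miniature. The sketch below is illustrative only and is not taken from the paper's released code (see the linked SSL-FL repository for the actual implementation): a toy tied-weight linear autoencoder stands in for the Vision Transformer, feature masking stands in for patch masking, and all names are invented.

```python
import numpy as np

def mask_inputs(x, mask_ratio, rng):
    """Zero out a random fraction of input features, mimicking masked patches."""
    mask = rng.random(x.shape) < mask_ratio
    return np.where(mask, 0.0, x), mask

def local_mim_updates(W, x, mask_ratio, lr, steps, rng):
    """Client-side 'masked image modeling' on local data only.
    Toy model: tied-weight linear autoencoder x_hat = (x_masked @ W) @ W.T,
    trained by gradient descent on the MSE over the masked positions."""
    W = W.copy()
    for _ in range(steps):
        x_masked, mask = mask_inputs(x, mask_ratio, rng)
        err = (x_masked @ W @ W.T - x) * mask          # loss only where masked
        grad = (x_masked.T @ err + err.T @ x_masked) @ W / len(x)
        W -= lr * grad
    return W

def fedavg_round(W_global, client_datasets, mask_ratio, lr, steps, rng):
    """One communication round: broadcast weights, pre-train locally at each
    client, then average the client models weighted by dataset size."""
    sizes = np.array([len(x) for x in client_datasets], dtype=float)
    client_Ws = [local_mim_updates(W_global, x, mask_ratio, lr, steps, rng)
                 for x in client_datasets]
    weights = sizes / sizes.sum()
    return sum(w * Wk for w, Wk in zip(weights, client_Ws))

def recon_error(W, x):
    """Unmasked reconstruction MSE, used here to track pre-training progress."""
    return float(np.mean((x @ W @ W.T - x) ** 2))

rng = np.random.default_rng(0)
d, k = 16, 4
basis = rng.standard_normal((k, d)) / np.sqrt(d)       # shared low-rank structure
# Three 'institutions' of different sizes -- a crude stand-in for non-IID data.
clients = [rng.standard_normal((n, k)) @ basis for n in (64, 32, 16)]

W = 0.1 * rng.standard_normal((d, k))
before = recon_error(W, np.vstack(clients))
for _ in range(20):                                    # 20 communication rounds
    W = fedavg_round(W, clients, mask_ratio=0.5, lr=0.1, steps=10, rng=rng)
after = recon_error(W, np.vstack(clients))
print(before, after)
```

The masked-position loss forces each client model to predict hidden features from visible ones, so a useful representation can be learned without any labels; the size-weighted average is the plain FedAvg aggregation rule, and reconstruction error on the pooled data should fall over the rounds.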

 

DC Field / Value

dc.contributor.author: Yan, Rui
dc.contributor.author: Qu, Liangqiong
dc.contributor.author: Wei, Qingyue
dc.contributor.author: Huang, Shih Cheng
dc.contributor.author: Shen, Liyue
dc.contributor.author: Rubin, Daniel
dc.contributor.author: Xing, Lei
dc.contributor.author: Zhou, Yuyin
dc.date.accessioned: 2023-02-27T07:34:39Z
dc.date.available: 2023-02-27T07:34:39Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Medical Imaging, 2022
dc.identifier.issn: 0278-0062
dc.identifier.uri: http://hdl.handle.net/10722/325597
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Medical Imaging
dc.subject: Biomedical imaging
dc.subject: Data Efficiency
dc.subject: Data models
dc.subject: Distributed databases
dc.subject: Federated Learning
dc.subject: Self-supervised Learning
dc.subject: Task analysis
dc.subject: Training
dc.subject: Transformers
dc.subject: Vision Transformers
dc.title: Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TMI.2022.3233574
dc.identifier.scopus: eid_2-s2.0-85147205950
dc.identifier.eissn: 1558-254X
dc.identifier.isi: WOS:001022138900003