Article: Privacy preservation for federated learning in health care

Title: Privacy preservation for federated learning in health care
Authors: Pati, Sarthak; Kumar, Sourav; Varma, Amokh; Edwards, Brandon; Lu, Charles; Qu, Liangqiong; Wang, Justin J.; Lakshminarayanan, Anantharaman; Wang, Shih han; Sheller, Micah J.; Chang, Ken; Singh, Praveer; Rubin, Daniel L.; Kalpathy-Cramer, Jayashree; Bakas, Spyridon
Keywords: federated learning; health care; privacy; review article; security
Issue Date: 12-Jul-2024
Publisher: Cell Press
Citation: Patterns, 2024, v. 5, n. 7
Abstract: Artificial intelligence (AI) shows potential to improve health care by leveraging data to build models that can inform clinical workflows. However, access to large quantities of diverse data is needed to develop robust generalizable models. Data sharing across institutions is not always feasible due to legal, security, and privacy concerns. Federated learning (FL) allows for multi-institutional training of AI models, obviating data sharing, albeit with different security and privacy concerns. Specifically, insights exchanged during FL can leak information about institutional data. In addition, FL can introduce issues when there is limited trust among the entities performing the compute. With the growing adoption of FL in health care, it is imperative to elucidate the potential risks. We thus summarize privacy-preserving FL literature in this work with special regard to health care. We draw attention to threats and review mitigation approaches. We anticipate this review to become a health-care researcher's guide to security and privacy in FL.
Persistent Identifier: http://hdl.handle.net/10722/357493
ISI Accession Number: WOS:001267890400001

DC Field: Value

dc.contributor.author: Pati, Sarthak
dc.contributor.author: Kumar, Sourav
dc.contributor.author: Varma, Amokh
dc.contributor.author: Edwards, Brandon
dc.contributor.author: Lu, Charles
dc.contributor.author: Qu, Liangqiong
dc.contributor.author: Wang, Justin J.
dc.contributor.author: Lakshminarayanan, Anantharaman
dc.contributor.author: Wang, Shih han
dc.contributor.author: Sheller, Micah J.
dc.contributor.author: Chang, Ken
dc.contributor.author: Singh, Praveer
dc.contributor.author: Rubin, Daniel L.
dc.contributor.author: Kalpathy-Cramer, Jayashree
dc.contributor.author: Bakas, Spyridon
dc.date.accessioned: 2025-07-22T03:13:05Z
dc.date.available: 2025-07-22T03:13:05Z
dc.date.issued: 2024-07-12
dc.identifier.citation: Patterns, 2024, v. 5, n. 7
dc.identifier.uri: http://hdl.handle.net/10722/357493
dc.description.abstract: Artificial intelligence (AI) shows potential to improve health care by leveraging data to build models that can inform clinical workflows. However, access to large quantities of diverse data is needed to develop robust generalizable models. Data sharing across institutions is not always feasible due to legal, security, and privacy concerns. Federated learning (FL) allows for multi-institutional training of AI models, obviating data sharing, albeit with different security and privacy concerns. Specifically, insights exchanged during FL can leak information about institutional data. In addition, FL can introduce issues when there is limited trust among the entities performing the compute. With the growing adoption of FL in health care, it is imperative to elucidate the potential risks. We thus summarize privacy-preserving FL literature in this work with special regard to health care. We draw attention to threats and review mitigation approaches. We anticipate this review to become a health-care researcher's guide to security and privacy in FL.
dc.language: eng
dc.publisher: Cell Press
dc.relation.ispartof: Patterns
dc.subject: federated learning
dc.subject: health care
dc.subject: privacy
dc.subject: review article
dc.subject: security
dc.title: Privacy preservation for federated learning in health care
dc.type: Article
dc.identifier.doi: 10.1016/j.patter.2024.100974
dc.identifier.scopus: eid_2-s2.0-85197814165
dc.identifier.volume: 5
dc.identifier.issue: 7
dc.identifier.eissn: 2666-3899
dc.identifier.isi: WOS:001267890400001
dc.identifier.issnl: 2666-3899
