Article: Demystifying Data Poisoning Attacks in Distributed Learning as a Service

Title: Demystifying Data Poisoning Attacks in Distributed Learning as a Service
Authors: Wei, Wenqi; Chow, Ka Ho; Wu, Yanzhao; Liu, Ling
Keywords: Data poisoning; federated learning; security analysis
Issue Date: 2024
Citation: IEEE Transactions on Services Computing, 2024, v. 17, n. 1, p. 237-250
Abstract: Data poisoning is a dominant threat in distributed learning-as-a-service APIs, where the mediator has limited control over the distributed clients contributing to the joint model. Through an in-depth characterization of data poisoning risks in federated learning, this paper presents a comprehensive study toward demystifying data poisoning attacks from three perspectives. First, we formally define the targeted dirty-label data poisoning attack, which aims to cause the trained global model to misclassify only inputs from a specific victim class with a designated malicious behavior. We then demonstrate the theoretical statistical robustness of the eigenvalues of the covariance of the gradient updates shared from clients to the server under a data poisoning attack. Second, we study the impact of attack timing and identify the most detrimental attack entry point during federated training. Last, we examine several existing defenses against data poisoning in addition to robust statistical detection. Through formal analysis and extensive empirical evidence, we investigate under what conditions the statistical robustness under data poisoning can serve as forensic evidence for attack mitigation in federated-learning-as-a-service, at what timing the attack is most detrimental, and how the attack behaves in the presence of existing defenses.
Persistent Identifier: http://hdl.handle.net/10722/343446
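
The abstract above centers on a spectral statistic: the eigenvalues of the covariance of the per-client gradient updates shared with the server. The Python sketch below is a minimal, self-contained illustration of that idea only, not the authors' implementation; the client count, update dimensionality, noise scales, and the way compromised dirty-label clients are simulated are all assumptions made for this example.

    # Illustrative sketch (assumed setup, not the paper's code): compare the top
    # eigenvalue of the covariance of client gradient updates with and without
    # a few poisoned clients pushing a consistent dirty-label direction.
    import numpy as np

    rng = np.random.default_rng(0)
    n_clients, dim = 20, 64          # assumed federation size and update dimension

    # Honest clients: updates scattered around a common descent direction.
    base = rng.normal(size=dim)
    updates = base + 0.1 * rng.normal(size=(n_clients, dim))

    # Targeted dirty-label attackers: a few clients contribute gradients shifted
    # along a shared direction, standing in for victim-class samples relabeled
    # to the attacker's target class.
    poison_dir = rng.normal(size=dim)
    updates[:3] += 0.8 * poison_dir  # 3 compromised clients (assumed)

    def top_eigenvalue(u):
        """Largest eigenvalue of the sample covariance of the rows of u."""
        centered = u - u.mean(axis=0)
        cov = centered.T @ centered / (len(u) - 1)
        return np.linalg.eigvalsh(cov)[-1]

    print("top eigenvalue, all clients:   ", top_eigenvalue(updates))
    print("top eigenvalue, honest clients:", top_eigenvalue(updates[3:]))

In this toy setup the top eigenvalue computed over all clients is much larger than the one computed over the honest clients alone; the paper's analysis concerns when such a spectral gap is robust enough to serve as forensic evidence for attack mitigation.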

 

DC Field: Value
dc.contributor.author: Wei, Wenqi
dc.contributor.author: Chow, Ka Ho
dc.contributor.author: Wu, Yanzhao
dc.contributor.author: Liu, Ling
dc.date.accessioned: 2024-05-10T09:08:12Z
dc.date.available: 2024-05-10T09:08:12Z
dc.date.issued: 2024
dc.identifier.citation: IEEE Transactions on Services Computing, 2024, v. 17, n. 1, p. 237-250
dc.identifier.uri: http://hdl.handle.net/10722/343446
dc.description.abstract: Data poisoning is a dominant threat in distributed learning-as-a-service APIs, where the mediator has limited control over the distributed clients contributing to the joint model. Through an in-depth characterization of data poisoning risks in federated learning, this paper presents a comprehensive study toward demystifying data poisoning attacks from three perspectives. First, we formally define the targeted dirty-label data poisoning attack, which aims to cause the trained global model to misclassify only inputs from a specific victim class with a designated malicious behavior. We then demonstrate the theoretical statistical robustness of the eigenvalues of the covariance of the gradient updates shared from clients to the server under a data poisoning attack. Second, we study the impact of attack timing and identify the most detrimental attack entry point during federated training. Last, we examine several existing defenses against data poisoning in addition to robust statistical detection. Through formal analysis and extensive empirical evidence, we investigate under what conditions the statistical robustness under data poisoning can serve as forensic evidence for attack mitigation in federated-learning-as-a-service, at what timing the attack is most detrimental, and how the attack behaves in the presence of existing defenses.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Services Computing
dc.subject: Data poisoning
dc.subject: federated learning
dc.subject: security analysis
dc.title: Demystifying Data Poisoning Attacks in Distributed Learning as a Service
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TSC.2023.3341951
dc.identifier.scopus: eid_2-s2.0-85179823929
dc.identifier.volume: 17
dc.identifier.issue: 1
dc.identifier.spage: 237
dc.identifier.epage: 250
dc.identifier.eissn: 1939-1374
