Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/TSC.2023.3341951
- Scopus: eid_2-s2.0-85179823929
Citations:
- Scopus: 0

Appears in Collections: Article

Demystifying Data Poisoning Attacks in Distributed Learning as a Service
Field | Value
---|---
Title | Demystifying Data Poisoning Attacks in Distributed Learning as a Service
Authors | Wei, Wenqi; Chow, Ka Ho; Wu, Yanzhao; Liu, Ling
Keywords | Data poisoning; federated learning; security analysis
Issue Date | 2024
Citation | IEEE Transactions on Services Computing, 2024, v. 17, n. 1, p. 237-250
Abstract | Data poisoning is a dominant threat in distributed learning-as-a-service APIs, where the mediator has limited control over the distributed clients contributing to the joint model. Through an in-depth characterization of data poisoning risks in federated learning, this paper presents a comprehensive study that demystifies data poisoning attacks from three perspectives. First, we formally define the targeted dirty-label data poisoning attack, which aims to cause the trained global model to misclassify only the inputs from a specific victim class with a designated malicious behavior. We then demonstrate theoretical statistical robustness in the eigenvalues of the covariance of the gradient updates shared from clients to the server under a data poisoning attack. Second, we study the impact of attack timing and identify the most detrimental attack entry point during federated training. Last, we examine several existing defenses against data poisoning in addition to robust statistic detection. Through formal analysis and extensive empirical evidence, we investigate under what conditions the statistical robustness of data poisoning can serve as forensic evidence for attack mitigation in federated-learning-as-a-service, at what timing the attack is most detrimental, and how the attack reacts in the presence of existing defenses.
Persistent Identifier | http://hdl.handle.net/10722/343446
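The abstract above defines a targeted dirty-label data poisoning attack: a malicious client relabels examples of one victim class so that the global model misclassifies only that class. A minimal sketch of the relabeling step, assuming hypothetical class indices and a simple `(features, label)` dataset layout (none of these names come from the paper):

```python
# Hypothetical sketch of targeted dirty-label poisoning (illustrative only):
# a malicious client flips every label of a chosen victim class to an
# attacker-chosen target class before local training, leaving all other
# classes untouched.
from typing import List, Tuple

VICTIM_CLASS = 3   # class the attacker wants misclassified (assumption)
TARGET_CLASS = 8   # malicious label assigned to victim-class examples

def dirty_label_flip(dataset: List[Tuple[list, int]]) -> List[Tuple[list, int]]:
    """Return a copy of (features, label) pairs with victim labels flipped."""
    return [(x, TARGET_CLASS if y == VICTIM_CLASS else y) for x, y in dataset]

# Example: labels 3 become 8; all other labels are unchanged.
clean = [([0.1], 3), ([0.2], 5), ([0.3], 3)]
print([y for _, y in dirty_label_flip(clean)])  # → [8, 5, 8]
```

Because only the victim class is relabeled, the poisoned client's local loss and accuracy on the remaining classes stay close to benign values, which is what makes this attack variant hard to spot from aggregate metrics alone.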
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wei, Wenqi | - |
dc.contributor.author | Chow, Ka Ho | - |
dc.contributor.author | Wu, Yanzhao | - |
dc.contributor.author | Liu, Ling | - |
dc.date.accessioned | 2024-05-10T09:08:12Z | - |
dc.date.available | 2024-05-10T09:08:12Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | IEEE Transactions on Services Computing, 2024, v. 17, n. 1, p. 237-250 | - |
dc.identifier.uri | http://hdl.handle.net/10722/343446 | - |
dc.description.abstract | Data poisoning is a dominant threat in distributed learning-as-a-service APIs, where the mediator has limited control over the distributed clients contributing to the joint model. Through an in-depth characterization of data poisoning risks in federated learning, this paper presents a comprehensive study that demystifies data poisoning attacks from three perspectives. First, we formally define the targeted dirty-label data poisoning attack, which aims to cause the trained global model to misclassify only the inputs from a specific victim class with a designated malicious behavior. We then demonstrate theoretical statistical robustness in the eigenvalues of the covariance of the gradient updates shared from clients to the server under a data poisoning attack. Second, we study the impact of attack timing and identify the most detrimental attack entry point during federated training. Last, we examine several existing defenses against data poisoning in addition to robust statistic detection. Through formal analysis and extensive empirical evidence, we investigate under what conditions the statistical robustness of data poisoning can serve as forensic evidence for attack mitigation in federated-learning-as-a-service, at what timing the attack is most detrimental, and how the attack reacts in the presence of existing defenses. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Services Computing | - |
dc.subject | Data poisoning | - |
dc.subject | federated learning | - |
dc.subject | security analysis | - |
dc.title | Demystifying Data Poisoning Attacks in Distributed Learning as a Service | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TSC.2023.3341951 | - |
dc.identifier.scopus | eid_2-s2.0-85179823929 | - |
dc.identifier.volume | 17 | - |
dc.identifier.issue | 1 | - |
dc.identifier.spage | 237 | - |
dc.identifier.epage | 250 | - |
dc.identifier.eissn | 1939-1374 | - |
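The abstract points to statistical robustness in the eigenvalues of the covariance of client gradient updates as forensic evidence of poisoning. A minimal sketch of that idea under simplifying assumptions (synthetic updates, a fixed number of flagged clients, NumPy in place of whatever the paper actually uses): score each client's update by its projection onto the top principal direction of the update covariance and flag the outliers.

```python
# Hypothetical sketch (not the paper's code): flag suspicious client updates
# via the spectral signature the abstract describes -- poisoned updates
# inflate variance along the top eigenvector of the update covariance.
import numpy as np

rng = np.random.default_rng(0)

# Simulated gradient updates from 10 clients (dimension 50): 8 benign
# clients cluster around a shared direction; 2 poisoned clients push in a
# different direction with larger magnitude.
benign_dir = rng.normal(size=50)
benign = benign_dir + 0.1 * rng.normal(size=(8, 50))
poison_dir = rng.normal(size=50)
poisoned = 3.0 * poison_dir + 0.1 * rng.normal(size=(2, 50))
updates = np.vstack([benign, poisoned])

# Empirical covariance of the centered updates and its top eigenvector.
centered = updates - updates.mean(axis=0)
cov = centered.T @ centered / len(updates)
eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
top_vec = eigvecs[:, -1]                     # top principal direction

# Score each client by |projection| onto the top direction; the poisoned
# clients sit far from the benign cluster along exactly this axis.
scores = np.abs(centered @ top_vec)
flagged = np.argsort(scores)[-2:]            # assume 2 attackers are expected
print(sorted(int(i) for i in flagged))       # → [8, 9], the poisoned clients
```

In a real deployment the number of attackers is unknown, so the cut on `scores` would come from a robust threshold (e.g. median absolute deviation) rather than a fixed top-k as assumed here.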