Links for fulltext (may require subscription):
- Publisher Website: 10.1109/TPSISA52974.2021.00017
- Scopus: eid_2-s2.0-85128732037
Citations:
- Scopus: 0
Appears in Collections:
Conference Paper: Perception Poisoning Attacks in Federated Learning
Field | Value
---|---
Title | Perception Poisoning Attacks in Federated Learning
Authors | Chow, Ka Ho; Liu, Ling
Keywords | data poisoning; deep neural networks; federated learning; object detection
Issue Date | 2021
Citation | Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021, 2021, p. 146-155
Abstract | Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for object detection over a distributed population of clients. It allows edge clients to keep their data local and only share parameter updates with a federated server. However, the distributed nature of FL also opens doors to new threats. In this paper, we present targeted perception poisoning attacks against federated object detection learning in which a subset of malicious clients seeks to poison the federated training of a global object detection model by sharing perception-poisoned local model parameters. We first introduce three targeted perception poisoning attacks, which have severe adverse effects only on the objects under attack. We then analyze the attack feasibility, the impact of malicious client availability, and attack timing. To safeguard FL systems against such contagious threats, we introduce spatial signature analysis as a defense to separate benign local model parameters from poisoned local model contributions, identify malicious clients, and eliminate their impact on the federated training. Extensive experiments on object detection benchmark datasets validate that the defense-empowered federated object detection learning can improve the robustness against all three types of perception poisoning attacks. The source code is available at https://github.com/git-disl/Perception-Poisoning.
Persistent Identifier | http://hdl.handle.net/10722/343368
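The abstract describes federated training in which clients send parameter updates to a server for aggregation, and a defense that separates poisoned updates from benign ones. As a rough, generic illustration only (this is not the paper's spatial signature analysis; the function names, the median-distance heuristic, and the z-score threshold are all assumptions for this sketch), the following contrasts plain federated averaging with a simple outlier filter over client updates:

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: the mean of all client parameter updates."""
    return np.mean(updates, axis=0)

def filtered_fedavg(updates, z_thresh=2.0):
    """Aggregate only updates whose distance to the coordinate-wise median
    is within z_thresh standard deviations of the mean distance.
    A generic robust-aggregation heuristic, NOT the paper's defense."""
    updates = np.asarray(updates)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    mu, sigma = dists.mean(), dists.std()
    if sigma > 0:
        keep = dists <= mu + z_thresh * sigma
    else:
        keep = np.ones(len(updates), dtype=bool)
    return updates[keep].mean(axis=0), keep

# Nine benign clients cluster around the true update direction; one
# malicious client submits a large adversarial deviation.
rng = np.random.default_rng(0)
benign = rng.normal(loc=1.0, scale=0.05, size=(9, 4))
poisoned = np.full((1, 4), -10.0)
updates = np.vstack([benign, poisoned])

naive = fedavg(updates)             # dragged away from 1.0 by the poisoner
robust, kept = filtered_fedavg(updates)  # poisoned update filtered out
```

The toy example shows why a single poisoned contribution can shift a naively averaged global model, and why the server must identify and exclude malicious clients rather than average blindly. The paper's actual defense operates on spatial signatures of object detection model parameters, which this simple distance filter does not capture.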
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chow, Ka Ho | - |
dc.contributor.author | Liu, Ling | - |
dc.date.accessioned | 2024-05-10T09:07:32Z | - |
dc.date.available | 2024-05-10T09:07:32Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021, 2021, p. 146-155 | - |
dc.identifier.uri | http://hdl.handle.net/10722/343368 | - |
dc.description.abstract | Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for object detection over a distributed population of clients. It allows edge clients to keep their data local and only share parameter updates with a federated server. However, the distributed nature of FL also opens doors to new threats. In this paper, we present targeted perception poisoning attacks against federated object detection learning in which a subset of malicious clients seeks to poison the federated training of a global object detection model by sharing perception-poisoned local model parameters. We first introduce three targeted perception poisoning attacks, which have severe adverse effects only on the objects under attack. We then analyze the attack feasibility, the impact of malicious client availability, and attack timing. To safeguard FL systems against such contagious threats, we introduce spatial signature analysis as a defense to separate benign local model parameters from poisoned local model contributions, identify malicious clients, and eliminate their impact on the federated training. Extensive experiments on object detection benchmark datasets validate that the defense-empowered federated object detection learning can improve the robustness against all three types of perception poisoning attacks. The source code is available at https://github.com/git-disl/Perception-Poisoning. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021 | - |
dc.subject | data poisoning | - |
dc.subject | deep neural networks | - |
dc.subject | federated learning | - |
dc.subject | object detection | - |
dc.title | Perception Poisoning Attacks in Federated Learning | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TPSISA52974.2021.00017 | - |
dc.identifier.scopus | eid_2-s2.0-85128732037 | - |
dc.identifier.spage | 146 | - |
dc.identifier.epage | 155 | - |