Conference Paper: Perception Poisoning Attacks in Federated Learning

Title: Perception Poisoning Attacks in Federated Learning
Authors: Chow, Ka Ho; Liu, Ling
Keywords: data poisoning; deep neural networks; federated learning; object detection
Issue Date: 2021
Citation: Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021, 2021, p. 146-155
Abstract: Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for object detection over a distributed population of clients. It allows edge clients to keep their data local and only share parameter updates with a federated server. However, the distributed nature of FL also opens doors to new threats. In this paper, we present targeted perception poisoning attacks against federated object detection learning in which a subset of malicious clients seeks to poison the federated training of a global object detection model by sharing perception-poisoned local model parameters. We first introduce three targeted perception poisoning attacks, which have severe adverse effects only on the objects under attack. We then analyze the attack feasibility, the impact of malicious client availability, and attack timing. To safeguard FL systems against such contagious threats, we introduce spatial signature analysis as a defense to separate benign local model parameters from poisoned local model contributions, identify malicious clients, and eliminate their impact on the federated training. Extensive experiments on object detection benchmark datasets validate that the defense-empowered federated object detection learning can improve the robustness against all three types of perception poisoning attacks. The source code is available at https://github.com/git-disl/Perception-Poisoning.
Persistent Identifier: http://hdl.handle.net/10722/343368
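
The abstract describes two mechanisms: malicious clients poisoning the global model by sharing perception-poisoned local parameter updates, and spatial signature analysis at the server separating benign from poisoned contributions. The paper's actual implementation is in the linked repository; the snippet below is only a minimal, hypothetical sketch of the general server-side idea, in which a "signature" is taken to be the normalized direction of each client's update vector and outliers are excluded before FedAvg-style averaging. The signature function, the z-score threshold, and all names here are illustrative assumptions, not the paper's algorithm.

import numpy as np

def signature(update):
    """Toy 'signature': the L2-normalized direction of a client's update.
    (Illustrative stand-in only; the paper's spatial signature analysis
    is its own, more involved method.)"""
    return update / (np.linalg.norm(update) + 1e-12)

def filter_and_aggregate(updates, z_thresh=1.0):
    """Flag clients whose signature sits far from the majority cluster,
    then average only the retained updates (FedAvg-style)."""
    sigs = np.stack([signature(u) for u in updates])
    centroid = sigs.mean(axis=0)
    dists = np.linalg.norm(sigs - centroid, axis=1)
    # z-score the distances; benign clients cluster near the centroid
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    keep = z < z_thresh
    kept = [u for u, k in zip(updates, keep) if k]
    return np.mean(kept, axis=0), keep

# Demo: 8 benign clients share similar updates; 2 "poisoned" ones diverge.
rng = np.random.default_rng(0)
dim = 16
benign = [rng.normal(1.0, 0.1, dim) for _ in range(8)]
poisoned = [rng.normal(-5.0, 0.1, dim) for _ in range(2)]
aggregate, keep_mask = filter_and_aggregate(benign + poisoned)
print("clients kept:", keep_mask)  # the two poisoned clients should be False

In practice such filtering must distinguish benign heterogeneity across clients from targeted poisoning that perturbs only the attacked objects; that is the harder problem the paper's spatial signature analysis addresses.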


DC Field                   Value
dc.contributor.author      Chow, Ka Ho
dc.contributor.author      Liu, Ling
dc.date.accessioned        2024-05-10T09:07:32Z
dc.date.available          2024-05-10T09:07:32Z
dc.date.issued             2021
dc.identifier.citation     Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021, 2021, p. 146-155
dc.identifier.uri          http://hdl.handle.net/10722/343368
dc.description.abstract    Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for object detection over a distributed population of clients. It allows edge clients to keep their data local and only share parameter updates with a federated server. However, the distributed nature of FL also opens doors to new threats. In this paper, we present targeted perception poisoning attacks against federated object detection learning in which a subset of malicious clients seeks to poison the federated training of a global object detection model by sharing perception-poisoned local model parameters. We first introduce three targeted perception poisoning attacks, which have severe adverse effects only on the objects under attack. We then analyze the attack feasibility, the impact of malicious client availability, and attack timing. To safeguard FL systems against such contagious threats, we introduce spatial signature analysis as a defense to separate benign local model parameters from poisoned local model contributions, identify malicious clients, and eliminate their impact on the federated training. Extensive experiments on object detection benchmark datasets validate that the defense-empowered federated object detection learning can improve the robustness against all three types of perception poisoning attacks. The source code is available at https://github.com/git-disl/Perception-Poisoning.
dc.language                eng
dc.relation.ispartof       Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021
dc.subject                 data poisoning
dc.subject                 deep neural networks
dc.subject                 federated learning
dc.subject                 object detection
dc.title                   Perception Poisoning Attacks in Federated Learning
dc.type                    Conference_Paper
dc.description.nature      link_to_subscribed_fulltext
dc.identifier.doi          10.1109/TPSISA52974.2021.00017
dc.identifier.scopus       eid_2-s2.0-85128732037
dc.identifier.spage        146
dc.identifier.epage        155
