Links for fulltext (may require subscription):
- Publisher Website: 10.1109/ICDM58522.2023.00182
- Scopus: eid_2-s2.0-85177752010
Citations:
- Scopus: 0
Conference Paper: Model Cloaking against Gradient Leakage
Title | Model Cloaking against Gradient Leakage |
---|---|
Authors | Wei, Wenqi; Chow, Ka Ho; Ilhan, Fatih; Wu, Yanzhao; Liu, Ling |
Keywords | Federated learning; gradient leakage; privacy analysis |
Issue Date | 2023 |
Citation | Proceedings - IEEE International Conference on Data Mining, ICDM, 2023, p. 1403-1408 |
Abstract | Gradient leakage attacks are among the dominant privacy threats in federated learning, despite the default privacy guarantee that training data remains local to the clients. Differential privacy has been the de facto standard for privacy protection and is deployed in federated learning to mitigate privacy risks. However, much of the existing literature points out that differential privacy alone fails to defend against gradient leakage. This paper presents ModelCloak, a principled approach based on differential privacy noise that aims to make the sharing of client-local model updates safe. The paper is organized into three major components. First, we introduce the gradient leakage robustness trade-off, in search of the best balance between accuracy and leakage prevention. The trade-off relation is developed based on the behavior of gradient leakage attacks throughout the federated training process. Second, we demonstrate that, under a fixed differential privacy noise setting, a proper amount of noise can offer the best accuracy performance within the privacy requirement. Third, we propose dynamic differential privacy noise and show that the privacy-utility trade-off can be further optimized with dynamic model perturbation, ensuring privacy protection, competitive accuracy, and leakage attack prevention simultaneously. |
Persistent Identifier | http://hdl.handle.net/10722/343442 |
ISSN | 1550-4786 (2020 SCImago Journal Rankings: 0.545) |
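The abstract's core idea, perturbing client model updates with differential privacy noise before sharing, and further varying that noise over training rounds, can be sketched as follows. This is a minimal illustration of the general technique (L2 clipping plus the Gaussian mechanism), not the paper's actual ModelCloak algorithm: the function names `cloak_update` and `dynamic_sigma`, and the parameters `clip_norm`, `sigma0`, and `decay`, are assumptions made for the example.

```python
import numpy as np

def cloak_update(update, clip_norm=1.0, sigma=0.5, rng=None):
    # Clip the client's update to a bounded L2 norm, then add Gaussian noise
    # (the standard Gaussian mechanism; sigma acts as the noise multiplier).
    rng = rng if rng is not None else np.random.default_rng()
    flat = np.concatenate([p.ravel() for p in update])
    scale = min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))
    return [p * scale + rng.normal(0.0, sigma * clip_norm, size=p.shape)
            for p in update]

def dynamic_sigma(round_idx, sigma0=1.0, decay=0.05):
    # Hypothetical dynamic schedule: heavier noise in early rounds, where
    # gradient leakage attacks tend to be most effective, decaying as
    # training proceeds so that late-round accuracy is less affected.
    return sigma0 / (1.0 + decay * round_idx)
```

In a federated round, each client would call `cloak_update(local_update, sigma=dynamic_sigma(t))` before uploading, so the server only ever aggregates perturbed updates; the actual noise schedule and privacy accounting in the paper may differ.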
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wei, Wenqi | - |
dc.contributor.author | Chow, Ka Ho | - |
dc.contributor.author | Ilhan, Fatih | - |
dc.contributor.author | Wu, Yanzhao | - |
dc.contributor.author | Liu, Ling | - |
dc.date.accessioned | 2024-05-10T09:08:10Z | - |
dc.date.available | 2024-05-10T09:08:10Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Proceedings - IEEE International Conference on Data Mining, ICDM, 2023, p. 1403-1408 | - |
dc.identifier.issn | 1550-4786 | - |
dc.identifier.uri | http://hdl.handle.net/10722/343442 | - |
dc.description.abstract | Gradient leakage attacks are among the dominant privacy threats in federated learning, despite the default privacy guarantee that training data remains local to the clients. Differential privacy has been the de facto standard for privacy protection and is deployed in federated learning to mitigate privacy risks. However, much of the existing literature points out that differential privacy alone fails to defend against gradient leakage. This paper presents ModelCloak, a principled approach based on differential privacy noise that aims to make the sharing of client-local model updates safe. The paper is organized into three major components. First, we introduce the gradient leakage robustness trade-off, in search of the best balance between accuracy and leakage prevention. The trade-off relation is developed based on the behavior of gradient leakage attacks throughout the federated training process. Second, we demonstrate that, under a fixed differential privacy noise setting, a proper amount of noise can offer the best accuracy performance within the privacy requirement. Third, we propose dynamic differential privacy noise and show that the privacy-utility trade-off can be further optimized with dynamic model perturbation, ensuring privacy protection, competitive accuracy, and leakage attack prevention simultaneously. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - IEEE International Conference on Data Mining, ICDM | - |
dc.subject | Federated learning | - |
dc.subject | gradient leakage | - |
dc.subject | privacy analysis | - |
dc.title | Model Cloaking against Gradient Leakage | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICDM58522.2023.00182 | - |
dc.identifier.scopus | eid_2-s2.0-85177752010 | - |
dc.identifier.spage | 1403 | - |
dc.identifier.epage | 1408 | - |