Title: Heterogeneous Differential-Private Federated Learning: Trading Privacy for Utility Truthfully
Authors: Lin, Xi; Wu, Jun; Li, Jianhua; Sang, Chao; Hu, Shiyan; Deen, M. Jamal
Keywords: Federated learning; heterogeneous differential privacy; privacy-utility tradeoff; truthful incentives
Issue Date: 2023
Citation: IEEE Transactions on Dependable and Secure Computing, 2023, v. 20, n. 6, p. 5113-5129
Abstract: Differential-private federated learning (DP-FL) has emerged to prevent privacy leakage when model parameters that encode sensitive information are disclosed. However, existing DP-FL frameworks usually preserve privacy homogeneously across clients, ignoring clients' differing privacy attitudes and expectations. Moreover, DP-FL can hardly guarantee that uncontrollable clients (i.e., stragglers) have truthfully added the expected DP noise. To tackle these challenges, we propose a heterogeneous differential-private federated learning framework, named HDP-FL, which captures the variation in privacy attitudes with truthful incentives. First, we investigate the impact of HDP noise on the theoretical convergence of FL, showing a tradeoff between privacy loss and learning performance. Then, based on this privacy-utility tradeoff, we design a contract-based incentive mechanism that encourages clients to truthfully reveal their privacy attitudes and contribute to learning as desired. In particular, clients are classified into different privacy preference types, and the optimal privacy-price contracts are derived for both the discrete-privacy-type and continuous-privacy-type models. Extensive experiments on real datasets demonstrate that HDP-FL maintains satisfactory learning performance while accommodating different privacy attitudes, and also validate the truthfulness, individual rationality, and effectiveness of our incentives.
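To make the core idea of the abstract concrete: in heterogeneous DP each client perturbs its own model update with noise calibrated to its personal privacy budget, so privacy-sensitive clients (smaller epsilon) inject more noise and trade away utility. The sketch below is a rough illustration only, not the paper's actual algorithm or noise analysis; it uses the classic Gaussian-mechanism calibration from the DP literature, and all names (privatize_update, client_epsilons, the clipping constant) are hypothetical.

    import numpy as np

    def gaussian_sigma(epsilon: float, delta: float, clip_norm: float) -> float:
        """Noise std. dev. of the classic Gaussian mechanism for
        (epsilon, delta)-DP, with L2 sensitivity equal to clip_norm."""
        return np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon

    def privatize_update(update: np.ndarray, epsilon: float, delta: float,
                         clip_norm: float, rng: np.random.Generator) -> np.ndarray:
        """Clip one client's model update, then add that client's own noise."""
        norm = np.linalg.norm(update)
        clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound sensitivity
        sigma = gaussian_sigma(epsilon, delta, clip_norm)
        return clipped + rng.normal(0.0, sigma, size=update.shape)

    # One hypothetical FL round with heterogeneous privacy budgets:
    # smaller epsilon => stronger privacy => noisier contribution.
    rng = np.random.default_rng(0)
    clip_norm, delta = 1.0, 1e-5
    client_epsilons = [0.5, 2.0, 8.0]            # per-client privacy preferences
    updates = [rng.normal(size=100) for _ in client_epsilons]
    noisy = [privatize_update(u, eps, delta, clip_norm, rng)
             for u, eps in zip(updates, client_epsilons)]
    global_update = np.mean(noisy, axis=0)       # naive FedAvg aggregation

In HDP-FL itself, each client's budget would be elicited truthfully through the privacy-price contracts described in the abstract rather than hard-coded as above.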
Persistent Identifier: http://hdl.handle.net/10722/336367
ISSN: 1545-5971
2021 Impact Factor: 6.791
2020 SCImago Journal Rankings: 1.274
ISI Accession Number ID: WOS:001186179100002

DC Field / Value
dc.contributor.author: Lin, Xi
dc.contributor.author: Wu, Jun
dc.contributor.author: Li, Jianhua
dc.contributor.author: Sang, Chao
dc.contributor.author: Hu, Shiyan
dc.contributor.author: Deen, M. Jamal
dc.date.accessioned: 2024-01-15T08:26:13Z
dc.date.available: 2024-01-15T08:26:13Z
dc.date.issued: 2023
dc.identifier.citation: IEEE Transactions on Dependable and Secure Computing, 2023, v. 20, n. 6, p. 5113-5129
dc.identifier.issn: 1545-5971
dc.identifier.uri: http://hdl.handle.net/10722/336367
dc.description.abstract: Differential-private federated learning (DP-FL) has emerged to prevent privacy leakage when model parameters that encode sensitive information are disclosed. However, existing DP-FL frameworks usually preserve privacy homogeneously across clients, ignoring clients' differing privacy attitudes and expectations. Moreover, DP-FL can hardly guarantee that uncontrollable clients (i.e., stragglers) have truthfully added the expected DP noise. To tackle these challenges, we propose a heterogeneous differential-private federated learning framework, named HDP-FL, which captures the variation in privacy attitudes with truthful incentives. First, we investigate the impact of HDP noise on the theoretical convergence of FL, showing a tradeoff between privacy loss and learning performance. Then, based on this privacy-utility tradeoff, we design a contract-based incentive mechanism that encourages clients to truthfully reveal their privacy attitudes and contribute to learning as desired. In particular, clients are classified into different privacy preference types, and the optimal privacy-price contracts are derived for both the discrete-privacy-type and continuous-privacy-type models. Extensive experiments on real datasets demonstrate that HDP-FL maintains satisfactory learning performance while accommodating different privacy attitudes, and also validate the truthfulness, individual rationality, and effectiveness of our incentives.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Dependable and Secure Computing
dc.subject: Federated learning
dc.subject: heterogeneous differential privacy
dc.subject: privacy-utility tradeoff
dc.subject: truthful incentives
dc.title: Heterogeneous Differential-Private Federated Learning: Trading Privacy for Utility Truthfully
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TDSC.2023.3241057
dc.identifier.scopus: eid_2-s2.0-85148452089
dc.identifier.volume: 20
dc.identifier.issue: 6
dc.identifier.spage: 5113
dc.identifier.epage: 5129
dc.identifier.eissn: 1941-0018
dc.identifier.isi: WOS:001186179100002
