Article: Protect privacy of deep classification networks by exploiting their generative power

Title: Protect privacy of deep classification networks by exploiting their generative power
Authors: Chen, Jiyu; Guo, Yiwen; Zheng, Qianjun; Chen, Hao
Keywords: Data privacy; Deep neural networks; Generative modeling; Membership inference attack
Issue Date: 2021
Citation: Machine Learning, 2021, v. 110, n. 4, p. 651-674
Abstract: Research has shown that deep learning models are vulnerable to membership inference attacks, which aim to determine whether an example is in the model's training set. We propose a new framework to defend against this sort of attack. Our key insight is that if we retrain the original classifier with a new dataset that is independent of the original training set but whose elements are sampled from the same distribution, the retrained classifier will leak no information about the original training set beyond what can be inferred from the distribution itself. Our framework consists of three phases. First, we transfer the original classifier to a Joint Energy-based Model (JEM) to exploit the model's implicit generative power. Then, we sample from the JEM to create a new dataset. Finally, we use the new dataset to retrain or fine-tune the original classifier. We empirically studied different transfer learning schemes for the JEM and fine-tuning/retraining strategies for the classifier against shadow-model attacks. Our evaluation shows that our framework can suppress the attacker's membership advantage to a negligible level while keeping the classifier's accuracy acceptable. We compared it with other state-of-the-art defenses under adaptive attackers and showed that our defense remains effective even in the worst-case scenario. We also found that combining other defenses with our framework often achieves better robustness. Our code will be made available at https://github.com/ChenJiyu/meminf-defense.git.
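
As a rough illustration of the pipeline described in the abstract (phases 2 and 3 only; the authors' actual implementation is in the repository linked above), the PyTorch sketch below reinterprets a trained classifier as a Joint Energy-based Model, draws samples from it with Stochastic Gradient Langevin Dynamics (SGLD), and fine-tunes the classifier on the generated data. All function names, hyperparameters, and the choice to label samples with the classifier's own predictions are assumptions of this sketch, not details taken from the paper; phase 1 (transfer-learning the classifier into a JEM so that sampling behaves well) is omitted.

```python
import torch
import torch.nn.functional as F
from torch.optim import Adam
from torch.utils.data import DataLoader, TensorDataset

def jem_energy(classifier, x):
    # JEM view: the classifier's logits f(x) define an unnormalized density
    # p(x) proportional to exp(logsumexp_y f(x)[y]), i.e. E(x) = -logsumexp_y f(x)[y].
    return -torch.logsumexp(classifier(x), dim=1)

def sgld_sample(classifier, shape, steps=40, step_size=1.0, noise=0.01, device="cpu"):
    # Phase 2 (illustrative): approximate samples from the implicit density via
    # SGLD, starting from uniform noise in [-1, 1]. Hyperparameters are placeholders.
    x = torch.rand(shape, device=device) * 2 - 1
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(jem_energy(classifier, x).sum(), x)[0]
        x = x.detach() - step_size * grad + noise * torch.randn_like(x)
        x = x.clamp(-1, 1)
    return x.detach()

def build_surrogate_dataset(classifier, n_batches, batch_shape, device="cpu"):
    # Label the generated examples with the classifier's own predictions
    # (an assumption of this sketch; class-conditional sampling is another option).
    classifier.eval()
    xs, ys = [], []
    for _ in range(n_batches):
        x = sgld_sample(classifier, batch_shape, device=device)
        with torch.no_grad():
            ys.append(classifier(x).argmax(dim=1).cpu())
        xs.append(x.cpu())
    return torch.cat(xs), torch.cat(ys)

def finetune_on_samples(classifier, xs, ys, epochs=5, lr=1e-4, batch_size=128, device="cpu"):
    # Phase 3: fine-tune (or retrain) on the sampled data only, so the updated
    # weights no longer depend directly on the original training examples.
    loader = DataLoader(TensorDataset(xs, ys), batch_size=batch_size, shuffle=True)
    opt = Adam(classifier.parameters(), lr=lr)
    classifier.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = F.cross_entropy(classifier(x), y)
            loss.backward()
            opt.step()
    return classifier
```

The intent, per the abstract, is that a classifier fine-tuned on data drawn from its own learned distribution should leak little about any individual training example beyond what the distribution itself reveals, which the paper measures as a drop in the shadow-model attacker's membership advantage.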
Persistent Identifier: http://hdl.handle.net/10722/346999
ISSN: 0885-6125
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 1.720

DC Field: Value
dc.contributor.author: Chen, Jiyu
dc.contributor.author: Guo, Yiwen
dc.contributor.author: Zheng, Qianjun
dc.contributor.author: Chen, Hao
dc.date.accessioned: 2024-09-17T04:14:40Z
dc.date.available: 2024-09-17T04:14:40Z
dc.date.issued: 2021
dc.identifier.citation: Machine Learning, 2021, v. 110, n. 4, p. 651-674
dc.identifier.issn: 0885-6125
dc.identifier.uri: http://hdl.handle.net/10722/346999
dc.description.abstract: Research showed that deep learning models are vulnerable to membership inference attacks, which aim to determine if an example is in the training set of the model. We propose a new framework to defend against this sort of attack. Our key insight is that if we retrain the original classifier with a new dataset that is independent of the original training set while their elements are sampled from the same distribution, the retrained classifier will leak no information that cannot be inferred from the distribution about the original training set. Our framework consists of three phases. First, we transferred the original classifier to a Joint Energy-based Model (JEM) to exploit the model’s implicit generative power. Then, we sampled from the JEM to create a new dataset. Finally, we used the new dataset to retrain or fine-tune the original classifier. We empirically studied different transfer learning schemes for the JEM and fine-tuning/retraining strategies for the classifier against shadow-model attacks. Our evaluation shows that our framework can suppress the attacker’s membership advantage to a negligible level while keeping the classifier’s accuracy acceptable. We compared it with other state-of-the-art defenses considering adaptive attackers and showed our defense is effective even under the worst-case scenario. Besides, we also found that combining other defenses with our framework often achieves better robustness. Our code will be made available at https://github.com/ChenJiyu/meminf-defense.git.
dc.language: eng
dc.relation.ispartof: Machine Learning
dc.subject: Data privacy
dc.subject: Deep neural networks
dc.subject: Generative modeling
dc.subject: Membership inference attack
dc.title: Protect privacy of deep classification networks by exploiting their generative power
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/s10994-021-05951-6
dc.identifier.scopus: eid_2-s2.0-85104495258
dc.identifier.volume: 110
dc.identifier.issue: 4
dc.identifier.spage: 651
dc.identifier.epage: 674
dc.identifier.eissn: 1573-0565

Export via OAI-PMH Interface in XML Formats


OR


Export to Other Non-XML Formats