File Download
There are no files associated with this item.
Links for fulltext (may require subscription)
- Publisher website: 10.1007/s10994-021-05951-6
- Scopus: eid_2-s2.0-85104495258
Citations:
- Scopus: 0
Article: Protect privacy of deep classification networks by exploiting their generative power
Title | Protect privacy of deep classification networks by exploiting their generative power |
---|---|
Authors | Chen, Jiyu; Guo, Yiwen; Zheng, Qianjun; Chen, Hao |
Keywords | Data privacy; Deep neural networks; Generative modeling; Membership inference attack |
Issue Date | 2021 |
Citation | Machine Learning, 2021, v. 110, n. 4, p. 651-674 |
Abstract | Research has shown that deep learning models are vulnerable to membership inference attacks, which aim to determine whether an example is in the model’s training set. We propose a new framework to defend against such attacks. Our key insight is that if we retrain the original classifier with a new dataset that is independent of the original training set but whose elements are sampled from the same distribution, the retrained classifier will leak no information about the original training set beyond what can be inferred from the distribution itself. Our framework consists of three phases. First, we transfer the original classifier to a Joint Energy-based Model (JEM) to exploit the model’s implicit generative power. Then, we sample from the JEM to create a new dataset. Finally, we use the new dataset to retrain or fine-tune the original classifier. We empirically studied different transfer learning schemes for the JEM and fine-tuning/retraining strategies for the classifier against shadow-model attacks. Our evaluation shows that the framework can suppress the attacker’s membership advantage to a negligible level while keeping the classifier’s accuracy acceptable. We compared it with other state-of-the-art defenses under adaptive attackers and showed that our defense is effective even in the worst-case scenario. We also found that combining other defenses with our framework often yields better robustness. Our code will be made available at https://github.com/ChenJiyu/meminf-defense.git. |
Persistent Identifier | http://hdl.handle.net/10722/346999 |
ISSN | 0885-6125 (eISSN 1573-0565); 2023 Impact Factor: 4.3; 2023 SCImago Journal Rankings: 1.720 |
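
The abstract describes a three-phase defense: convert the trained classifier into a Joint Energy-based Model (JEM), sample a surrogate dataset from it, and retrain or fine-tune the classifier on those samples. The authors' own code is at the GitHub link above; what follows is only a minimal PyTorch-style sketch of phases 2 and 3, assuming the standard JEM construction in which the classifier's logits define an energy E(x) = -logsumexp_y f(x)[y] and sampling uses stochastic gradient Langevin dynamics (SGLD). The helper names (`jem_energy`, `sgld_sample`, `retrain_on_samples`), the hyperparameters, and the choice to label generated samples with the model's own predictions are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of phases 2 and 3 of the defense described above.
# Assumes a trained PyTorch classifier `clf` mapping inputs to logits;
# phase 1 (transferring/fine-tuning the classifier into a JEM) is omitted.
import torch
import torch.nn.functional as F


def jem_energy(clf, x):
    # JEM reads an energy off the classifier's logits: E(x) = -logsumexp_y f(x)[y].
    return -torch.logsumexp(clf(x), dim=1)


def sgld_sample(clf, shape, steps=40, step_size=1.0, noise_std=0.01, device="cpu"):
    # Phase 2: draw surrogate examples from the classifier's implicit
    # generative model via stochastic gradient Langevin dynamics,
    # starting from uniform noise in [0, 1).
    x = torch.rand(shape, device=device, requires_grad=True)
    for _ in range(steps):
        energy = jem_energy(clf, x).sum()
        (grad,) = torch.autograd.grad(energy, x)
        # Step down the energy (toward high-density regions) plus Gaussian noise.
        x = (x - step_size * grad + noise_std * torch.randn_like(x)).detach()
        x.requires_grad_(True)
    return x.detach()


def retrain_on_samples(clf, samples, epochs=5, lr=1e-3):
    # Phase 3: fine-tune the classifier on the generated data. Here the
    # samples are labelled with the model's own predictions (an assumption).
    with torch.no_grad():
        labels = clf(samples).argmax(dim=1)
    opt = torch.optim.SGD(clf.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(clf(samples), labels)
        loss.backward()
        opt.step()
    return clf


# Example usage (hypothetical), e.g. for a CIFAR-10-sized classifier:
#   surrogate = sgld_sample(clf, (256, 3, 32, 32))
#   clf = retrain_on_samples(clf, surrogate)
```

Class-conditional sampling and the different JEM transfer-learning and fine-tuning/retraining schedules the paper evaluates would slot into the same pipeline in place of the illustrative choices above.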
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Jiyu | - |
dc.contributor.author | Guo, Yiwen | - |
dc.contributor.author | Zheng, Qianjun | - |
dc.contributor.author | Chen, Hao | - |
dc.date.accessioned | 2024-09-17T04:14:40Z | - |
dc.date.available | 2024-09-17T04:14:40Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Machine Learning, 2021, v. 110, n. 4, p. 651-674 | - |
dc.identifier.issn | 0885-6125 | - |
dc.identifier.uri | http://hdl.handle.net/10722/346999 | - |
dc.description.abstract | Research has shown that deep learning models are vulnerable to membership inference attacks, which aim to determine whether an example is in the model’s training set. We propose a new framework to defend against such attacks. Our key insight is that if we retrain the original classifier with a new dataset that is independent of the original training set but whose elements are sampled from the same distribution, the retrained classifier will leak no information about the original training set beyond what can be inferred from the distribution itself. Our framework consists of three phases. First, we transfer the original classifier to a Joint Energy-based Model (JEM) to exploit the model’s implicit generative power. Then, we sample from the JEM to create a new dataset. Finally, we use the new dataset to retrain or fine-tune the original classifier. We empirically studied different transfer learning schemes for the JEM and fine-tuning/retraining strategies for the classifier against shadow-model attacks. Our evaluation shows that the framework can suppress the attacker’s membership advantage to a negligible level while keeping the classifier’s accuracy acceptable. We compared it with other state-of-the-art defenses under adaptive attackers and showed that our defense is effective even in the worst-case scenario. We also found that combining other defenses with our framework often yields better robustness. Our code will be made available at https://github.com/ChenJiyu/meminf-defense.git. | - |
dc.language | eng | - |
dc.relation.ispartof | Machine Learning | - |
dc.subject | Data privacy | - |
dc.subject | Deep neural networks | - |
dc.subject | Generative modeling | - |
dc.subject | Membership inference attack | - |
dc.title | Protect privacy of deep classification networks by exploiting their generative power | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/s10994-021-05951-6 | - |
dc.identifier.scopus | eid_2-s2.0-85104495258 | - |
dc.identifier.volume | 110 | - |
dc.identifier.issue | 4 | - |
dc.identifier.spage | 651 | - |
dc.identifier.epage | 674 | - |
dc.identifier.eissn | 1573-0565 | - |