Conference Paper: Explore the Transformation Space for Adversarial Images

Title: Explore the Transformation Space for Adversarial Images
Authors: Chen, Jiyu; Wang, David; Chen, Hao
Keywords: adversarial attacks; deep learning security; image transformation
Issue Date: 2020
Citation: CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy, 2020, p. 109-120
Abstract: Deep learning models are vulnerable to adversarial examples. Most current adversarial attacks add pixel-wise perturbations restricted to some Lp-norm, and defense models are also evaluated on adversarial examples restricted to Lp-norm balls. However, we wish to explore whether adversarial examples exist beyond Lp-norm balls and what they imply for attacks and defenses. In this paper, we focus on adversarial images generated by transformations. We start with color transformation and propose two gradient-based attacks. Since the Lp-norm is inappropriate for measuring image quality in the transformation space, we use the similarity between transformations and the Structural Similarity Index (SSIM). Next, we explore a larger transformation space consisting of combinations of color and affine transformations. We evaluate our transformation attacks on three data sets - CIFAR10, SVHN, and ImageNet - and their corresponding models. Finally, we perform retraining defenses to evaluate the strength of our attacks. The results show that transformation attacks are powerful. They find high-quality adversarial images that have higher transferability and misclassification rates than C&W's Lp attacks, especially at high confidence levels. They are also significantly harder to defend against by retraining than C&W's Lp attacks. More importantly, exploring different attack spaces makes it more challenging to train a universally robust model.
Persistent Identifier: http://hdl.handle.net/10722/346774
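The abstract notes that the Lp-norm is a poor quality measure in the transformation space, so the authors score adversarial images with the Structural Similarity Index instead. As a rough, hedged illustration of that idea (not the paper's actual attack, which optimizes transformation parameters by gradient descent), the sketch below applies a hypothetical per-channel linear color transformation and scores the result with a simplified single-window SSIM; the function names and parameters here are illustrative assumptions.

```python
import numpy as np

def color_transform(img, scale, shift):
    """Hypothetical linear color transformation: scale pixel intensities
    by `scale`, offset by `shift`, and clip back to the [0, 1] range."""
    return np.clip(scale * img + shift, 0.0, 1.0)

def global_ssim(x, y, L=1.0):
    """Simplified SSIM computed over the whole image as one window.
    Standard SSIM averages this statistic over local windows; a single
    global window is enough to illustrate the formula. L is the dynamic
    range of pixel values (1.0 for images scaled to [0, 1])."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()                 # luminance terms
    vx, vy = x.var(), y.var()                   # contrast terms
    cov = ((x - mx) * (y - my)).mean()          # structure term
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

# Example: a mild color transformation keeps SSIM close to 1,
# which is why it can look benign while still crossing a decision boundary.
rng = np.random.default_rng(0)
img = rng.random((8, 8)) * 0.8          # toy grayscale image in [0, 0.8]
adv = color_transform(img, 1.1, 0.0)    # slightly brightened variant
print(global_ssim(img, img))            # identical images score exactly 1
print(global_ssim(img, adv))            # mild transformation stays near 1
```

An identical pair scores exactly 1, and small transformations stay close to 1, matching the paper's point that transformation-space changes can preserve perceptual quality even when their pixel-wise Lp distance from the original is large.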

 

DC Field: Value
dc.contributor.author: Chen, Jiyu
dc.contributor.author: Wang, David
dc.contributor.author: Chen, Hao
dc.date.accessioned: 2024-09-17T04:13:12Z
dc.date.available: 2024-09-17T04:13:12Z
dc.date.issued: 2020
dc.identifier.citation: CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy, 2020, p. 109-120
dc.identifier.uri: http://hdl.handle.net/10722/346774
dc.description.abstract: Deep learning models are vulnerable to adversarial examples. Most current adversarial attacks add pixel-wise perturbations restricted to some Lp-norm, and defense models are also evaluated on adversarial examples restricted to Lp-norm balls. However, we wish to explore whether adversarial examples exist beyond Lp-norm balls and what they imply for attacks and defenses. In this paper, we focus on adversarial images generated by transformations. We start with color transformation and propose two gradient-based attacks. Since the Lp-norm is inappropriate for measuring image quality in the transformation space, we use the similarity between transformations and the Structural Similarity Index (SSIM). Next, we explore a larger transformation space consisting of combinations of color and affine transformations. We evaluate our transformation attacks on three data sets - CIFAR10, SVHN, and ImageNet - and their corresponding models. Finally, we perform retraining defenses to evaluate the strength of our attacks. The results show that transformation attacks are powerful. They find high-quality adversarial images that have higher transferability and misclassification rates than C&W's Lp attacks, especially at high confidence levels. They are also significantly harder to defend against by retraining than C&W's Lp attacks. More importantly, exploring different attack spaces makes it more challenging to train a universally robust model.
dc.language: eng
dc.relation.ispartof: CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy
dc.subject: adversarial attacks
dc.subject: deep learning security
dc.subject: image transformation
dc.title: Explore the Transformation Space for Adversarial Images
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3374664.3375728
dc.identifier.scopus: eid_2-s2.0-85083372033
dc.identifier.spage: 109
dc.identifier.epage: 120
