Conference Paper: Improving Adversarial Transferability via Intermediate-level Perturbation Decay

Title: Improving Adversarial Transferability via Intermediate-level Perturbation Decay
Authors: Li, Qizhang; Guo, Yiwen; Zuo, Wangmeng; Chen, Hao
Issue Date: 2023
Citation: Advances in Neural Information Processing Systems, 2023, v. 36
Abstract: Intermediate-level attacks that attempt to perturb feature representations following an adversarial direction drastically have shown favorable performance in crafting transferable adversarial examples. Existing methods in this category are normally formulated with two separate stages, where a directional guide is required to be determined at first and the scalar projection of the intermediate-level perturbation onto the directional guide is enlarged thereafter. The obtained perturbation deviates from the guide inevitably in the feature space, and it is revealed in this paper that such a deviation may lead to sub-optimal attack. To address this issue, we develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization. In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to be in an effective adversarial direction and to possess a great magnitude simultaneously. In-depth discussion verifies the effectiveness of our method. Experimental results show that it outperforms state-of-the-arts by large margins in attacking various victim models on ImageNet (+10.07% on average) and CIFAR-10 (+3.88% on average). Our code is at https://github.com/qizhangli/ILPD-attack. (A minimal sketch of the attack idea is given after the record summary below.)
Persistent Identifier: http://hdl.handle.net/10722/347088
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
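
To make the single-stage idea described in the abstract concrete, the following is a minimal, hedged sketch of an intermediate-level perturbation-decay attack in PyTorch. It is not the authors' implementation (the official code is at the linked GitHub repository): the hook-based feature mixing, the decay factor gamma, the layer choice, and the step sizes below are all illustrative assumptions. The sketch records the clean intermediate features once, then decays the feature-space perturbation by a factor gamma before it reaches deeper layers, so that maximizing the classification loss favors perturbations that both point in an effective adversarial direction and survive at a large magnitude.

import torch
import torch.nn.functional as F

def ilpd_style_attack(model, layer, x, y, eps=8/255, alpha=2/255,
                      steps=50, gamma=0.5):
    # Hedged sketch, not the official ILPD code: `layer` is any intermediate
    # module of `model` (e.g. a mid-stage residual block), inputs are assumed
    # to lie in [0, 1], and gamma/alpha/eps/steps are illustrative values.
    state = {'clean': None, 'decay': False}

    def hook(module, inputs, output):
        if state['decay']:
            # Decay the intermediate-level perturbation: keep only a gamma
            # fraction of the feature change caused by the input perturbation.
            return gamma * output + (1.0 - gamma) * state['clean']
        state['clean'] = output.detach()  # record clean features once

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        model(x)                      # populate state['clean']
    state['decay'] = True

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Classification loss is computed through the decayed feature.
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # untargeted ascent
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # L-inf budget
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()

    handle.remove()
    return x_adv

With gamma = 1.0 the hook leaves the features unchanged and the loop reduces to plain iterative FGSM on the source model; smaller gamma values strengthen the decay, which, per the abstract, is what encourages perturbations that transfer to unseen victim models.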

 

DC Field | Value | Language
dc.contributor.author | Li, Qizhang | -
dc.contributor.author | Guo, Yiwen | -
dc.contributor.author | Zuo, Wangmeng | -
dc.contributor.author | Chen, Hao | -
dc.date.accessioned | 2024-09-17T04:15:18Z | -
dc.date.available | 2024-09-17T04:15:18Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Advances in Neural Information Processing Systems, 2023, v. 36 | -
dc.identifier.issn | 1049-5258 | -
dc.identifier.uri | http://hdl.handle.net/10722/347088 | -
dc.description.abstract | Intermediate-level attacks that attempt to perturb feature representations following an adversarial direction drastically have shown favorable performance in crafting transferable adversarial examples. Existing methods in this category are normally formulated with two separate stages, where a directional guide is required to be determined at first and the scalar projection of the intermediate-level perturbation onto the directional guide is enlarged thereafter. The obtained perturbation deviates from the guide inevitably in the feature space, and it is revealed in this paper that such a deviation may lead to sub-optimal attack. To address this issue, we develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization. In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to be in an effective adversarial direction and to possess a great magnitude simultaneously. In-depth discussion verifies the effectiveness of our method. Experimental results show that it outperforms state-of-the-arts by large margins in attacking various victim models on ImageNet (+10.07% on average) and CIFAR-10 (+3.88% on average). Our code is at https://github.com/qizhangli/ILPD-attack. | -
dc.language | eng | -
dc.relation.ispartof | Advances in Neural Information Processing Systems | -
dc.title | Improving Adversarial Transferability via Intermediate-level Perturbation Decay | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85180066163 | -
dc.identifier.volume | 36 | -
