Conference Paper: Backpropagating linearly improves transferability of adversarial examples

Title: Backpropagating linearly improves transferability of adversarial examples
Authors: Guo, Yiwen; Li, Qizhang; Chen, Hao
Issue Date: 2020
Citation: Advances in Neural Information Processing Systems, 2020, v. 2020-December
Abstract: The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which underlies many black-box attacks on DNNs. We revisit a long-standing yet noteworthy hypothesis of Goodfellow et al. and show that transferability can be enhanced by improving the linearity of DNNs in an appropriate manner. We introduce linear backpropagation (LinBP), a method that performs backpropagation in a more linear fashion and can be used with off-the-shelf gradient-based attacks. More specifically, it computes the forward pass as normal but backpropagates the loss as if some of the nonlinear activations had not been encountered in the forward pass. Experimental results demonstrate that this simple yet effective method clearly outperforms the current state of the art in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs. Code at: https://github.com/qizhangli/linbp-attack.
Persistent Identifier: http://hdl.handle.net/10722/346993
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
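
The abstract describes LinBP's core mechanism: run the forward pass as usual, but backpropagate as if selected nonlinear activations (ReLUs) were identity maps, so that the gradient used by the attack behaves more linearly. Below is a minimal PyTorch sketch of that idea only, not the authors' implementation (see the linked repository for that); the names _LinBPReLU, LinBPReLU, and linbp_fgsm, and the use of one-step FGSM, are our own illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class _LinBPReLU(torch.autograd.Function):
        # ReLU in the forward pass, identity in the backward pass:
        # gradients flow as if the nonlinearity were never applied.
        @staticmethod
        def forward(ctx, x):
            return x.clamp(min=0)   # standard ReLU forward

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output      # skip the (x > 0) mask: backpropagate linearly

    class LinBPReLU(nn.Module):
        # Drop-in replacement for nn.ReLU inside a surrogate (source) model.
        def forward(self, x):
            return _LinBPReLU.apply(x)

    def linbp_fgsm(model, x, y, epsilon):
        # One-step FGSM on the modified surrogate; the crafted example is
        # then transferred to (evaluated against) separate victim models.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

Note that the paper applies the linearization only from a chosen intermediate layer onward rather than to every activation, and any off-the-shelf gradient-based attack (e.g., multi-step PGD) can stand in for the single-step FGSM used here.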

 

Dublin Core Record

dc.contributor.author: Guo, Yiwen
dc.contributor.author: Li, Qizhang
dc.contributor.author: Chen, Hao
dc.date.accessioned: 2024-09-17T04:14:38Z
dc.date.available: 2024-09-17T04:14:38Z
dc.date.issued: 2020
dc.identifier.citation: Advances in Neural Information Processing Systems, 2020, v. 2020-December
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/346993
dc.description.abstract: The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which underlies many black-box attacks on DNNs. We revisit a long-standing yet noteworthy hypothesis of Goodfellow et al. and show that transferability can be enhanced by improving the linearity of DNNs in an appropriate manner. We introduce linear backpropagation (LinBP), a method that performs backpropagation in a more linear fashion and can be used with off-the-shelf gradient-based attacks. More specifically, it computes the forward pass as normal but backpropagates the loss as if some of the nonlinear activations had not been encountered in the forward pass. Experimental results demonstrate that this simple yet effective method clearly outperforms the current state of the art in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs. Code at: https://github.com/qizhangli/linbp-attack.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Backpropagating linearly improves transferability of adversarial examples
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85102164911
dc.identifier.volume: 2020-December
