Conference Paper: Yet Another Intermediate-Level Attack

Title: Yet Another Intermediate-Level Attack
Authors: Li, Qizhang; Guo, Yiwen; Chen, Hao
Keywords: Adversarial examples; Feature maps; Transferability
Issue Date: 2020
Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020, v. 12361 LNCS, p. 241-257
Abstract: The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks. In this paper, we propose a novel method to enhance the black-box transferability of baseline adversarial examples. By establishing a linear mapping of the intermediate-level discrepancies (between a set of adversarial inputs and their benign counterparts) for predicting the evoked adversarial loss, we aim to take full advantage of the optimization procedure of multi-step baseline attacks. We conducted extensive experiments to verify the effectiveness of our method on CIFAR-100 and ImageNet. Experimental results demonstrate that it outperforms previous state-of-the-art methods considerably. Our code is at https://github.com/qizhangli/ila-plus-plus.
Persistent Identifier: http://hdl.handle.net/10722/346898
ISSN: 0302-9743
2023 SCImago Journal Rankings: 0.606
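The core idea described in the abstract, fitting a linear map from intermediate-level feature discrepancies to the adversarial loss they evoke, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); the shapes, variable names, and synthetic data below are illustrative assumptions, using a plain least-squares fit in place of a real attack loop.

```python
import numpy as np

# During a multi-step baseline attack, one can record at each step t the
# intermediate-level feature discrepancy h_t = f(x_adv_t) - f(x) and the
# adversarial loss l_t it evokes. Fitting a linear map w from h_t to l_t
# then yields a direction in feature space whose projection predicts loss.

rng = np.random.default_rng(0)
T, d = 200, 16                     # illustrative: attack steps, feature dim
H = rng.normal(size=(T, d))        # stacked discrepancies, one row per step
w_true = rng.normal(size=d)        # synthetic ground-truth mapping
l = H @ w_true + 0.01 * rng.normal(size=T)  # synthetic adversarial losses

# Least-squares fit of the linear mapping from discrepancy to loss.
w, *_ = np.linalg.lstsq(H, l, rcond=None)

# A new perturbation's discrepancy can then be steered to maximize its
# projection onto w, i.e. its predicted adversarial loss.
h_new = rng.normal(size=d)
predicted_loss = h_new @ w
```

In this toy setup the recovered `w` closely matches `w_true`; in the actual method the fitted direction guides how the baseline perturbation is enhanced in the intermediate feature space.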

 

DC Field: Value
dc.contributor.author: Li, Qizhang
dc.contributor.author: Guo, Yiwen
dc.contributor.author: Chen, Hao
dc.date.accessioned: 2024-09-17T04:14:02Z
dc.date.available: 2024-09-17T04:14:02Z
dc.date.issued: 2020
dc.identifier.citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020, v. 12361 LNCS, p. 241-257
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/346898
dc.description.abstract: The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks. In this paper, we propose a novel method to enhance the black-box transferability of baseline adversarial examples. By establishing a linear mapping of the intermediate-level discrepancies (between a set of adversarial inputs and their benign counterparts) for predicting the evoked adversarial loss, we aim to take full advantage of the optimization procedure of multi-step baseline attacks. We conducted extensive experiments to verify the effectiveness of our method on CIFAR-100 and ImageNet. Experimental results demonstrate that it outperforms previous state-of-the-art methods considerably. Our code is at https://github.com/qizhangli/ila-plus-plus.
dc.language: eng
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.subject: Adversarial examples
dc.subject: Feature maps
dc.subject: Transferability
dc.title: Yet Another Intermediate-Level Attack
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-030-58517-4_15
dc.identifier.scopus: eid_2-s2.0-85092906267
dc.identifier.volume: 12361 LNCS
dc.identifier.spage: 241
dc.identifier.epage: 257
dc.identifier.eissn: 1611-3349
