Conference Paper: Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly

Title: Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly
Authors: Li, Qizhang; Guo, Yiwen; Zuo, Wangmeng; Chen, Hao
Issue Date: 2023
Citation: Advances in Neural Information Processing Systems, 2023, v. 36
Abstract: The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention due to the security risk of applying these models in real-world applications. Based on transferability of adversarial examples, an increasing number of transfer-based methods have been developed to fool black-box DNN models whose architecture and parameters are inaccessible. Although tremendous effort has been exerted, there still lacks a standardized benchmark that could be taken advantage of to compare these methods systematically, fairly, and practically. Our investigation shows that the evaluation of some methods needs to be more reasonable and more thorough to verify their effectiveness, to avoid, for example, unfair comparison and insufficient consideration of possible substitute/victim models. Therefore, we establish a transfer-based attack benchmark (TA-Bench) which implements 30+ methods. In this paper, we evaluate and compare them comprehensively on 25 popular substitute/victim models on ImageNet. New insights about the effectiveness of these methods are gained and guidelines for future evaluations are provided. Code at: https://github.com/qizhangli/TA-Bench.
Persistent Identifier: http://hdl.handle.net/10722/347089
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
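The abstract above centers on transfer-based attacks: adversarial examples are crafted against a white-box substitute model and then used, unchanged, against a black-box victim whose architecture and parameters are inaccessible. The sketch below is a minimal, illustrative version of that pipeline, not TA-Bench's actual code (the benchmark lives at https://github.com/qizhangli/TA-Bench). The attack here is plain I-FGSM; the model pair (torchvision ResNet-50 as substitute, ViT-B/16 as victim), the L-inf budget of 8/255, and the `ifgsm` helper are all assumptions made for illustration.

```python
# Minimal, illustrative transfer-based attack sketch -- NOT the TA-Bench code.
# Assumptions: torchvision-pretrained ResNet-50 as the substitute, ViT-B/16 as
# the victim, an L-inf budget of 8/255, and random tensors standing in for
# ImageNet images (input normalization is omitted for brevity).
import torch
import torch.nn.functional as F
import torchvision.models as models

def ifgsm(substitute, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative FGSM against a white-box substitute model (L-inf budget eps)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(substitute(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                     # stay a valid image
    return x_adv

substitute = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
victim = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1).eval()

x = torch.rand(4, 3, 224, 224)            # stand-in batch; real use: ImageNet images
y = substitute(x).argmax(dim=1).detach()  # labels taken from the substitute
x_adv = ifgsm(substitute, x, y)

# Transferability: how many adversarial examples also fool the unseen victim.
with torch.no_grad():
    rate = (victim(x_adv).argmax(dim=1) != y).float().mean().item()
print(f"transfer success rate: {rate:.2%}")
```

TA-Bench generalizes exactly this loop: 30+ attack methods take the place of plain I-FGSM, evaluated across 25 substitute/victim models so that comparisons stay systematic, practical, and fair.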

DC Field: Value
dc.contributor.author: Li, Qizhang
dc.contributor.author: Guo, Yiwen
dc.contributor.author: Zuo, Wangmeng
dc.contributor.author: Chen, Hao
dc.date.accessioned: 2024-09-17T04:15:18Z
dc.date.available: 2024-09-17T04:15:18Z
dc.date.issued: 2023
dc.identifier.citation: Advances in Neural Information Processing Systems, 2023, v. 36
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/347089
dc.description.abstract: The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention due to the security risk of applying these models in real-world applications. Based on transferability of adversarial examples, an increasing number of transfer-based methods have been developed to fool black-box DNN models whose architecture and parameters are inaccessible. Although tremendous effort has been exerted, there still lacks a standardized benchmark that could be taken advantage of to compare these methods systematically, fairly, and practically. Our investigation shows that the evaluation of some methods needs to be more reasonable and more thorough to verify their effectiveness, to avoid, for example, unfair comparison and insufficient consideration of possible substitute/victim models. Therefore, we establish a transfer-based attack benchmark (TA-Bench) which implements 30+ methods. In this paper, we evaluate and compare them comprehensively on 25 popular substitute/victim models on ImageNet. New insights about the effectiveness of these methods are gained and guidelines for future evaluations are provided. Code at: https://github.com/qizhangli/TA-Bench.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85180603611
dc.identifier.volume: 36
