Article: An Intermediate-Level Attack Framework on the Basis of Linear Regression

Title: An Intermediate-Level Attack Framework on the Basis of Linear Regression
Authors: Guo, Yiwen; Li, Qizhang; Zuo, Wangmeng; Chen, Hao
Keywords: adversarial examples; adversarial transferability; Deep neural networks; generalization ability; robustness
Issue Date: 2023
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 3, p. 2726-2735
Abstract: This article substantially extends our work published at ECCV (Li et al., 2020), in which an intermediate-level attack was proposed to improve the transferability of baseline adversarial examples. Specifically, we advocate a framework that establishes a direct linear mapping from the intermediate-level discrepancies (between adversarial features and benign features) to the prediction loss of the adversarial example. By delving into the core components of this framework, we show that a variety of linear regression models can be used to establish the mapping, that the magnitude of the final intermediate-level adversarial discrepancy correlates with transferability, and that performance can be further boosted by performing multiple runs of the baseline attack with random initialization. By leveraging these findings, we achieve new state-of-the-art results on transfer-based ℓ∞ and ℓ2 attacks. Our code is publicly available at https://github.com/qizhangli/ila-plus-plus-lr.
Persistent Identifier: http://hdl.handle.net/10722/346924
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158
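
The abstract describes a linear mapping from intermediate-level feature discrepancies to the prediction loss. The following is a minimal, NumPy-only sketch of that regression step; it is not the authors' implementation (see the linked repository), and the function names, the ordinary-least-squares choice, and the array shapes are illustrative assumptions.

```python
# Minimal sketch of the regression step described in the abstract; NOT the
# authors' implementation (see https://github.com/qizhangli/ila-plus-plus-lr).
# Assumed setup: a baseline attack has already been run, and at a chosen
# intermediate layer we recorded, per step t, the flattened feature
# discrepancy h(x_adv_t) - h(x) and the corresponding prediction loss.
import numpy as np

def fit_directional_guide(discrepancies: np.ndarray, losses: np.ndarray) -> np.ndarray:
    """Fit an ordinary-least-squares mapping w such that discrepancies @ w ~ losses.

    discrepancies: (T, D) array, one flattened intermediate-level discrepancy per step.
    losses:        (T,)  array of the adversarial (prediction) losses at those steps.
    Returns the (D,) weight vector, used as a directional guide in feature space.
    """
    X = np.asarray(discrepancies, dtype=np.float64)
    y = np.asarray(losses, dtype=np.float64)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS solution of X w ~ y
    return w

def directional_objective(current_discrepancy: np.ndarray, w: np.ndarray) -> float:
    """Objective to maximize in the follow-up intermediate-level attack stage:
    the projection of the current feature discrepancy onto the fitted guide."""
    return float(np.dot(current_discrepancy, w))

# Toy usage with random stand-in data (real use would record these from a model).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D = 10, 512                      # e.g. 10 baseline steps, 512-dim features
    deltas = rng.normal(size=(T, D))    # stand-in feature discrepancies
    losses = deltas @ rng.normal(size=D) + 0.01 * rng.normal(size=T)
    w = fit_directional_guide(deltas, losses)
    print(directional_objective(deltas[-1], w))
```

In the full method, the fitted direction presumably guides a second attack stage that maximizes this projection under the chosen ℓ∞ or ℓ2 constraint; the abstract also notes that other regression models can take the place of OLS here.
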

 

DC Field | Value | Language
dc.contributor.author | Guo, Yiwen | -
dc.contributor.author | Li, Qizhang | -
dc.contributor.author | Zuo, Wangmeng | -
dc.contributor.author | Chen, Hao | -
dc.date.accessioned | 2024-09-17T04:14:13Z | -
dc.date.available | 2024-09-17T04:14:13Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 3, p. 2726-2735 | -
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://hdl.handle.net/10722/346924 | -
dc.description.abstract | This article substantially extends our work published at ECCV (Li et al., 2020), in which an intermediate-level attack was proposed to improve the transferability of baseline adversarial examples. Specifically, we advocate a framework that establishes a direct linear mapping from the intermediate-level discrepancies (between adversarial features and benign features) to the prediction loss of the adversarial example. By delving into the core components of this framework, we show that a variety of linear regression models can be used to establish the mapping, that the magnitude of the final intermediate-level adversarial discrepancy correlates with transferability, and that performance can be further boosted by performing multiple runs of the baseline attack with random initialization. By leveraging these findings, we achieve new state-of-the-art results on transfer-based ℓ∞ and ℓ2 attacks. Our code is publicly available at https://github.com/qizhangli/ila-plus-plus-lr. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | -
dc.subject | adversarial examples | -
dc.subject | adversarial transferability | -
dc.subject | Deep neural networks | -
dc.subject | generalization ability | -
dc.subject | robustness | -
dc.title | An Intermediate-Level Attack Framework on the Basis of Linear Regression | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TPAMI.2022.3188044 | -
dc.identifier.pmid | 35786551 | -
dc.identifier.scopus | eid_2-s2.0-85134231622 | -
dc.identifier.volume | 45 | -
dc.identifier.issue | 3 | -
dc.identifier.spage | 2726 | -
dc.identifier.epage | 2735 | -
dc.identifier.eissn | 1939-3539 | -
