Conference Paper: MAKING SUBSTITUTE MODELS MORE BAYESIAN CAN ENHANCE TRANSFERABILITY OF ADVERSARIAL EXAMPLES

Title: MAKING SUBSTITUTE MODELS MORE BAYESIAN CAN ENHANCE TRANSFERABILITY OF ADVERSARIAL EXAMPLES
Authors: Li, Qizhang; Guo, Yiwen; Zuo, Wangmeng; Chen, Hao
Issue Date: 2023
Citation: 11th International Conference on Learning Representations, ICLR 2023, 2023
Abstract: The transferability of adversarial examples across deep neural networks (DNNs) is the crux of many black-box attacks. Many prior efforts have been devoted to improving transferability by increasing the diversity in the inputs of substitute models. In this paper, by contrast, we opt for diversity in the substitute models themselves and advocate attacking a Bayesian model to achieve desirable transferability. Deriving from the Bayesian formulation, we develop a principled strategy for possible finetuning, which can be combined with many off-the-shelf Gaussian posterior approximations over DNN parameters. Extensive experiments on common benchmark datasets verify the effectiveness of our method, and the results demonstrate that it outperforms the recent state of the art by large margins (roughly 19% absolute increase in average attack success rate on ImageNet); combining it with these recent methods yields further performance gains. Our code: https://github.com/qizhangli/MoreBayesian-attack.
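As a rough illustration of the idea in the abstract (a hypothetical sketch, not the authors' implementation; see their repository for the real method, which operates on DNNs), attacking a Bayesian substitute model amounts to averaging the attack gradient over weight samples drawn from a Gaussian posterior around the substitute's parameters. The toy below does this for a linear logistic model with an FGSM-style iterative attack; all names (`bayesian_transfer_attack`, `w_mean`, `w_std`) are illustrative.

```python
import numpy as np

def bayesian_transfer_attack(x, y, w_mean, w_std, eps=0.1, steps=10,
                             n_samples=8, seed=0):
    """Iterative sign-gradient attack on a toy logistic model, averaging
    the loss gradient over Gaussian weight samples (sketch of attacking
    a 'Bayesian' substitute instead of a single point estimate)."""
    rng = np.random.default_rng(seed)
    x_adv = x.astype(float).copy()
    alpha = eps / steps  # per-step size so total movement stays within eps
    for _ in range(steps):
        grad = np.zeros_like(x_adv)
        for _ in range(n_samples):
            # draw one weight sample from the Gaussian posterior approximation
            w = w_mean + w_std * rng.standard_normal(w_mean.shape)
            # gradient of the logistic loss w.r.t. the input, label y in {0, 1}
            p = 1.0 / (1.0 + np.exp(-x_adv @ w))
            grad += (p - y) * w
        # ascend the averaged gradient and project back into the eps-ball
        x_adv = x_adv + alpha * np.sign(grad / n_samples)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Averaging over sampled weights, rather than differentiating through one fixed model, is what injects the "diversity in substitute models" the abstract contrasts with input-diversity methods.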
Persistent Identifier: http://hdl.handle.net/10722/347053


DC Field | Value | Language
dc.contributor.author | Li, Qizhang | -
dc.contributor.author | Guo, Yiwen | -
dc.contributor.author | Zuo, Wangmeng | -
dc.contributor.author | Chen, Hao | -
dc.date.accessioned | 2024-09-17T04:15:00Z | -
dc.date.available | 2024-09-17T04:15:00Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | 11th International Conference on Learning Representations, ICLR 2023, 2023 | -
dc.identifier.uri | http://hdl.handle.net/10722/347053 | -
dc.description.abstract | The transferability of adversarial examples across deep neural networks (DNNs) is the crux of many black-box attacks. Many prior efforts have been devoted to improving the transferability via increasing the diversity in inputs of some substitute models. In this paper, by contrast, we opt for the diversity in substitute models and advocate to attack a Bayesian model for achieving desirable transferability. Deriving from the Bayesian formulation, we develop a principled strategy for possible finetuning, which can be combined with many off-the-shelf Gaussian posterior approximations over DNN parameters. Extensive experiments have been conducted to verify the effectiveness of our method, on common benchmark datasets, and the results demonstrate that our method outperforms recent state-of-the-arts by large margins (roughly 19% absolute increase in average attack success rate on ImageNet), and, by combining with these recent methods, further performance gain can be obtained. Our code: https://github.com/qizhangli/MoreBayesian-attack. | -
dc.language | eng | -
dc.relation.ispartof | 11th International Conference on Learning Representations, ICLR 2023 | -
dc.title | MAKING SUBSTITUTE MODELS MORE BAYESIAN CAN ENHANCE TRANSFERABILITY OF ADVERSARIAL EXAMPLES | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85163347277 | -
