Conference Paper: Stratified Adversarial Robustness with Rejection

Title: Stratified Adversarial Robustness with Rejection
Authors: Chen, Jiefeng; Raghuram, Jayaram; Choi, Jihye; Wu, Xi; Liang, Yingyu; Jha, Somesh
Issue Date: 2023
Citation: Proceedings of Machine Learning Research, 2023, v. 202, p. 4462-4484
Abstract: Recently, there has been emerging interest in adversarially training a classifier with a rejection option (also known as a selective classifier) to boost adversarial robustness. While rejection can incur a cost in many applications, existing studies typically associate zero cost with rejecting perturbed inputs, which can result in the rejection of numerous slightly perturbed inputs that could be correctly classified. In this work, we study adversarially robust classification with rejection in the stratified rejection setting, where the rejection cost is modeled by rejection loss functions that are monotonically non-increasing in the perturbation magnitude. We theoretically analyze the stratified rejection setting and propose a novel defense method, Adversarial Training with Consistent Prediction-based Rejection (CPR), for building a robust selective classifier. Experiments on image datasets demonstrate that the proposed method significantly outperforms existing methods under strong adaptive attacks. For instance, on CIFAR-10, CPR reduces the total robust loss (for different rejection losses) by at least 7.3% under both seen and unseen attacks.
Persistent Identifier: http://hdl.handle.net/10722/341427
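As a rough illustration of the stratified rejection setting described in the abstract: the rejection cost is modeled by a loss function that is monotonically non-increasing in the perturbation magnitude, so rejecting a clean input is expensive while rejecting a heavily perturbed one is cheap. The sketch below is an assumption for illustration only (the linear loss shape, the `eps` budget, and all function names are hypothetical, not taken from the paper):

```python
def rejection_loss(perturbation_magnitude: float, eps: float = 0.03) -> float:
    """Hypothetical rejection loss: cost 1 for rejecting a clean input,
    decaying linearly to 0 once the perturbation magnitude reaches eps.
    Any function monotonically non-increasing in the magnitude qualifies."""
    return max(0.0, 1.0 - perturbation_magnitude / eps)


def total_loss(correct: bool, rejected: bool, perturbation_magnitude: float) -> float:
    """Total robust loss for one input under the stratified setting:
    a rejected input incurs the stratified rejection loss; an accepted
    input incurs the usual 0/1 classification loss."""
    if rejected:
        return rejection_loss(perturbation_magnitude)
    return 0.0 if correct else 1.0
```

Under such a loss, rejecting a slightly perturbed input that the classifier could still label correctly is penalized almost as heavily as rejecting a clean one, which is the cost the abstract says prior zero-cost formulations ignore.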

 

DC Field: Value
dc.contributor.author: Chen, Jiefeng
dc.contributor.author: Raghuram, Jayaram
dc.contributor.author: Choi, Jihye
dc.contributor.author: Wu, Xi
dc.contributor.author: Liang, Yingyu
dc.contributor.author: Jha, Somesh
dc.date.accessioned: 2024-03-13T08:42:44Z
dc.date.available: 2024-03-13T08:42:44Z
dc.date.issued: 2023
dc.identifier.citation: Proceedings of Machine Learning Research, 2023, v. 202, p. 4462-4484
dc.identifier.uri: http://hdl.handle.net/10722/341427
dc.description.abstract: Recently, there has been emerging interest in adversarially training a classifier with a rejection option (also known as a selective classifier) to boost adversarial robustness. While rejection can incur a cost in many applications, existing studies typically associate zero cost with rejecting perturbed inputs, which can result in the rejection of numerous slightly perturbed inputs that could be correctly classified. In this work, we study adversarially robust classification with rejection in the stratified rejection setting, where the rejection cost is modeled by rejection loss functions that are monotonically non-increasing in the perturbation magnitude. We theoretically analyze the stratified rejection setting and propose a novel defense method, Adversarial Training with Consistent Prediction-based Rejection (CPR), for building a robust selective classifier. Experiments on image datasets demonstrate that the proposed method significantly outperforms existing methods under strong adaptive attacks. For instance, on CIFAR-10, CPR reduces the total robust loss (for different rejection losses) by at least 7.3% under both seen and unseen attacks.
dc.language: eng
dc.relation.ispartof: Proceedings of Machine Learning Research
dc.title: Stratified Adversarial Robustness with Rejection
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85174419380
dc.identifier.volume: 202
dc.identifier.spage: 4462
dc.identifier.epage: 4484
dc.identifier.eissn: 2640-3498
