Links for fulltext (may require subscription):
- Publisher Website (DOI): https://doi.org/10.1007/978-3-031-19818-2_18
- Scopus: eid_2-s2.0-85142768149
- WOS: WOS:000903735000018
Conference Paper: SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
Title | SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness |
---|---|
Authors | Gu, JD; Zhao, HS; Tresp, V; Torr, PHS |
Keywords | Adversarial robustness; Semantic segmentation |
Issue Date | 23-Oct-2022 |
Publisher | Springer |
Abstract | Deep neural network-based image classifiers are vulnerable to adversarial perturbations: they can be easily fooled by adding small, imperceptible artificial perturbations to input images. Adversarial training, one of the most effective defense strategies, was proposed to address this vulnerability by creating adversarial examples and injecting them into the training data during training. Attacks on and defenses of classification models have been studied intensively in recent years. Semantic segmentation, as an extension of classification, has also received great attention recently. Recent work shows that a large number of attack iterations is required to create adversarial examples effective enough to fool segmentation models. This observation makes both robustness evaluation and adversarial training of segmentation models challenging. In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD. In addition, we provide a convergence analysis showing that SegPGD creates more effective adversarial examples than PGD under the same number of attack iterations. Furthermore, we propose to apply SegPGD as the underlying attack method for segmentation adversarial training. Since SegPGD creates more effective adversarial examples, adversarial training with SegPGD boosts the robustness of segmentation models. Our proposals are verified with experiments on popular segmentation model architectures and standard segmentation datasets. |
Persistent Identifier | http://hdl.handle.net/10722/333856 |
ISSN | 0302-9743 (2023 SCImago Journal Rankings: 0.606) |
ISI Accession Number ID | WOS:000903735000018 |
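The abstract above describes a PGD-style attack extended to dense per-pixel predictions. As a rough illustration of that baseline only, the PyTorch sketch below runs a plain PGD loop against a segmentation model; it is not the authors' SegPGD loss, and the model/label shapes, epsilon, step size, and iteration count are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def pgd_segmentation_attack(model, images, labels, eps=8/255, alpha=2/255, num_iters=20):
    """Plain PGD attack on a segmentation model (illustrative sketch, not SegPGD).

    Assumes `model` maps images (N, C, H, W) to per-pixel logits (N, K, H, W)
    and `labels` holds per-pixel class indices (N, H, W).
    """
    adv = images.clone().detach()
    # Random start inside the L-infinity ball of radius eps.
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)
    adv = torch.clamp(adv, 0.0, 1.0)

    for _ in range(num_iters):
        adv.requires_grad_(True)
        logits = model(adv)                     # (N, K, H, W)
        loss = F.cross_entropy(logits, labels)  # averaged over all pixels
        grad = torch.autograd.grad(loss, adv)[0]

        # Ascend the loss, then project back into the eps-ball around the clean images.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - eps), images + eps)
        adv = torch.clamp(adv, 0.0, 1.0)

    return adv.detach()
```

Roughly speaking, the SegPGD method described in the paper keeps the sign-gradient and projection steps of PGD but re-weights the loss contributions of correctly and wrongly classified pixels over the attack iterations; the exact weighting schedule is given in the paper itself.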
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gu, JD | - |
dc.contributor.author | Zhao, HS | - |
dc.contributor.author | Tresp, V | - |
dc.contributor.author | Torr, PHS | - |
dc.date.accessioned | 2023-10-06T08:39:38Z | - |
dc.date.available | 2023-10-06T08:39:38Z | - |
dc.date.issued | 2022-10-23 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10722/333856 | - |
dc.description.abstract | Deep neural network-based image classifiers are vulnerable to adversarial perturbations: they can be easily fooled by adding small, imperceptible artificial perturbations to input images. Adversarial training, one of the most effective defense strategies, was proposed to address this vulnerability by creating adversarial examples and injecting them into the training data during training. Attacks on and defenses of classification models have been studied intensively in recent years. Semantic segmentation, as an extension of classification, has also received great attention recently. Recent work shows that a large number of attack iterations is required to create adversarial examples effective enough to fool segmentation models. This observation makes both robustness evaluation and adversarial training of segmentation models challenging. In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD. In addition, we provide a convergence analysis showing that SegPGD creates more effective adversarial examples than PGD under the same number of attack iterations. Furthermore, we propose to apply SegPGD as the underlying attack method for segmentation adversarial training. Since SegPGD creates more effective adversarial examples, adversarial training with SegPGD boosts the robustness of segmentation models. Our proposals are verified with experiments on popular segmentation model architectures and standard segmentation datasets. | -
dc.language | eng | - |
dc.publisher | Springer | - |
dc.relation.ispartof | 17th European Conference on Computer Vision (ECCV) (23/10/2022, Tel Aviv) | - |
dc.subject | Adversarial robustness | - |
dc.subject | Semantic segmentation | - |
dc.title | SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness | - |
dc.type | Conference_Paper | - |
dc.identifier.doi | 10.1007/978-3-031-19818-2_18 | - |
dc.identifier.scopus | eid_2-s2.0-85142768149 | - |
dc.identifier.volume | 13689 | - |
dc.identifier.spage | 308 | - |
dc.identifier.epage | 325 | - |
dc.identifier.eissn | 1611-3349 | - |
dc.identifier.isi | WOS:000903735000018 | - |
dc.publisher.place | CHAM | - |
dc.identifier.issnl | 0302-9743 | - |