File Download
There are no files associated with this item.
Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/TCYB.2019.2914099
- Scopus: eid_2-s2.0-85094931687
- PMID: 31107676
- WOS: WOS:000583709500023
Article: ROSA: Robust Salient Object Detection Against Adversarial Attacks
Field | Value
---|---
Title | ROSA: Robust Salient Object Detection Against Adversarial Attacks
Authors | Li, H; Li, G; Yu, Y
Keywords | Adversarial attack; deep neural network; salient object detection
Issue Date | 2020
Publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036
Citation | IEEE Transactions on Cybernetics, 2020, v. 50 n. 11, p. 4835-4847
Abstract | Recently, salient object detection has witnessed remarkable improvement owing to the deep convolutional neural networks which can harvest powerful features for images. In particular, the state-of-the-art salient object detection methods enjoy high accuracy and efficiency from fully convolutional network (FCN)-based frameworks which are trained from end to end and predict pixel-wise labels. However, such framework suffers from adversarial attacks which confuse neural networks via adding quasi-imperceptible noises to input images without changing the ground truth annotated by human subjects. To our knowledge, this paper is the first one that mounts successful adversarial attacks on salient object detection models and verifies that adversarial samples are effective on a wide range of existing methods. Furthermore, this paper proposes a novel end-to-end trainable framework to enhance the robustness for arbitrary FCN-based salient object detection models against adversarial attacks. The proposed framework adopts a novel idea that first introduces some new generic noise to destroy adversarial perturbations, and then learns to predict saliency maps for input images with the introduced noise. Specifically, our proposed method consists of a segment-wise shielding component, which preserves boundaries and destroys delicate adversarial noise patterns and a context-aware restoration component, which refines saliency maps through global contrast modeling. The experimental results suggest that our proposed framework improves the performance significantly for state-of-the-art models on a series of datasets.
Persistent Identifier | http://hdl.handle.net/10722/301336
ISSN | 2168-2267
2023 Impact Factor | 9.4
2023 SCImago Journal Rankings | 5.641
ISI Accession Number ID | WOS:000583709500023
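The segment-wise shielding idea described in the abstract — destroying fine-grained adversarial noise patterns while preserving segment boundaries — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the segmentation map is assumed to be given (the record does not specify how segments are computed), and `segmentwise_shield` is a hypothetical name for one plausible reading of the component, namely shuffling pixel values inside each segment.

```python
import numpy as np

def segmentwise_shield(image, segments, seed=None):
    """Shuffle pixel values within each segment (illustrative sketch).

    Randomly permuting pixels inside a segment scrambles delicate,
    pixel-level adversarial perturbations, while segment boundaries
    (and hence coarse object contours) stay intact.
    """
    rng = np.random.default_rng(seed)
    shielded = image.copy()
    for label in np.unique(segments):
        mask = segments == label            # pixels of one segment
        values = shielded[mask]             # shape: (n_pixels, channels)
        shielded[mask] = values[rng.permutation(len(values))]
    return shielded

# Toy example: a 4x4 RGB "image" split into two vertical segments.
image = np.arange(48, dtype=np.float32).reshape(4, 4, 3)
segments = np.zeros((4, 4), dtype=np.int32)
segments[:, 2:] = 1
shielded = segmentwise_shield(image, segments, seed=0)
```

In the paper's framework, a learned context-aware restoration component would then predict a refined saliency map from such a shielded input; that second stage is a trained network and is not sketched here.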
DC Field | Value | Language |
---|---|---|
dc.contributor.author | LI, H | - |
dc.contributor.author | Li, G | - |
dc.contributor.author | Yu, Y | - |
dc.date.accessioned | 2021-07-27T08:09:35Z | - |
dc.date.available | 2021-07-27T08:09:35Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | IEEE Transactions on Cybernetics, 2020, v. 50 n. 11, p. 4835-4847 | - |
dc.identifier.issn | 2168-2267 | - |
dc.identifier.uri | http://hdl.handle.net/10722/301336 | - |
dc.description.abstract | Recently, salient object detection has witnessed remarkable improvement owing to the deep convolutional neural networks which can harvest powerful features for images. In particular, the state-of-the-art salient object detection methods enjoy high accuracy and efficiency from fully convolutional network (FCN)-based frameworks which are trained from end to end and predict pixel-wise labels. However, such framework suffers from adversarial attacks which confuse neural networks via adding quasi-imperceptible noises to input images without changing the ground truth annotated by human subjects. To our knowledge, this paper is the first one that mounts successful adversarial attacks on salient object detection models and verifies that adversarial samples are effective on a wide range of existing methods. Furthermore, this paper proposes a novel end-to-end trainable framework to enhance the robustness for arbitrary FCN-based salient object detection models against adversarial attacks. The proposed framework adopts a novel idea that first introduces some new generic noise to destroy adversarial perturbations, and then learns to predict saliency maps for input images with the introduced noise. Specifically, our proposed method consists of a segment-wise shielding component, which preserves boundaries and destroys delicate adversarial noise patterns and a context-aware restoration component, which refines saliency maps through global contrast modeling. The experimental results suggest that our proposed framework improves the performance significantly for state-of-the-art models on a series of datasets. | - |
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036 | - |
dc.relation.ispartof | IEEE Transactions on Cybernetics | - |
dc.rights | IEEE Transactions on Cybernetics. Copyright © Institute of Electrical and Electronics Engineers. | - |
dc.rights | ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | Adversarial attack | - |
dc.subject | deep neural network | - |
dc.subject | salient object detection | - |
dc.title | ROSA: Robust Salient Object Detection Against Adversarial Attacks | - |
dc.type | Article | - |
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | - |
dc.identifier.authority | Yu, Y=rp01415 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TCYB.2019.2914099 | - |
dc.identifier.pmid | 31107676 | - |
dc.identifier.scopus | eid_2-s2.0-85094931687 | - |
dc.identifier.hkuros | 323532 | - |
dc.identifier.volume | 50 | - |
dc.identifier.issue | 11 | - |
dc.identifier.spage | 4835 | - |
dc.identifier.epage | 4847 | - |
dc.identifier.isi | WOS:000583709500023 | - |
dc.publisher.place | United States | - |