Article: ROSA: Robust Salient Object Detection Against Adversarial Attacks

Title: ROSA: Robust Salient Object Detection Against Adversarial Attacks
Authors: Li, H; Li, G; Yu, Y
Keywords: Adversarial attack; deep neural network; salient object detection
Issue Date: 2020
Publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036
Citation: IEEE Transactions on Cybernetics, 2020, v. 50, n. 11, p. 4835-4847
Abstract: Recently, salient object detection has witnessed remarkable improvement owing to deep convolutional neural networks, which can harvest powerful features from images. In particular, state-of-the-art salient object detection methods enjoy high accuracy and efficiency from fully convolutional network (FCN)-based frameworks, which are trained end to end and predict pixel-wise labels. However, such frameworks suffer from adversarial attacks, which confuse neural networks by adding quasi-imperceptible noise to input images without changing the ground truth annotated by human subjects. To our knowledge, this paper is the first to mount successful adversarial attacks on salient object detection models and to verify that adversarial samples are effective against a wide range of existing methods. Furthermore, this paper proposes a novel end-to-end trainable framework that enhances the robustness of arbitrary FCN-based salient object detection models against adversarial attacks. The proposed framework adopts a novel idea: it first introduces generic noise to destroy adversarial perturbations and then learns to predict saliency maps for input images with the introduced noise. Specifically, the proposed method consists of a segment-wise shielding component, which preserves boundaries and destroys delicate adversarial noise patterns, and a context-aware restoration component, which refines saliency maps through global contrast modeling. The experimental results suggest that the proposed framework significantly improves the performance of state-of-the-art models on a series of datasets.
Persistent Identifier: http://hdl.handle.net/10722/301336
ISSN: 2168-2267
2023 Impact Factor: 9.4
2023 SCImago Journal Rankings: 5.641
ISI Accession Number: WOS:000583709500023
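The segment-wise shielding idea described in the abstract (randomly shuffling pixels within each segment, which destroys fine-grained adversarial noise patterns while leaving segment boundaries intact) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `segment_shield` is invented here, and square blocks stand in for the superpixel segmentation the paper relies on.

```python
import numpy as np

def segment_shield(image, seg_size=8, rng=None):
    """Shuffle pixel values within each segment of an image.

    Illustrative sketch of segment-wise shielding: the paper shuffles
    pixels inside superpixels; fixed square blocks are a simplifying
    assumption here. Works for 2-D grayscale or (H, W, C) color arrays.
    """
    rng = np.random.default_rng(rng)
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h, seg_size):
        for x in range(0, w, seg_size):
            block = out[y:y + seg_size, x:x + seg_size]
            # Flatten the block's pixels, permute them, and write back.
            flat = block.reshape(-1, *block.shape[2:])
            block[...] = flat[rng.permutation(len(flat))].reshape(block.shape)
    return out

# A toy "attacked" image: a smooth gradient plus small sign noise,
# standing in for a quasi-imperceptible adversarial perturbation.
clean = np.linspace(0.0, 1.0, 64).reshape(8, 8)
noise = 0.01 * np.sign(np.random.default_rng(0).standard_normal((8, 8)))
shielded = segment_shield(clean + noise, seg_size=4, rng=0)
```

Because shuffling only permutes values inside each segment, every segment keeps exactly its original pixel intensities; only their spatial arrangement, and hence any delicate noise pattern, is destroyed.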

 

DC Field: Value
dc.contributor.author: Li, H
dc.contributor.author: Li, G
dc.contributor.author: Yu, Y
dc.date.accessioned: 2021-07-27T08:09:35Z
dc.date.available: 2021-07-27T08:09:35Z
dc.date.issued: 2020
dc.identifier.citation: IEEE Transactions on Cybernetics, 2020, v. 50, n. 11, p. 4835-4847
dc.identifier.issn: 2168-2267
dc.identifier.uri: http://hdl.handle.net/10722/301336
dc.description.abstract: Recently, salient object detection has witnessed remarkable improvement owing to deep convolutional neural networks, which can harvest powerful features from images. In particular, state-of-the-art salient object detection methods enjoy high accuracy and efficiency from fully convolutional network (FCN)-based frameworks, which are trained end to end and predict pixel-wise labels. However, such frameworks suffer from adversarial attacks, which confuse neural networks by adding quasi-imperceptible noise to input images without changing the ground truth annotated by human subjects. To our knowledge, this paper is the first to mount successful adversarial attacks on salient object detection models and to verify that adversarial samples are effective against a wide range of existing methods. Furthermore, this paper proposes a novel end-to-end trainable framework that enhances the robustness of arbitrary FCN-based salient object detection models against adversarial attacks. The proposed framework adopts a novel idea: it first introduces generic noise to destroy adversarial perturbations and then learns to predict saliency maps for input images with the introduced noise. Specifically, the proposed method consists of a segment-wise shielding component, which preserves boundaries and destroys delicate adversarial noise patterns, and a context-aware restoration component, which refines saliency maps through global contrast modeling. The experimental results suggest that the proposed framework significantly improves the performance of state-of-the-art models on a series of datasets.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036
dc.relation.ispartof: IEEE Transactions on Cybernetics
dc.rights: IEEE Transactions on Cybernetics. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Adversarial attack
dc.subject: deep neural network
dc.subject: salient object detection
dc.title: ROSA: Robust Salient Object Detection Against Adversarial Attacks
dc.type: Article
dc.identifier.email: Yu, Y: yzyu@cs.hku.hk
dc.identifier.authority: Yu, Y=rp01415
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TCYB.2019.2914099
dc.identifier.pmid: 31107676
dc.identifier.scopus: eid_2-s2.0-85094931687
dc.identifier.hkuros: 323532
dc.identifier.volume: 50
dc.identifier.issue: 11
dc.identifier.spage: 4835
dc.identifier.epage: 4847
dc.identifier.isi: WOS:000583709500023
dc.publisher.place: United States
