Conference Paper: Adversarial Objectness Gradient Attacks in Real-time Object Detection Systems

Title: Adversarial Objectness Gradient Attacks in Real-time Object Detection Systems
Authors: Chow, Ka Ho; Liu, Ling; Loper, Margaret; Bae, Juhyun; Gursoy, Mehmet Emre; Truex, Stacey; Wei, Wenqi; Wu, Yanzhao
Keywords: adversarial attacks; deep neural networks; object detection
Issue Date: 2020
Citation: Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020, 2020, p. 263-272
Abstract: Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While DNN-powered object detection systems celebrate many life-enriching opportunities, they also open doors for misuse and abuse. This paper presents a suite of adversarial objectness gradient attacks, coined as TOG, which can cause the state-of-the-art deep object detection networks to suffer from untargeted random attacks or even targeted attacks with three types of specificity: (1) object-vanishing, (2) object-fabrication, and (3) object-mislabeling. Apart from tailoring an adversarial perturbation for each input image, we further demonstrate TOG as a universal attack, which trains a single adversarial perturbation that can be generalized to effectively craft an unseen input with a negligible attack time cost. Also, we apply TOG as an adversarial patch attack, a form of physical attacks, showing its ability to optimize a visually confined patch filled with malicious patterns, deceiving well-trained object detectors to misbehave purposefully. We report our experimental measurements using three benchmark datasets (PASCAL VOC, MS COCO, and INRIA) on models from three dominant detection algorithms (YOLOv3, SSD, and Faster R-CNN). The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems. The source code is available at https://github.com/git-disl/TOG.
Persistent Identifier: http://hdl.handle.net/10722/343330
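
The abstract describes TOG as a family of gradient attacks that perturb an input image so that a detector's objectness-related losses are driven toward the attacker's goal. Below is a minimal sketch of that idea, not the authors' released implementation (their code is at https://github.com/git-disl/TOG): it assumes torchvision's Faster R-CNN, whose training-mode forward pass returns a loss dict including 'loss_objectness', uses the detector's own clean predictions as pseudo ground truth, and ascends the objectness and classification losses under an L-infinity budget. All hyperparameters are illustrative guesses.

    # Illustrative objectness-gradient attack sketch (PGD-style), assuming
    # torchvision >= 0.13. Not the TOG code from the paper.
    import torch
    import torchvision

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").to(device)
    model.eval()

    def tog_style_attack(image, eps=8 / 255, alpha=2 / 255, steps=10):
        # image: float tensor [3, H, W] with values in [0, 1].
        image = image.to(device)

        # Use the detector's own clean predictions as pseudo ground truth.
        with torch.no_grad():
            pred = model([image])[0]
        keep = pred["scores"] > 0.5
        targets = [{"boxes": pred["boxes"][keep], "labels": pred["labels"][keep]}]
        if targets[0]["boxes"].numel() == 0:
            return image  # nothing detected on the clean image, nothing to attack

        adv = image.clone()
        model.train()  # training mode makes the model return its loss dict
        for _ in range(steps):
            adv = adv.detach().requires_grad_(True)
            losses = model([adv], targets)
            loss = losses["loss_objectness"] + losses["loss_classifier"]
            loss.backward()
            with torch.no_grad():
                adv = adv + alpha * adv.grad.sign()           # ascend the detection loss
                adv = image + (adv - image).clamp(-eps, eps)  # stay inside the L-inf ball
                adv = adv.clamp(0.0, 1.0)
        model.eval()
        return adv.detach()

To try it, load an image as a float tensor in [0, 1], e.g. adv = tog_style_attack(torchvision.io.read_image("street.jpg").float() / 255) (the filename is hypothetical), then compare model([image]) with model([adv]) in eval mode; a successful run should visibly degrade the clean detections.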


DC Field: Value
dc.contributor.author: Chow, Ka Ho
dc.contributor.author: Liu, Ling
dc.contributor.author: Loper, Margaret
dc.contributor.author: Bae, Juhyun
dc.contributor.author: Gursoy, Mehmet Emre
dc.contributor.author: Truex, Stacey
dc.contributor.author: Wei, Wenqi
dc.contributor.author: Wu, Yanzhao
dc.date.accessioned: 2024-05-10T09:07:15Z
dc.date.available: 2024-05-10T09:07:15Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020, 2020, p. 263-272
dc.identifier.uri: http://hdl.handle.net/10722/343330
dc.description.abstract: Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While DNN-powered object detection systems celebrate many life-enriching opportunities, they also open doors for misuse and abuse. This paper presents a suite of adversarial objectness gradient attacks, coined as TOG, which can cause the state-of-the-art deep object detection networks to suffer from untargeted random attacks or even targeted attacks with three types of specificity: (1) object-vanishing, (2) object-fabrication, and (3) object-mislabeling. Apart from tailoring an adversarial perturbation for each input image, we further demonstrate TOG as a universal attack, which trains a single adversarial perturbation that can be generalized to effectively craft an unseen input with a negligible attack time cost. Also, we apply TOG as an adversarial patch attack, a form of physical attacks, showing its ability to optimize a visually confined patch filled with malicious patterns, deceiving well-trained object detectors to misbehave purposefully. We report our experimental measurements using three benchmark datasets (PASCAL VOC, MS COCO, and INRIA) on models from three dominant detection algorithms (YOLOv3, SSD, and Faster R-CNN). The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems. The source code is available at https://github.com/git-disl/TOG.
dc.language: eng
dc.relation.ispartof: Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020
dc.subject: adversarial attacks
dc.subject: deep neural networks
dc.subject: object detection
dc.title: Adversarial Objectness Gradient Attacks in Real-time Object Detection Systems
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TPS-ISA50397.2020.00042
dc.identifier.scopus: eid_2-s2.0-85100406699
dc.identifier.spage: 263
dc.identifier.epage: 272
