Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/TPS-ISA50397.2020.00042
- Scopus: eid_2-s2.0-85100406699
Citations:
- Scopus: 0
Appears in Collections: Conference Paper

Adversarial Objectness Gradient Attacks in Real-time Object Detection Systems
Title | Adversarial Objectness Gradient Attacks in Real-time Object Detection Systems |
---|---|
Authors | Chow, Ka Ho; Liu, Ling; Loper, Margaret; Bae, Juhyun; Gursoy, Mehmet Emre; Truex, Stacey; Wei, Wenqi; Wu, Yanzhao |
Keywords | adversarial attacks; deep neural networks; object detection |
Issue Date | 2020 |
Citation | Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020, 2020, p. 263-272 |
Abstract | Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While DNN-powered object detection systems celebrate many life-enriching opportunities, they also open doors for misuse and abuse. This paper presents a suite of adversarial objectness gradient attacks, coined as TOG, which can cause the state-of-the-art deep object detection networks to suffer from untargeted random attacks or even targeted attacks with three types of specificity: (1) object-vanishing, (2) object-fabrication, and (3) object-mislabeling. Apart from tailoring an adversarial perturbation for each input image, we further demonstrate TOG as a universal attack, which trains a single adversarial perturbation that can be generalized to effectively craft an unseen input with a negligible attack time cost. Also, we apply TOG as an adversarial patch attack, a form of physical attacks, showing its ability to optimize a visually confined patch filled with malicious patterns, deceiving well-trained object detectors to misbehave purposefully. We report our experimental measurements using three benchmark datasets (PASCAL VOC, MS COCO, and INRIA) on models from three dominant detection algorithms (YOLOv3, SSD, and Faster R-CNN). The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems. The source code is available at https://github.com/git-disl/TOG. |
Persistent Identifier | http://hdl.handle.net/10722/343330 |
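The abstract describes TOG as a family of attacks that follow the gradient of a detector's objectness loss to perturb an input image. The reference implementation is in the linked repository (https://github.com/git-disl/TOG); the sketch below only illustrates the general iterative gradient-sign pattern such attacks build on, with a caller-supplied `objectness_grad` function standing in for a real detector's backward pass. The function name, parameters, and defaults are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def tog_style_attack(image, objectness_grad, eps=8/255, alpha=1/255, iters=10):
    """Untargeted iterative attack sketch: repeatedly step along the sign
    of the objectness-loss gradient, keeping the total perturbation inside
    an L-infinity ball of radius eps and the result a valid image in [0, 1].

    image          : float array with pixel values in [0, 1]
    objectness_grad: callable returning d(loss)/d(input) for a detector
                     (here a stand-in for the model's backward pass)
    """
    x = image.copy()
    for _ in range(iters):
        g = objectness_grad(x)                    # gradient of detection loss w.r.t. input
        x = x + alpha * np.sign(g)                # ascend the loss (untargeted)
        x = np.clip(x, image - eps, image + eps)  # project back into the eps-ball
        x = np.clip(x, 0.0, 1.0)                  # keep pixels in valid range
    return x
```

Targeted variants (object-vanishing, object-fabrication, object-mislabeling) would differ only in the loss whose gradient is followed; the projection and clipping steps are the same.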
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chow, Ka Ho | - |
dc.contributor.author | Liu, Ling | - |
dc.contributor.author | Loper, Margaret | - |
dc.contributor.author | Bae, Juhyun | - |
dc.contributor.author | Gursoy, Mehmet Emre | - |
dc.contributor.author | Truex, Stacey | - |
dc.contributor.author | Wei, Wenqi | - |
dc.contributor.author | Wu, Yanzhao | - |
dc.date.accessioned | 2024-05-10T09:07:15Z | - |
dc.date.available | 2024-05-10T09:07:15Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020, 2020, p. 263-272 | - |
dc.identifier.uri | http://hdl.handle.net/10722/343330 | - |
dc.description.abstract | Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While DNN-powered object detection systems celebrate many life-enriching opportunities, they also open doors for misuse and abuse. This paper presents a suite of adversarial objectness gradient attacks, coined as TOG, which can cause the state-of-the-art deep object detection networks to suffer from untargeted random attacks or even targeted attacks with three types of specificity: (1) object-vanishing, (2) object-fabrication, and (3) object-mislabeling. Apart from tailoring an adversarial perturbation for each input image, we further demonstrate TOG as a universal attack, which trains a single adversarial perturbation that can be generalized to effectively craft an unseen input with a negligible attack time cost. Also, we apply TOG as an adversarial patch attack, a form of physical attacks, showing its ability to optimize a visually confined patch filled with malicious patterns, deceiving well-trained object detectors to misbehave purposefully. We report our experimental measurements using three benchmark datasets (PASCAL VOC, MS COCO, and INRIA) on models from three dominant detection algorithms (YOLOv3, SSD, and Faster R-CNN). The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems. The source code is available at https://github.com/git-disl/TOG. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020 | - |
dc.subject | adversarial attacks | - |
dc.subject | deep neural networks | - |
dc.subject | object detection | - |
dc.title | Adversarial Objectness Gradient Attacks in Real-time Object Detection Systems | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TPS-ISA50397.2020.00042 | - |
dc.identifier.scopus | eid_2-s2.0-85100406699 | - |
dc.identifier.spage | 263 | - |
dc.identifier.epage | 272 | - |