Conference Paper: Adversarial Deception in Deep Learning: Analysis and Mitigation

Title: Adversarial Deception in Deep Learning: Analysis and Mitigation
Authors: Wei, Wenqi; Liu, Ling; Loper, Margaret; Chow, Ka Ho; Gursoy, Mehmet Emre; Truex, Stacey; Wu, Yanzhao
Keywords: mitigation strategy; targeted and untargeted adversarial attacks; Trust and dependability risks in deep learning
Issue Date: 2020
Citation: Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020, 2020, p. 236-245
Abstract: The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks in deep learning have emerged as one of the dominant security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize adversarial examples in deep learning by studying their adverse effects and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose a strategic input-transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on an adversarial input. We show that the strategic input-transformation teaming defense achieves high defense success rates and is more robust, with high attack-prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
Persistent Identifier: http://hdl.handle.net/10722/343329
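
Note: the record reproduces only the abstract, so the "general formulation of adversarial examples" it mentions is not spelled out here. As context, a minimal sketch of the standard formulation used in this literature (the notation f_theta, x, y, y_t is assumed for illustration, not taken from the paper): given a classifier f_theta and a benign input x with true label y, an untargeted adversarial example x' solves

    % Context only; notation is assumed rather than reproduced from the paper.
    \min_{x'} \; \lVert x' - x \rVert_p
    \quad \text{s.t.} \quad f_\theta(x') \neq y, \quad x' \in [0, 1]^d,

while a targeted attack instead requires f_\theta(x') = y_t for an attacker-chosen label y_t \neq y.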

 

DC Field | Value | Language
dc.contributor.author | Wei, Wenqi | -
dc.contributor.author | Liu, Ling | -
dc.contributor.author | Loper, Margaret | -
dc.contributor.author | Chow, Ka Ho | -
dc.contributor.author | Gursoy, Mehmet Emre | -
dc.contributor.author | Truex, Stacey | -
dc.contributor.author | Wu, Yanzhao | -
dc.date.accessioned | 2024-05-10T09:07:14Z | -
dc.date.available | 2024-05-10T09:07:14Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020, 2020, p. 236-245 | -
dc.identifier.uri | http://hdl.handle.net/10722/343329 | -
dc.description.abstract | The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks in deep learning have emerged as one of the dominant security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize adversarial examples in deep learning by studying their adverse effects and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose a strategic input-transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on an adversarial input. We show that the strategic input-transformation teaming defense achieves high defense success rates and is more robust, with high attack-prevention success rates and low benign false-positive rates, compared to existing representative defense methods. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020 | -
dc.subject | mitigation strategy | -
dc.subject | targeted and untargeted adversarial attacks | -
dc.subject | Trust and dependability risks in deep learning | -
dc.title | Adversarial Deception in Deep Learning: Analysis and Mitigation | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TPS-ISA50397.2020.00039 | -
dc.identifier.scopus | eid_2-s2.0-85100404420 | -
dc.identifier.spage | 236 | -
dc.identifier.epage | 245 | -
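
Note: the abstract describes the defense only at a high level, so the following is an illustrative sketch of the general input-transformation teaming idea rather than the authors' implementation. The transformation choices (pixel shift, color-depth quantization, mild noise), the majority-vote "repair", and the agreement threshold used for "verification" are all assumptions made for this example.

    # Illustrative sketch of an input-transformation teaming defense
    # (not the authors' code): query the model on several independently
    # transformed copies of the input, repair the prediction by majority
    # vote, and verify it by checking how strongly the team agrees.
    from collections import Counter
    from typing import Callable, List, Tuple

    import numpy as np

    def pixel_shift(x: np.ndarray, dx: int = 1, dy: int = 1) -> np.ndarray:
        # Small spatial shift; benign predictions are usually stable under it.
        return np.roll(x, shift=(dy, dx), axis=(0, 1))

    def quantize(x: np.ndarray, levels: int = 8) -> np.ndarray:
        # Reduce color depth (feature squeezing) to wash out small perturbations.
        return np.round(x * (levels - 1)) / (levels - 1)

    def add_noise(x: np.ndarray, sigma: float = 0.02) -> np.ndarray:
        # Mild random noise; assumes pixel values are scaled to [0, 1].
        return np.clip(x + np.random.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

    def teaming_defense(
        model: Callable[[np.ndarray], int],
        x: np.ndarray,
        transforms: List[Callable[[np.ndarray], np.ndarray]],
        agreement_threshold: float = 0.6,  # assumed value, for illustration
    ) -> Tuple[int, bool]:
        # "Repair": majority vote over the team of transformed predictions.
        # "Verify": flag the input as suspicious when the team disagrees.
        votes = [model(t(x)) for t in transforms]
        label, count = Counter(votes).most_common(1)[0]
        verified = count / len(votes) >= agreement_threshold
        return label, verified

    # Example use (hypothetical `classify` returning a class index):
    # label, ok = teaming_defense(classify, image, [pixel_shift, quantize, add_noise])

The sketch rests on the instance-level divergence noted in the abstract: adversarial perturbations tend to lose their effect differently under different transformations, while benign predictions stay stable, so disagreement across the team is a usable signal that the original prediction cannot be trusted.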
