
Conference Paper: PEACEPACT: Prioritizing examples to accelerate perturbation-based adversary generation for DNN classification testing

Title: PEACEPACT: Prioritizing examples to accelerate perturbation-based adversary generation for DNN classification testing
Authors: Li, Z; Zhang, L; Yan, J; Zhang, J; Zhang, Z; Tse, TH
Keywords: perturbation-based adversary generation; DNN classification testing; class distinguishability; random noise; perturbations
Issue Date: 2020
Publisher: IEEE Computer Society. The conference proceedings are available at https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8424928
Citation: Proceedings of the 2020 IEEE International Conference on Software Quality, Reliability, and Security (QRS ’20), Virtual Conference, Macau, China, 11-14 December 2020
Abstract: Deep neural networks (DNNs) have been widely used in classification tasks. Studies have shown that DNNs may be fooled by artificial examples known as adversaries. A common technique for testing the robustness of a classifier is to apply perturbations (such as random noise) to existing examples and try many of them iteratively, but this process is tedious and time-consuming. In this paper, we propose a technique to select examples for adversary generation more effectively. We study the vulnerability of examples by exploiting their class distinguishability. In this way, we can evaluate the probability of generating adversaries from each example and prioritize all the examples accordingly. We have conducted an empirical study using a classic DNN model on four common datasets. The results reveal that the vulnerability of examples has a strong relationship with distinguishability. The effectiveness of our technique is demonstrated by improvements of 98.90% to 99.68% in the F-measure.
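
The abstract's core idea is that examples whose classes are hard to distinguish are more likely to yield adversaries, so perturbation effort should be spent on them first. The Python sketch below illustrates one plausible reading of that workflow; it is not the authors' implementation. The `model.predict_proba` interface, the top-1/top-2 probability gap as a distinguishability proxy, and the `trials` and `eps` parameters are all assumptions made for illustration.

```python
# Hypothetical sketch of prioritized perturbation-based adversary search.
# Assumptions (not from the paper): `model.predict_proba(x)` returns a 1-D
# softmax vector, and distinguishability is proxied by the gap between the
# two largest class probabilities.
import numpy as np

def distinguishability(probs):
    """Gap between the top two class probabilities.

    A small gap suggests the example sits near a decision boundary,
    so a perturbation is more likely to flip its predicted class.
    """
    top2 = np.sort(probs)[-2:]          # two largest probabilities, ascending
    return top2[1] - top2[0]

def prioritize(model, examples):
    """Order examples so the least distinguishable (most vulnerable) come first."""
    scores = [distinguishability(model.predict_proba(x)) for x in examples]
    return [examples[i] for i in np.argsort(scores)]

def find_adversary(model, x, label, trials=100, eps=0.05, rng=None):
    """Try random-noise perturbations until the predicted class changes."""
    rng = rng if rng is not None else np.random.default_rng()
    for _ in range(trials):
        noise = rng.uniform(-eps, eps, size=x.shape)
        x_adv = np.clip(x + noise, 0.0, 1.0)   # stay in valid input range
        if np.argmax(model.predict_proba(x_adv)) != label:
            return x_adv                        # adversary found
    return None                                 # budget exhausted for this example

# Spend the perturbation budget on prioritized examples first:
# for x in prioritize(model, test_set):
#     adv = find_adversary(model, x, np.argmax(model.predict_proba(x)))
```

Under this scheme, the trial budget is consumed by the least distinguishable examples first, which is the prioritization effect the abstract credits for the reported F-measure improvements.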
Persistent Identifier: http://hdl.handle.net/10722/286759
ISI Accession Number ID: WOS:000648778000045

 

DC Field | Value
dc.contributor.author | Li, Z
dc.contributor.author | Zhang, L
dc.contributor.author | Yan, J
dc.contributor.author | Zhang, J
dc.contributor.author | Zhang, Z
dc.contributor.author | Tse, TH
dc.date.accessioned | 2020-09-04T13:29:54Z
dc.date.available | 2020-09-04T13:29:54Z
dc.date.issued | 2020
dc.identifier.citation | Proceedings of the 2020 IEEE International Conference on Software Quality, Reliability, and Security (QRS ’20), Virtual Conference, Macau, China, 11-14 December 2020
dc.identifier.uri | http://hdl.handle.net/10722/286759
dc.description.abstract | Deep neural networks (DNNs) have been widely used in classification tasks. Studies have shown that DNNs may be fooled by artificial examples known as adversaries. A common technique for testing the robustness of a classifier is to apply perturbations (such as random noise) to existing examples and try many of them iteratively, but this process is tedious and time-consuming. In this paper, we propose a technique to select examples for adversary generation more effectively. We study the vulnerability of examples by exploiting their class distinguishability. In this way, we can evaluate the probability of generating adversaries from each example and prioritize all the examples accordingly. We have conducted an empirical study using a classic DNN model on four common datasets. The results reveal that the vulnerability of examples has a strong relationship with distinguishability. The effectiveness of our technique is demonstrated by improvements of 98.90% to 99.68% in the F-measure.
dc.language | eng
dc.publisher | IEEE Computer Society. The conference proceedings are available at https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8424928
dc.relation.ispartof | IEEE International Conference on Software Quality, Reliability and Security (QRS)
dc.rights | IEEE International Conference on Software Quality, Reliability and Security (QRS). Copyright © IEEE Computer Society.
dc.rights | ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject | perturbation-based adversary generation
dc.subject | DNN classification testing
dc.subject | class distinguishability
dc.subject | random noise
dc.subject | perturbations
dc.title | PEACEPACT: Prioritizing examples to accelerate perturbation-based adversary generation for DNN classification testing
dc.type | Conference_Paper
dc.identifier.email | Tse, TH: thtse@cs.hku.hk
dc.identifier.authority | Tse, TH=rp00546
dc.description.nature | postprint
dc.identifier.doi | 10.1109/QRS51102.2020.00059
dc.identifier.scopus | eid_2-s2.0-85099309929
dc.identifier.hkuros | 313909
dc.identifier.isi | WOS:000648778000045
dc.publisher.place | United States
