Links for fulltext
(May Require Subscription)
- DOI: 10.1109/QRS51102.2020.00059
- Scopus: 2-s2.0-85099309929
- WOS: WOS:000648778000045
Conference Paper: PEACEPACT: Prioritizing examples to accelerate perturbation-based adversary generation for DNN classification testing
Title | PEACEPACT: Prioritizing examples to accelerate perturbation-based adversary generation for DNN classification testing
---|---
Authors | Li, Z; Zhang, L; Yan, J; Zhang, J; Zhang, Z; Tse, TH
Keywords | perturbation-based adversary generation; DNN classification testing; class distinguishability; random noise; perturbations
Issue Date | 2020
Publisher | IEEE Computer Society. The proceedings' web site is located at https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8424928
Citation | Proceedings of the 2020 IEEE International Conference on Software Quality, Reliability, and Security (QRS ’20), Virtual Conference, Macau, China, 11-14 December 2020
Abstract | Deep neural networks (DNNs) have been widely used in classification tasks. Studies have shown that DNNs may be fooled by artificial examples known as adversaries. A common technique for testing the robustness of a classifier is to apply perturbations (such as random noise) to existing examples and try many of them iteratively, but this is very tedious and time-consuming. In this paper, we propose a technique to select adversaries more effectively. We study the vulnerability of examples by exploiting their class distinguishability. In this way, we can evaluate the probability of generating adversaries from each example, and prioritize all the examples accordingly. We have conducted an empirical study using a classic DNN model on four common datasets. The results reveal that the vulnerability of examples has a strong relationship with distinguishability. The effectiveness of our technique is demonstrated through 98.90% to 99.68% improvements in the F-measure.
Persistent Identifier | http://hdl.handle.net/10722/286759
ISI Accession Number ID | WOS:000648778000045
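The abstract above outlines the general workflow: rank test examples by how distinguishable their classes are, then spend the perturbation budget on the most vulnerable examples first. The sketch below is purely illustrative and does not reproduce the paper's actual algorithm; the `predict_proba` model and the top-two-probability margin used as a distinguishability proxy are hypothetical stand-ins.

```python
# Illustrative sketch only, not the PEACEPACT algorithm itself.
# Idea from the abstract: prioritize examples by a class-distinguishability
# proxy, then generate adversaries via random-noise perturbation trials.
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(x):
    # Hypothetical stand-in for a trained DNN classifier: softmax over
    # fixed linear projections of a 4-dimensional input into 3 classes.
    W = np.linspace(-1.0, 1.0, 3 * x.size).reshape(3, x.size)
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def distinguishability(x):
    # Proxy score (assumption): the margin between the top two predicted
    # class probabilities. A small margin suggests a vulnerable example.
    p = np.sort(predict_proba(x))[::-1]
    return p[0] - p[1]

def try_adversary(x, trials=50, eps=0.5):
    # Perturbation-based generation: add random noise repeatedly and
    # report the first perturbed input whose predicted label flips.
    label = int(np.argmax(predict_proba(x)))
    for _ in range(trials):
        x_adv = x + rng.normal(scale=eps, size=x.shape)
        if int(np.argmax(predict_proba(x_adv))) != label:
            return x_adv
    return None

examples = [rng.normal(size=4) for _ in range(10)]
# Prioritization step: least distinguishable examples are tried first,
# so the perturbation budget is spent where adversaries are most likely.
ordered = sorted(examples, key=distinguishability)
adversaries = [a for a in (try_adversary(x) for x in ordered) if a is not None]
```

The ordering step is where the paper's claimed speedup comes from: without it, the random-noise trials are spread uniformly over all examples, including highly distinguishable ones that rarely yield adversaries.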
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Z | - |
dc.contributor.author | Zhang, L | - |
dc.contributor.author | Yan, J | - |
dc.contributor.author | Zhang, J | - |
dc.contributor.author | Zhang, Z | - |
dc.contributor.author | Tse, TH | - |
dc.date.accessioned | 2020-09-04T13:29:54Z | - |
dc.date.available | 2020-09-04T13:29:54Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Proceedings of the 2020 IEEE International Conference on Software Quality, Reliability, and Security (QRS ’20), Virtual Conference, Macau, China, 11-14 December 2020 | - |
dc.identifier.uri | http://hdl.handle.net/10722/286759 | - |
dc.description.abstract | Deep neural networks (DNNs) have been widely used in classification tasks. Studies have shown that DNNs may be fooled by artificial examples known as adversaries. A common technique for testing the robustness of a classifier is to apply perturbations (such as random noise) to existing examples and try many of them iteratively, but this is very tedious and time-consuming. In this paper, we propose a technique to select adversaries more effectively. We study the vulnerability of examples by exploiting their class distinguishability. In this way, we can evaluate the probability of generating adversaries from each example, and prioritize all the examples accordingly. We have conducted an empirical study using a classic DNN model on four common datasets. The results reveal that the vulnerability of examples has a strong relationship with distinguishability. The effectiveness of our technique is demonstrated through 98.90% to 99.68% improvements in the F-measure. | -
dc.language | eng | - |
dc.publisher | IEEE Computer Society. The proceedings' web site is located at https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8424928 | -
dc.relation.ispartof | IEEE International Conference on Software Quality, Reliability and Security (QRS) | - |
dc.rights | IEEE International Conference on Software Quality, Reliability and Security (QRS). Copyright © IEEE Computer Society. | - |
dc.rights | ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | perturbation-based adversary generation | - |
dc.subject | DNN classification testing | - |
dc.subject | class distinguishability | - |
dc.subject | random noise | - |
dc.subject | perturbations | - |
dc.title | PEACEPACT: Prioritizing examples to accelerate perturbation-based adversary generation for DNN classification testing | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Tse, TH: thtse@cs.hku.hk | - |
dc.identifier.authority | Tse, TH=rp00546 | - |
dc.description.nature | postprint | - |
dc.identifier.doi | 10.1109/QRS51102.2020.00059 | - |
dc.identifier.scopus | 2-s2.0-85099309929 | -
dc.identifier.hkuros | 313909 | - |
dc.identifier.isi | WOS:000648778000045 | - |
dc.publisher.place | United States | - |