
Article: Neuron Sensitivity-Guided Test Case Selection

Title: Neuron Sensitivity-Guided Test Case Selection
Authors: Huang, Dong; Bu, Qingwen; Fu, Yichao; Qing, Yuhao; Xie, Xiaofei; Chen, Junjie; Cui, Heming
Keywords: Deep learning testing; model interpretation; neuron sensitivity
Issue Date: 27-Sep-2024
Publisher: Association for Computing Machinery (ACM)
Citation: ACM Transactions on Software Engineering and Methodology, 2024, v. 33, n. 7
Abstract

Deep neural networks (DNNs) have been widely deployed in software to address various tasks (e.g., autonomous driving, medical diagnosis). However, they can also produce incorrect behaviors that result in financial losses and even threaten human safety. To reveal and repair incorrect behaviors in DNNs, developers often collect rich unlabeled datasets from the natural world and label them to test DNN models. However, properly labeling such large datasets is highly expensive and time-consuming.

To address this problem, we propose neuron sensitivity-guided test case selection (NSS), which reduces labeling effort by selecting valuable test cases from unlabeled datasets. NSS leverages internal neuron information induced by the test cases to select valuable cases, i.e., those most likely to cause the model to behave incorrectly. We evaluated NSS on four widely used datasets and four well-designed DNN models against state-of-the-art (SOTA) baseline methods. The results show that NSS performs well both in assessing the probability that a test case triggers a failure and in improving the model. Specifically, compared to the baseline approaches, NSS achieves a higher fault detection rate (e.g., when selecting 5% of the test cases from the unlabeled dataset in the MNIST and LeNet1 experiment, NSS obtains an 81.8% fault detection rate, a 20% increase over the SOTA baseline strategies).
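The abstract does not spell out the selection procedure, but the core idea — score unlabeled test cases by how strongly they excite internal neurons, then label only the top-scoring fraction — can be sketched as a toy example. This is a minimal illustration, not the paper's actual algorithm: the perturbation-based sensitivity measure, the `neuron_sensitivity_scores` helper, and the 5% budget parameter are all assumptions chosen for this sketch.

```python
import numpy as np

def neuron_sensitivity_scores(activations_clean, activations_perturbed):
    """Score each test case by the mean absolute change in internal
    neuron activations between the original input and a slightly
    perturbed copy (one plausible reading of 'neuron sensitivity')."""
    # Both arrays have shape (num_cases, num_neurons).
    return np.abs(activations_clean - activations_perturbed).mean(axis=1)

def select_test_cases(scores, budget_fraction=0.05):
    """Return indices of the top-scoring fraction of unlabeled cases,
    i.e. those most likely to trigger incorrect model behavior."""
    k = max(1, int(len(scores) * budget_fraction))
    return np.argsort(scores)[::-1][:k]

# Toy data: 100 unlabeled cases, 10 monitored neurons.
rng = np.random.default_rng(0)
clean = rng.normal(size=(100, 10))
perturbed = clean + rng.normal(scale=0.1, size=(100, 10))
perturbed[:5] += 2.0  # five cases whose neuron activations shift strongly
scores = neuron_sensitivity_scores(clean, perturbed)
chosen = select_test_cases(scores, budget_fraction=0.05)
print(sorted(chosen.tolist()))  # the five high-sensitivity cases
```

Under this sketch, only the selected 5% of cases would be sent for manual labeling, which is how a sensitivity-guided strategy cuts labeling cost.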


Persistent Identifier: http://hdl.handle.net/10722/350208
ISSN: 1049-331X
2023 Impact Factor: 6.6
2023 SCImago Journal Rankings: 1.853


DC Field | Value | Language
dc.contributor.author | Huang, Dong | -
dc.contributor.author | Bu, Qingwen | -
dc.contributor.author | Fu, Yichao | -
dc.contributor.author | Qing, Yuhao | -
dc.contributor.author | Xie, Xiaofei | -
dc.contributor.author | Chen, Junjie | -
dc.contributor.author | Cui, Heming | -
dc.date.accessioned | 2024-10-21T03:56:52Z | -
dc.date.available | 2024-10-21T03:56:52Z | -
dc.date.issued | 2024-09-27 | -
dc.identifier.citation | ACM Transactions on Software Engineering and Methodology, 2024, v. 33, n. 7 | -
dc.identifier.issn | 1049-331X | -
dc.identifier.uri | http://hdl.handle.net/10722/350208 | -
dc.language | eng | -
dc.publisher | Association for Computing Machinery (ACM) | -
dc.relation.ispartof | ACM Transactions on Software Engineering and Methodology | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Deep learning testing | -
dc.subject | model interpretation | -
dc.subject | neuron sensitivity | -
dc.title | Neuron Sensitivity-Guided Test Case Selection | -
dc.type | Article | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.1145/3672454 | -
dc.identifier.scopus | eid_2-s2.0-85206217697 | -
dc.identifier.volume | 33 | -
dc.identifier.issue | 7 | -
dc.identifier.eissn | 1557-7392 | -
dc.identifier.issnl | 1049-331X | -
