Article: Connecting the average and the non-average: A study of the rates of fault detection in testing WS-BPEL services

Title: Connecting the average and the non-average: A study of the rates of fault detection in testing WS-BPEL services
Authors: Jia, C; Mei, L; Chan, WK; Yu, YT; Tse, TH
Keywords: XML-based artifact; WS-BPEL; Test case prioritization; Average scenario; Adverse scenario
Issue Date: 2015
Publisher: IGI Global. The Journal's web site is located at http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/IJWSR
Citation: International Journal of Web Services Research, 2015, v. 12, n. 3, p. 1-24
Abstract: Many existing studies measure the effectiveness of test case prioritization techniques by their average performance over a set of test suites. However, in each regression test session, a real-world developer may only afford to apply one prioritization technique to one test suite to test a service once, even if this application results in an adverse scenario in which the actual performance in the session falls far below the average result achievable by the same technique over the same test suite for the same application. This indicates that assessing the average performance of such a technique cannot provide adequate confidence for developers to apply it. We ask two questions: To what extent does the effectiveness of prioritization techniques in average scenarios correlate with that in adverse scenarios? Moreover, to what extent may a design factor of this class of techniques affect the effectiveness of prioritization in different types of scenarios? To the best of our knowledge, this paper reports the first controlled experiment to study these two new research questions, through more than 300 million APFD and HMFD data points produced from 19 techniques, eight WS-BPEL benchmarks, and 1000 test cases prioritized by each technique 1000 times. A main result reveals a strong and linear correlation between effectiveness in the average scenarios and effectiveness in the adverse scenarios. Another interesting result is that, within many pairs of levels of the same design factor, the relative strengths of the two levels change significantly across the wide spectrum of prioritized test suites produced by the same techniques over the same test suite in testing the same benchmarks, and the results obtained in the average scenarios are more similar to those at the more effective end than otherwise. This work provides the first piece of strong evidence for the research community to re-assess how they develop and validate their techniques in the average scenarios and beyond.
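Note: the paper's effectiveness measures are not defined on this record page. As a point of reference, APFD (Average Percentage of Faults Detected) is the standard rate-of-fault-detection metric for prioritized test suites; the minimal Python sketch below computes it for one prioritized suite over a toy fault matrix. The function and data names are illustrative only, and the paper's HMFD metric is not reproduced here.

    # Illustrative sketch: APFD for one prioritized test suite.
    # APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2 * n),
    # where n is the number of test cases, m the number of faults, and
    # TF_i the 1-based position of the first test case detecting fault i.

    def apfd(prioritized_tests, detects):
        """prioritized_tests: test IDs in execution order.
        detects: maps each test ID to the set of fault IDs it reveals.
        Assumes every fault is detected by at least one test in the suite."""
        n = len(prioritized_tests)
        faults = set().union(*detects.values())
        m = len(faults)
        first_detect = {}
        for position, test in enumerate(prioritized_tests, start=1):
            for fault in detects.get(test, set()):
                # Record only the earliest position at which each fault is found.
                first_detect.setdefault(fault, position)
        return 1 - sum(first_detect.values()) / (n * m) + 1 / (2 * n)

    # Toy example: 4 tests, 3 faults.
    order = ["t3", "t1", "t4", "t2"]
    matrix = {"t1": {"f1"}, "t2": {"f2"}, "t3": {"f1", "f3"}, "t4": set()}
    print(apfd(order, matrix))  # f1, f3 found at position 1; f2 at position 4 -> 0.625

A higher APFD means faults are detected earlier in the prioritized order; the study's "average scenario" corresponds to the mean of such values over many prioritized orderings, while an "adverse scenario" corresponds to the low tail.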
Persistent Identifier: http://hdl.handle.net/10722/220464
ISSN: 1545-7362
2023 Impact Factor: 0.8
2023 SCImago Journal Rankings: 0.220
ISI Accession Number ID: WOS:000358073300001


DC Field: Value
dc.contributor.author: Jia, C
dc.contributor.author: Mei, L
dc.contributor.author: Chan, WK
dc.contributor.author: Yu, YT
dc.contributor.author: Tse, TH
dc.date.accessioned: 2015-10-16T06:43:10Z
dc.date.available: 2015-10-16T06:43:10Z
dc.date.issued: 2015
dc.identifier.citation: International Journal of Web Services Research, 2015, v. 12, n. 3, p. 1-24
dc.identifier.issn: 1545-7362
dc.identifier.uri: http://hdl.handle.net/10722/220464
dc.description.abstract: Many existing studies measure the effectiveness of test case prioritization techniques by their average performance over a set of test suites. However, in each regression test session, a real-world developer may only afford to apply one prioritization technique to one test suite to test a service once, even if this application results in an adverse scenario in which the actual performance in the session falls far below the average result achievable by the same technique over the same test suite for the same application. This indicates that assessing the average performance of such a technique cannot provide adequate confidence for developers to apply it. We ask two questions: To what extent does the effectiveness of prioritization techniques in average scenarios correlate with that in adverse scenarios? Moreover, to what extent may a design factor of this class of techniques affect the effectiveness of prioritization in different types of scenarios? To the best of our knowledge, this paper reports the first controlled experiment to study these two new research questions, through more than 300 million APFD and HMFD data points produced from 19 techniques, eight WS-BPEL benchmarks, and 1000 test cases prioritized by each technique 1000 times. A main result reveals a strong and linear correlation between effectiveness in the average scenarios and effectiveness in the adverse scenarios. Another interesting result is that, within many pairs of levels of the same design factor, the relative strengths of the two levels change significantly across the wide spectrum of prioritized test suites produced by the same techniques over the same test suite in testing the same benchmarks, and the results obtained in the average scenarios are more similar to those at the more effective end than otherwise. This work provides the first piece of strong evidence for the research community to re-assess how they develop and validate their techniques in the average scenarios and beyond.
dc.language: eng
dc.publisher: IGI Global. The Journal's web site is located at http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/IJWSR
dc.relation.ispartof: International Journal of Web Services Research
dc.subject: XML-based artifact
dc.subject: WS-BPEL
dc.subject: Test case prioritization
dc.subject: Average scenario
dc.subject: Adverse scenario
dc.title: Connecting the average and the non-average: A study of the rates of fault detection in testing WS-BPEL services
dc.type: Article
dc.identifier.email: Tse, TH: thtse@cs.hku.hk
dc.identifier.authority: Tse, TH=rp00546
dc.identifier.doi: 10.4018/IJWSR.2015070101
dc.identifier.scopus: eid_2-s2.0-84944454640
dc.identifier.hkuros: 255715
dc.identifier.volume: 12
dc.identifier.issue: 3
dc.identifier.spage: 1
dc.identifier.epage: 24
dc.identifier.isi: WOS:000358073300001
dc.publisher.place: United States
dc.identifier.issnl: 1545-7362
