Article: Imbalanced classification: A paradigm-based review

Title: Imbalanced classification: A paradigm-based review
Authors: Feng, Yang; Zhou, Min; Tong, Xin
Keywords: binary classification; classical classification (CC) paradigm; cost-sensitive (CS) learning paradigm; imbalance ratio; imbalanced data; Neyman–Pearson (NP) paradigm; resampling methods
Issue Date: 2021
Citation: Statistical Analysis and Data Mining, 2021, v. 14, n. 5, p. 383-406
Abstract: A common issue for classification in scientific research and industry is the existence of imbalanced classes. When sample sizes of different classes are imbalanced in training data, naively implementing a classification method often leads to unsatisfactory prediction results on test data. Multiple resampling techniques have been proposed to address the class imbalance issue. Yet, there is no general guidance on when to use each technique. In this article, we provide a paradigm-based review of the common resampling techniques for binary classification under imbalanced class sizes. The paradigms we consider include the classical paradigm that minimizes the overall classification error, the cost-sensitive learning paradigm that minimizes a cost-adjusted weighted combination of type I and type II errors, and the Neyman–Pearson paradigm that minimizes the type II error subject to a type I error constraint. Under each paradigm, we investigate the combination of the resampling techniques and a few state-of-the-art classification methods. For each pair of resampling technique and classification method, we use simulation studies and a real dataset on credit card fraud to study the performance under different evaluation metrics. From these extensive numerical experiments, we demonstrate, under each classification paradigm, the complex dynamics among resampling techniques, base classification methods, evaluation metrics, and imbalance ratios. We also summarize a few takeaway messages regarding the choices of resampling techniques and base classification methods, which could be helpful for practitioners.
Persistent Identifier: http://hdl.handle.net/10722/354201
ISSN: 1932-1864
2023 Impact Factor: 2.1
2023 SCImago Journal Rankings: 0.625
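The three paradigms contrasted in the abstract above can be sketched as optimization problems over a classifier φ. The notation below is a common formalization and is not taken from the article itself: R_0(φ) denotes the type I error P(φ(X) = 1 | Y = 0), R_1(φ) the type II error P(φ(X) = 0 | Y = 1), c_0 and c_1 the misclassification costs, and α the type I error bound.

\min_{\phi} \; P(\phi(X) \neq Y)   \quad\text{(classical paradigm: overall classification error)}

\min_{\phi} \; c_0 R_0(\phi) + c_1 R_1(\phi)   \quad\text{(cost-sensitive paradigm: cost-weighted type I and type II errors)}

\min_{\phi} \; R_1(\phi) \ \text{ subject to } \ R_0(\phi) \le \alpha   \quad\text{(Neyman–Pearson paradigm)}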

 

DC Field | Value | Language
dc.contributor.author | Feng, Yang | -
dc.contributor.author | Zhou, Min | -
dc.contributor.author | Tong, Xin | -
dc.date.accessioned | 2025-02-07T08:47:08Z | -
dc.date.available | 2025-02-07T08:47:08Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Statistical Analysis and Data Mining, 2021, v. 14, n. 5, p. 383-406 | -
dc.identifier.issn | 1932-1864 | -
dc.identifier.uri | http://hdl.handle.net/10722/354201 | -
dc.description.abstract | A common issue for classification in scientific research and industry is the existence of imbalanced classes. When sample sizes of different classes are imbalanced in training data, naively implementing a classification method often leads to unsatisfactory prediction results on test data. Multiple resampling techniques have been proposed to address the class imbalance issue. Yet, there is no general guidance on when to use each technique. In this article, we provide a paradigm-based review of the common resampling techniques for binary classification under imbalanced class sizes. The paradigms we consider include the classical paradigm that minimizes the overall classification error, the cost-sensitive learning paradigm that minimizes a cost-adjusted weighted combination of type I and type II errors, and the Neyman–Pearson paradigm that minimizes the type II error subject to a type I error constraint. Under each paradigm, we investigate the combination of the resampling techniques and a few state-of-the-art classification methods. For each pair of resampling technique and classification method, we use simulation studies and a real dataset on credit card fraud to study the performance under different evaluation metrics. From these extensive numerical experiments, we demonstrate, under each classification paradigm, the complex dynamics among resampling techniques, base classification methods, evaluation metrics, and imbalance ratios. We also summarize a few takeaway messages regarding the choices of resampling techniques and base classification methods, which could be helpful for practitioners. | -
dc.language | eng | -
dc.relation.ispartof | Statistical Analysis and Data Mining | -
dc.subject | binary classification | -
dc.subject | classical classification (CC) paradigm | -
dc.subject | cost-sensitive (CS) learning paradigm | -
dc.subject | imbalance ratio | -
dc.subject | imbalanced data | -
dc.subject | Neyman–Pearson (NP) paradigm | -
dc.subject | resampling methods | -
dc.title | Imbalanced classification: A paradigm-based review | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1002/sam.11538 | -
dc.identifier.scopus | eid_2-s2.0-85111707854 | -
dc.identifier.volume | 14 | -
dc.identifier.issue | 5 | -
dc.identifier.spage | 383 | -
dc.identifier.epage | 406 | -
dc.identifier.eissn | 1932-1872 | -
