Article: Neyman-Pearson classification: Parametrics and sample size requirement

Title: Neyman-Pearson classification: Parametrics and sample size requirement
Authors: Tong, Xin; Xia, Lucy; Wang, Jiacheng; Feng, Yang
Keywords: Adaptive splitting
Asymmetric error
Classification
Linear discriminant analysis (LDA)
Minimum sample size requirement
Neyman-Pearson (NP) paradigm
NP oracle inequalities
NP umbrella algorithm
Issue Date: 2020
Citation: Journal of Machine Learning Research, 2020, v. 21
Abstract: The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while enforcing the prioritized type I error controlled under some user-specified level α. This paradigm serves naturally in applications such as severe disease diagnosis and spam detection, where people have clear priorities between the two error types. Recently, Tong et al. (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forest) to respect the given type I error (i.e., conditional probability of classifying a class 0 observation as class 1 under the 0-1 coding) upper bound α with high probability, without specific distributional assumptions on the features and the responses. Universal as the umbrella algorithm is, it demands an explicit minimum sample size requirement on class 0, which is often the scarcer class, as in rare disease diagnosis applications. In this work, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm, which does not need the minimum sample size requirement on class 0 observations and thus is suitable for small sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules, we propose four LDA-based NP classifiers, for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed classifier, where the rate for excess type II error benefits from the explicit parametric model assumption. Furthermore, as NP classifiers involve a sample splitting step of class 0 observations, we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers, and this adaptive strategy reduces the type II error of these classifiers.
The proposed NP classifiers are implemented in the R package nproc.
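The nonparametric umbrella algorithm's threshold choice, and the minimum class-0 sample size requirement the abstract refers to, can be sketched as follows. This is an illustrative Python sketch of the order-statistic rank rule from Tong et al. (2018), not the API of the nproc R package; the function names are hypothetical. The idea: if the threshold is set at the k-th smallest of n held-out class-0 scores, the probability that the type I error exceeds α is bounded by a binomial tail, so one picks the smallest rank k for which that tail is at most a tolerance δ.

```python
import math

def umbrella_rank(n, alpha, delta):
    """Smallest rank k such that thresholding at the k-th smallest of n
    held-out class-0 scores keeps the type I error <= alpha with
    probability at least 1 - delta. Returns None if no rank works."""
    for k in range(1, n + 1):
        # P(Binomial(n, 1 - alpha) >= k): the bound on the probability
        # that the k-th order statistic fails to control type I error
        violation = sum(
            math.comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
            for j in range(k, n + 1)
        )
        if violation <= delta:
            return k
    return None  # n is below the minimum class-0 sample size

def min_class0_size(alpha, delta):
    """Minimum n for which a valid rank exists: even the largest order
    statistic (k = n) needs (1 - alpha)^n <= delta."""
    return math.ceil(math.log(delta) / math.log(1 - alpha))
```

For example, at α = δ = 0.05 the rule needs at least 59 class-0 observations before any rank satisfies the bound; this explicit requirement on the scarcer class is what the parametric LDA-based thresholding proposed in the article removes.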
Persistent Identifier: http://hdl.handle.net/10722/354153
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796

 

DC Field | Value
dc.contributor.author | Tong, Xin
dc.contributor.author | Xia, Lucy
dc.contributor.author | Wang, Jiacheng
dc.contributor.author | Feng, Yang
dc.date.accessioned | 2025-02-07T08:46:48Z
dc.date.available | 2025-02-07T08:46:48Z
dc.date.issued | 2020
dc.identifier.citation | Journal of Machine Learning Research, 2020, v. 21
dc.identifier.issn | 1532-4435
dc.identifier.uri | http://hdl.handle.net/10722/354153
dc.description.abstract | The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while enforcing the prioritized type I error controlled under some user-specified level α. This paradigm serves naturally in applications such as severe disease diagnosis and spam detection, where people have clear priorities between the two error types. Recently, Tong et al. (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forest) to respect the given type I error (i.e., conditional probability of classifying a class 0 observation as class 1 under the 0-1 coding) upper bound α with high probability, without specific distributional assumptions on the features and the responses. Universal as the umbrella algorithm is, it demands an explicit minimum sample size requirement on class 0, which is often the scarcer class, as in rare disease diagnosis applications. In this work, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm, which does not need the minimum sample size requirement on class 0 observations and thus is suitable for small sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules, we propose four LDA-based NP classifiers, for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed classifier, where the rate for excess type II error benefits from the explicit parametric model assumption. Furthermore, as NP classifiers involve a sample splitting step of class 0 observations, we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers, and this adaptive strategy reduces the type II error of these classifiers. The proposed NP classifiers are implemented in the R package nproc.
dc.language | eng
dc.relation.ispartof | Journal of Machine Learning Research
dc.subject | Adaptive splitting
dc.subject | Asymmetric error
dc.subject | Classification
dc.subject | Linear discriminant analysis (LDA)
dc.subject | Minimum sample size requirement
dc.subject | Neyman-Pearson (NP) paradigm
dc.subject | NP oracle inequalities
dc.subject | NP umbrella algorithm
dc.title | Neyman-Pearson classification: Parametrics and sample size requirement
dc.type | Article
dc.description.nature | link_to_subscribed_fulltext
dc.identifier.scopus | eid_2-s2.0-85086799828
dc.identifier.volume | 21
dc.identifier.eissn | 1533-7928
