
dc.contributor.author: Zhao, Anqi
dc.contributor.author: Feng, Yang
dc.contributor.author: Wang, Lie
dc.contributor.author: Tong, Xin
dc.date.accessioned: 2018-07-11T17:36:56Z
dc.date.available: 2018-07-11T17:36:56Z
dc.date.issued: 2016-01
dc.identifier.issn: 1532-4435
dc.identifier.issn: 1533-7928
dc.identifier.uri: http://hdl.handle.net/1721.1/116907
dc.description.abstract: Most existing binary classification methods target the optimization of the overall classification risk and may fail to serve real-world applications such as cancer diagnosis, where users are more concerned with the risk of misclassifying one specific class than the other. The Neyman-Pearson (NP) paradigm was introduced in this context as a novel statistical framework for handling asymmetric type I/II error priorities. It seeks classifiers with minimal type II error subject to a type I error constrained under a user-specified level. This article is the first attempt to construct classifiers with guaranteed theoretical performance under the NP paradigm in high-dimensional settings. Based on the fundamental Neyman-Pearson Lemma, we use a plug-in approach to construct NP-type classifiers for Naive Bayes models. The proposed classifiers satisfy the NP oracle inequalities, which are natural NP-paradigm counterparts of the oracle inequalities in classical binary classification. Besides their desirable theoretical properties, we also demonstrate their numerical advantages in prioritized error control via both simulation and real data studies. (en_US)
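The abstract describes a plug-in NP-type classifier: estimate the class-conditional densities with a Naive Bayes model, score points by the estimated likelihood ratio, and pick a threshold so that the type I error stays below a user-specified level alpha. The following is a minimal, self-contained sketch of that general idea, not the authors' exact construction or guarantee: it uses scikit-learn's GaussianNB, synthetic data, and a simple empirical-quantile threshold calibrated on a held-out class-0 sample, all of which are illustrative assumptions.

# Sketch of a plug-in Neyman-Pearson classifier (illustrative; not the paper's
# exact procedure). Class 0 is the "important" class whose misclassification
# rate (type I error) should stay at or below alpha.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data with a mean shift between classes.
n, d = 2000, 10
X0 = rng.normal(0.0, 1.0, size=(n, d))   # class 0
X1 = rng.normal(0.7, 1.0, size=(n, d))   # class 1
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Split the class-0 training points: one half fits the density model,
# the other half calibrates the threshold used for error control.
idx0 = np.where(y_tr == 0)[0]
idx1 = np.where(y_tr == 1)[0]
fit0, cal0 = np.array_split(rng.permutation(idx0), 2)

model = GaussianNB().fit(X_tr[np.concatenate([fit0, idx1])],
                         y_tr[np.concatenate([fit0, idx1])])

def np_score(model, X):
    # Log-posterior difference; monotone in the estimated likelihood ratio.
    logp = model.predict_log_proba(X)     # columns follow model.classes_ = [0, 1]
    return logp[:, 1] - logp[:, 0]

# Threshold at a high empirical quantile of held-out class-0 scores, so that
# roughly at most an alpha fraction of class-0 points are labeled class 1.
alpha = 0.05
threshold = np.quantile(np_score(model, X_tr[cal0]), 1 - alpha)

y_pred = (np_score(model, X_te) > threshold).astype(int)
type1 = np.mean(y_pred[y_te == 0] == 1)   # class-0 points called class 1
type2 = np.mean(y_pred[y_te == 1] == 0)   # class-1 points called class 0
print(f"empirical type I error:  {type1:.3f} (target <= {alpha})")
print(f"empirical type II error: {type2:.3f}")

The empirical-quantile calibration above only controls the type I error approximately on average; the NP oracle inequalities discussed in the article concern high-probability control, which this sketch does not attempt to reproduce.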
dc.publisher: JMLR, Inc. (en_US)
dc.relation.isversionof: https://dl.acm.org/citation.cfm?id=3053494 (en_US)
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. (en_US)
dc.source: Journal of Machine Learning Research (en_US)
dc.title: Neyman-Pearson Classification under High-Dimensional Settings (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Zhao, Anqi et al. "Neyman-Pearson Classification under High-Dimensional Settings." Journal of Machine Learning Research, 17, 2016, pp. 7469-7507 (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Mathematics (en_US)
dc.contributor.mitauthor: Wang, Lie
dc.relation.journal: Journal of Machine Learning Research (en_US)
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/JournalArticle (en_US)
eprint.status: http://purl.org/eprint/status/PeerReviewed (en_US)
dc.date.updated: 2018-02-23T16:03:09Z
dspace.orderedauthors: Zhao, Anqi; Feng, Yang; Wang, Lie; Tong, Xin (en_US)
dspace.embargo.terms: N (en_US)
dc.identifier.orcid: https://orcid.org/0000-0003-3582-8898
mit.license: PUBLISHER_POLICY (en_US)

