Show simple item record

dc.contributor.author: Northcutt, Curtis
dc.contributor.author: Jiang, Lu
dc.contributor.author: Chuang, Isaac
dc.date.accessioned: 2022-06-10T16:49:05Z
dc.date.available: 2022-06-10T16:49:05Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/1721.1/142946
dc.description.abstract: Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence. Whereas numerous studies have developed these principles independently, here, we combine them, building on the assumption of a class-conditional noise process to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This results in a generalized CL which is provably consistent and experimentally performant. We present sufficient conditions where CL exactly finds label errors, and show CL performance exceeding seven recent competitive approaches for learning with noisy labels on the CIFAR dataset. Uniquely, the CL framework is not coupled to a specific data modality or model (e.g., we use CL to find several label errors in the presumed error-free MNIST dataset and improve sentiment classification on text data in Amazon Reviews). We also employ CL on ImageNet to quantify ontological class overlap (e.g., estimating 645 missile images are mislabeled as their parent class projectile), and moderately increase model accuracy (e.g., for ResNet) by cleaning data prior to training. These results are replicable using the open-source cleanlab release.
dc.language.iso: en
dc.publisher: AI Access Foundation
dc.relation.isversionof: 10.1613/JAIR.1.12125
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Journal of Artificial Intelligence Research
dc.title: Confident Learning: Estimating Uncertainty in Dataset Labels
dc.type: Article
dc.identifier.citation: Northcutt, Curtis, Jiang, Lu and Chuang, Isaac. 2021. "Confident Learning: Estimating Uncertainty in Dataset Labels." Journal of Artificial Intelligence Research, 70.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal: Journal of Artificial Intelligence Research
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2022-06-10T16:38:40Z
dspace.orderedauthors: Northcutt, C; Jiang, L; Chuang, I
dspace.date.submission: 2022-06-10T16:38:54Z
mit.journal.volume: 70
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
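
The abstract above describes CL's central counting step: per-class probability thresholds turn a model's out-of-sample predicted probabilities into an estimate of the joint distribution between given (noisy) and unknown (true) labels. The following is a minimal illustrative sketch of that confident-joint count in NumPy, written from the abstract's description; the function name, variable names, and tie-breaking rule are assumptions for illustration, not the paper's reference code:

    # Sketch of the confident-joint counting idea: count each example into
    # C[given_label, suspected_true_label] when the model's predicted
    # probability for a class clears that class's average self-confidence.
    import numpy as np

    def confident_joint(labels, pred_probs):
        """labels     : (n,) given (possibly noisy) integer labels
           pred_probs : (n, K) out-of-sample predicted probabilities"""
        n, K = pred_probs.shape
        # Per-class threshold: mean self-confidence among examples given label j.
        thresholds = np.array(
            [pred_probs[labels == j, j].mean() for j in range(K)]
        )
        C = np.zeros((K, K), dtype=int)
        for i in range(n):
            above = np.where(pred_probs[i] >= thresholds)[0]
            if above.size == 0:
                continue  # no class is confidently suggested for this example
            # If several classes clear their thresholds, take the most probable.
            j = above[np.argmax(pred_probs[i, above])]
            C[labels[i], j] += 1
        return C

    # Tiny example: the second example (given label 0, but confident class 1)
    # lands off-diagonal, i.e., it is flagged as a likely label error.
    labels = np.array([0, 0, 1])
    pred_probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]])
    print(confident_joint(labels, pred_probs))

Off-diagonal mass in C is what CL prunes and ranks before retraining. The open-source cleanlab release named in the abstract packages this pipeline end to end; in cleanlab 2.x, for example, cleanlab.filter.find_label_issues(labels=labels, pred_probs=pred_probs) returns suspected label errors (check the current documentation, as the API may have changed).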

