| dc.contributor.author | Barbu, A | |
| dc.contributor.author | Mayo, D | |
| dc.contributor.author | Alverio, J | |
| dc.contributor.author | Luo, W | |
| dc.contributor.author | Wang, C | |
| dc.contributor.author | Gutfreund, D | |
| dc.contributor.author | Tenenbaum, J | |
| dc.contributor.author | Katz, B | |
| dc.date.accessioned | 2021-12-07T14:17:49Z | |
| dc.date.available | 2021-12-07T14:17:49Z | |
| dc.date.issued | 2019-01-01 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/138345 | |
| dc.description.abstract | © 2019 Neural information processing systems foundation. All rights reserved. We collect a large real-world test set, ObjectNet, for object recognition with controls where object backgrounds, rotations, and imaging viewpoints are random. Most scientific experiments have controls: confounds are removed from the data to ensure that subjects cannot perform a task by exploiting trivial correlations in the data. Historically, large machine learning and computer vision datasets have lacked such controls. This has resulted in models that must be fine-tuned for new datasets and that perform better on benchmark datasets than in real-world applications. When tested on ObjectNet, object detectors show a 40-45% drop in performance with respect to their performance on other benchmarks, due to the controls for biases. Controls make ObjectNet robust to fine-tuning, which yields only small performance increases. We develop a highly automated platform that enables gathering datasets with controls by crowdsourcing image capturing and annotation. ObjectNet is the same size as the ImageNet test set (50,000 images) and, by design, does not come paired with a training set, in order to encourage generalization. The dataset is both easier than ImageNet - objects are largely centered and unoccluded - and harder, due to the controls. Although we focus on object recognition here, data with controls can be gathered at scale using automated tools throughout machine learning to generate datasets that exercise models in new ways, thus providing valuable feedback to researchers. This work opens up new avenues for research in generalizable, robust, and more human-like computer vision, and in creating datasets where results are predictive of real-world performance. | en_US |
| dc.language.iso | en | |
| dc.relation.isversionof | https://papers.nips.cc/paper/2019/hash/97af07a14cacba681feacf3012730892-Abstract.html | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | Neural Information Processing Systems (NIPS) | en_US |
| dc.title | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Barbu, A, Mayo, D, Alverio, J, Luo, W, Wang, C et al. 2019. "ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models." Advances in Neural Information Processing Systems, 32. | |
| dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | |
| dc.contributor.department | Center for Brains, Minds, and Machines | |
| dc.contributor.department | MIT-IBM Watson AI Lab | |
| dc.relation.journal | Advances in Neural Information Processing Systems | en_US |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2021-12-07T14:15:26Z | |
| dspace.orderedauthors | Barbu, A; Mayo, D; Alverio, J; Luo, W; Wang, C; Gutfreund, D; Tenenbaum, J; Katz, B | en_US |
| dspace.date.submission | 2021-12-07T14:15:31Z | |
| mit.journal.volume | 32 | en_US |
| mit.license | PUBLISHER_POLICY | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |