Do We Need More Training Data?
Author(s)
Zhu, Xiangxin; Ramanan, Deva; Fowlkes, Charless C.; Vondrick, Carl Martin
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Datasets for training object recognition systems are steadily increasing in size. This paper investigates the question of whether existing detectors will continue to improve as data grows, or saturate in performance due to limited model complexity and the Bayes risk associated with the feature spaces in which they operate. We focus on the popular paradigm of discriminatively trained templates defined on oriented gradient features. We investigate the performance of mixtures of templates as the number of mixture components and the amount of training data grow. Surprisingly, even with proper treatment of regularization and “outliers”, the performance of classic mixture models appears to saturate quickly (∼10 templates and ∼100 positive training examples per template). This is not a limitation of the feature space: compositional mixtures that share template parameters via parts, and that can synthesize new templates not encountered during training, yield significantly better performance. Based on our analysis, we conjecture that the greatest gains in detection performance will continue to derive from improved representations and learning algorithms that can make efficient use of large datasets.
Date issued
2015-03
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
International Journal of Computer Vision
Publisher
Springer US
Citation
Zhu, Xiangxin et al. “Do We Need More Training Data?” International Journal of Computer Vision 119.1 (2016): 76–92.
Version: Author's final manuscript
ISSN
0920-5691
1573-1405