Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence
Author(s)
Rubinstein, Michael; Liu, Ce; Freeman, William T.
Terms of use
Publisher with Creative Commons License: Creative Commons Attribution
Abstract
We present a principled framework for inferring pixel labels in weakly-annotated image datasets. Most previous example-based approaches in computer vision rely on a large corpus of densely labeled images. However, for large, modern image datasets, such labels are expensive to obtain and are often unavailable. We establish a large-scale graphical model spanning all labeled and unlabeled images, and solve it to infer pixel labels jointly for all images in the dataset while enforcing consistent annotations over similar visual patterns. This model requires significantly less labeled data and helps resolve ambiguities by propagating inferred annotations from images with stronger local visual evidence to images with weaker local evidence. We apply the proposed framework to two computer vision problems: image annotation with semantic segmentation, and object discovery and co-segmentation (segmenting multiple images containing a common object). Extensive numerical evaluations and comparisons show that our method consistently outperforms the state of the art in automatic annotation and semantic labeling, while requiring significantly less labeled data. In contrast to previous co-segmentation techniques, our method discovers and segments objects well even in the presence of substantial amounts of noise images (images not containing the common object), as is typical for datasets collected from Internet search.
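The abstract only sketches the joint model; as a rough illustration (using placeholder notation rather than the paper's own), the joint labeling over all images can be viewed as minimizing an MRF energy of the form

E(c) = \sum_i \sum_{p \in I_i} \Phi_i(c_p) \;+\; \lambda_{\mathrm{int}} \sum_i \sum_{(p,q) \in \mathcal{N}_i} \Psi(c_p, c_q) \;+\; \lambda_{\mathrm{ext}} \sum_{(i,j)} \sum_{p \in I_i} \Psi\big(c_p, c_{w_{ij}(p)}\big),

where c_p is the label of pixel p in image I_i, \Phi_i is a per-image data term built from whatever weak annotations are available, the second term enforces smoothness between neighboring pixels within an image, and the third term couples corresponding pixels across images through dense correspondence fields w_{ij}. The symbols \Phi, \Psi, \lambda, and w_{ij} here are illustrative assumptions, not the paper's exact formulation; they are meant only to show how joint inference propagates labels from strongly to weakly evidenced images.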
Date issued
2016-03
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
International Journal of Computer Vision
Publisher
Springer US
Citation
Rubinstein, Michael, Ce Liu, and William T. Freeman. “Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence.” International Journal of Computer Vision 119.1 (2016): 23–45.
Version: Final published version
ISSN
0920-5691
1573-1405