Show simple item record

dc.contributor.author | Rubinstein, Michael
dc.contributor.author | Liu, Ce
dc.contributor.author | Freeman, William T.
dc.date.accessioned | 2017-02-15T16:20:20Z
dc.date.available | 2017-02-15T16:20:20Z
dc.date.issued | 2016-03
dc.date.submitted | 2013-07
dc.identifier.issn | 0920-5691
dc.identifier.issn | 1573-1405
dc.identifier.uri | http://hdl.handle.net/1721.1/106941
dc.description.abstract | We present a principled framework for inferring pixel labels in weakly-annotated image datasets. Most previous, example-based approaches to computer vision rely on a large corpus of densely labeled images. However, for large, modern image datasets, such labels are expensive to obtain and are often unavailable. We establish a large-scale graphical model spanning all labeled and unlabeled images, then solve it to infer pixel labels jointly for all images in the dataset while enforcing consistent annotations over similar visual patterns. This model requires significantly less labeled data and assists in resolving ambiguities by propagating inferred annotations from images with stronger local visual evidence to images with weaker local evidence. We apply our proposed framework to two computer vision problems, namely image annotation with semantic segmentation, and object discovery and co-segmentation (segmenting multiple images containing a common object). Extensive numerical evaluations and comparisons show that our method consistently outperforms the state of the art in automatic annotation and semantic labeling, while requiring significantly less labeled data. In contrast to previous co-segmentation techniques, our method manages to discover and segment objects well even in the presence of substantial amounts of noise images (images not containing the common object), as is typical of datasets collected from Internet search. | en_US
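The abstract's core idea of propagating annotations from images with strong evidence to images with weak evidence can be illustrated with a toy label-propagation sketch. This is not the paper's actual graphical model (which operates on pixels via dense correspondence); the nodes, edge weights, and labels below are invented for illustration only.

```python
# Toy sketch (hypothetical data): propagate per-label scores from a few
# labeled "seed" images to unlabeled ones over a visual-similarity graph,
# clamping the seeds and averaging neighbor scores by edge weight.

def propagate_labels(edges, seeds, labels, n_nodes, iters=50):
    """Return the most likely label for each node after propagation."""
    # scores[node][label] -> confidence, seeds start at 1.0
    scores = [{lab: 0.0 for lab in labels} for _ in range(n_nodes)]
    for node, label in seeds.items():
        scores[node][label] = 1.0

    # build undirected weighted adjacency lists
    adj = {i: [] for i in range(n_nodes)}
    for a, b, w in edges:
        adj[a].append((b, w))
        adj[b].append((a, w))

    for _ in range(iters):
        new = [dict(s) for s in scores]
        for node in range(n_nodes):
            if node in seeds:  # labeled nodes stay clamped
                continue
            total = sum(w for _, w in adj[node])
            if total == 0:
                continue
            for lab in labels:
                new[node][lab] = sum(w * scores[nb][lab]
                                     for nb, w in adj[node]) / total
        scores = new
    return [max(s, key=s.get) for s in scores]

# 5 images; image 0 is labeled "car", image 4 is labeled "sky";
# edge weights stand in for visual similarity between images.
edges = [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.4), (3, 4, 0.9)]
seeds = {0: "car", 4: "sky"}
print(propagate_labels(edges, seeds, ["car", "sky"], 5))
# -> ['car', 'car', 'car', 'sky', 'sky']
```

The strongly connected chain 0-1-2 pulls image 2 toward "car" despite its weak link to the "sky" side, mirroring how stronger local evidence dominates the joint inference.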
dc.publisher | Springer US | en_US
dc.relation.isversionof | http://dx.doi.org/10.1007/s11263-016-0894-5 | en_US
dc.rights | Creative Commons Attribution | en_US
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en_US
dc.source | Springer US | en_US
dc.title | Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence | en_US
dc.type | Article | en_US
dc.identifier.citation | Rubinstein, Michael, Ce Liu, and William T. Freeman. “Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence.” International Journal of Computer Vision 119.1 (2016): 23–45. | en_US
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US
dc.contributor.mitauthor | Freeman, William T.
dc.relation.journal | International Journal of Computer Vision | en_US
dc.eprint.version | Final published version | en_US
dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US
eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US
dc.date.updated | 2017-02-02T15:21:12Z
dc.language.rfc3066 | en
dc.rights.holder | The Author(s)
dspace.orderedauthors | Rubinstein, Michael; Liu, Ce; Freeman, William T. | en_US
dspace.embargo.terms | N | en_US
dc.identifier.orcid | https://orcid.org/0000-0002-2231-7995
dspace.mitauthor.error | true
mit.license | PUBLISHER_CC | en_US

