Show simple item record

dc.contributor.author	Long, Zhu
dc.contributor.author	Chen, Yuanhao
dc.contributor.author	Yuille, Alan
dc.date.accessioned	2010-10-26T20:36:38Z
dc.date.available	2010-10-26T20:36:38Z
dc.date.issued	2010-06
dc.identifier.issn	0162-8828
dc.identifier.other	INSPEC Accession Number: 11256708
dc.identifier.uri	http://hdl.handle.net/1721.1/59532
dc.description.abstract	In this paper, we address the tasks of detecting, segmenting, parsing, and matching deformable objects. We use a novel probabilistic object model that we call a hierarchical deformable template (HDT). The HDT represents the object by state variables defined over a hierarchy (with typically five levels). The hierarchy is built recursively by composing elementary structures to form more complex structures. A probability distribution (a parameterized exponential model) is defined over the hierarchy to quantify the variability in shape and appearance of the object at multiple scales. To perform inference, i.e., to estimate the most probable states of the hierarchy for an input image, we use a bottom-up algorithm called compositional inference. This algorithm is an approximate version of dynamic programming where approximations (e.g., pruning) are made to ensure that the algorithm is fast while maintaining high performance. We adapt the structure-perceptron algorithm to estimate the parameters of the HDT in a discriminative manner (simultaneously estimating the appearance and shape parameters). More precisely, we specify an exponential distribution for the HDT using a dictionary of potentials, which capture the appearance and shape cues. This dictionary can be large, so the potentials do not need to be handcrafted. Instead, structure-perceptron assigns weights to the potentials so that less important potentials receive small weights (this is like a "soft" form of feature selection). Finally, we provide experimental evaluation of HDTs on different visual tasks, including detection, segmentation, matching (alignment), and parsing. We show that HDTs achieve state-of-the-art performance for these different tasks when evaluated on data sets with ground truth (and when compared to alternative algorithms, which are typically specialized to each task).	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (Grant no. 0413214)	en_US
dc.description.sponsorship	W.M. Keck Foundation	en_US
dc.language.iso	en_US
dc.publisher	Institute of Electrical and Electronics Engineers	en_US
dc.relation.isversionof	http://dx.doi.org/10.1109/tpami.2009.65	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	IEEE	en_US
dc.subject	Hierarchy	en_US
dc.subject	Object parsing	en_US
dc.subject	Segmentation	en_US
dc.subject	Shape matching	en_US
dc.subject	Shape representation	en_US
dc.subject	Structured learning	en_US
dc.title	Learning a Hierarchical Deformable Template for Rapid Deformable Object Parsing	en_US
dc.type	Article	en_US
dc.identifier.citation	Long Zhu, Yuanhao Chen, and A. Yuille. "Learning a Hierarchical Deformable Template for Rapid Deformable Object Parsing." IEEE Transactions on Pattern Analysis and Machine Intelligence 32.6 (2010): 1029-1043. © 2010, IEEE	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.contributor.approver	Zhu, Long
dc.contributor.mitauthor	Long, Zhu
dc.relation.journal	IEEE transactions on pattern analysis and machine intelligence	en_US
dc.eprint.version	Final published version	en_US
dc.identifier.pmid	20431129
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dspace.orderedauthors	Long Zhu; Yuanhao Chen; Yuille, Alan	en
mit.license	PUBLISHER_POLICY	en_US
mit.metadata.status	Complete
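The abstract above describes structure-perceptron training, which assigns weights to a dictionary of potentials so that unimportant potentials receive small weights. A minimal sketch of a generic structured-perceptron update under invented toy data (the candidate parses, potential values, and function names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def infer(w, candidates):
    """Pick the candidate parse whose potentials score highest under
    the weights w. A stand-in for inference over the hierarchy."""
    return int(np.argmax(candidates @ w))

def perceptron_update(w, phi_true, phi_pred, lr=1.0):
    """Structured-perceptron step: move weights toward the ground-truth
    feature (potential) vector and away from the wrongly predicted one.
    Potentials that never help end up with weights near zero, acting
    like a "soft" form of feature selection."""
    return w + lr * (phi_true - phi_pred)

# Toy example: 3 candidate parses, each described by 4 potential values.
candidates = np.array([
    [1.0, 0.0, 2.0, 0.5],
    [0.0, 1.0, 0.0, 2.0],   # ground-truth parse (index 1)
    [0.5, 0.5, 1.0, 1.0],
])
truth = 1
w = np.zeros(4)
for _ in range(10):
    pred = infer(w, candidates)
    if pred == truth:
        break
    w = perceptron_update(w, candidates[truth], candidates[pred])
```

On this toy problem a single update already separates the ground-truth parse from the initially preferred one; the real algorithm iterates over many training images and a much larger potential dictionary.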

