
dc.contributor.author: Rubinstein, Michael
dc.contributor.author: Liu, Ce
dc.contributor.author: Freeman, William T.
dc.date.accessioned: 2015-12-16T03:27:25Z
dc.date.available: 2015-12-16T03:27:25Z
dc.date.issued: 2012-09
dc.identifier.isbn: 1-901725-46-4
dc.identifier.uri: http://hdl.handle.net/1721.1/100283
dc.description.abstract: Although dense, long-range motion trajectories are a prominent representation of motion in videos, there is still no good solution for constructing dense motion tracks in a truly long-range fashion. Ideally, we would want every scene feature that appears in multiple, not necessarily contiguous, parts of the sequence to be associated with the same motion track. Despite this reasonable and clearly stated objective, there has been surprisingly little work on general-purpose algorithms that can accomplish this task. State-of-the-art dense motion trackers process the sequence incrementally, frame by frame, and by design associate features that disappear and reappear in the video with different tracks, thereby losing important information about the long-term motion signal. In this paper, we strive towards an algorithm for producing generic long-range motion trajectories that are robust to occlusion, deformation, and camera motion. We leverage accurate local (short-range) trajectories produced by current motion tracking methods and use them as an initial estimate for a global (long-range) solution. Our algorithm re-correlates the short trajectories and links them into a long-range motion representation by formulating a combinatorial assignment problem that is defined and optimized globally over the entire sequence. This allows us to correlate features in arbitrarily distant parts of the sequence, as well as to handle tracking ambiguities through spatiotemporal regularization. We report the results of the algorithm on both synthetic and natural videos, and evaluate the long-range motion representation for action recognition. [en_US]
dc.description.sponsorship: National Science Foundation (U.S.) (Grant CGV 1111415) [en_US]
dc.description.sponsorship: NVIDIA Corporation (Fellowship) [en_US]
dc.language.iso: en_US
dc.publisher: British Machine Vision Association [en_US]
dc.relation.isversionof: http://dx.doi.org/10.5244/C.26.53 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: MIT web domain [en_US]
dc.title: Towards Longer Long-Range Motion Trajectories [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Rubinstein, Michael, Ce Liu, and William T. Freeman. “Towards Longer Long-Range Motion Trajectories.” British Machine Vision Conference 2012 (2012). [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.mitauthor: Rubinstein, Michael [en_US]
dc.contributor.mitauthor: Freeman, William T. [en_US]
dc.relation.journal: Proceedings of the British Machine Vision Conference 2012 [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: Rubinstein, Michael; Liu, Ce; Freeman, William T. [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-3707-3807
dc.identifier.orcid: https://orcid.org/0000-0002-2231-7995
mit.license: PUBLISHER_POLICY [en_US]
mit.metadata.status: Complete
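
The abstract describes linking accurate short-range trajectories into long-range ones by solving an assignment problem defined globally over the sequence. The Python sketch below illustrates only the basic linking step on toy data; it is not the authors' algorithm (their formulation is combinatorial, global over the entire sequence, and spatiotemporally regularized), and all function and variable names here are hypothetical. It matches tracks that end to tracks that start later by running the Hungarian algorithm on descriptor distances.

    # Minimal sketch: link track endpoints to later track starts via a
    # linear assignment problem. Toy stand-in for the paper's global,
    # spatiotemporally regularized combinatorial formulation.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def link_tracks(end_feats, start_feats, max_cost=1.0):
        """Match tracks that end (end_feats: N x D descriptors at each
        track's last frame) to tracks that start later (start_feats:
        M x D). Returns a list of (end_idx, start_idx) links."""
        # Cost = Euclidean distance between descriptors; lower is better.
        cost = np.linalg.norm(
            end_feats[:, None, :] - start_feats[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        # Keep only plausible links; leave expensive matches unlinked,
        # so features with no good reappearance stay unmatched.
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ends = rng.normal(size=(4, 8))
        # Three tracks "reappear" later, in shuffled order, slightly perturbed.
        starts = ends[[2, 0, 3]] + 0.01 * rng.normal(size=(3, 8))
        print(link_tracks(ends, starts, max_cost=0.5))  # [(0, 1), (2, 0), (3, 2)]

A real system would build the cost from appearance, position, and motion continuity, and resolve ambiguities jointly over the whole sequence rather than greedily per gap, as the abstract emphasizes.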

