dc.contributor.advisor: W. Eric L. Grimson. (en_US)
dc.contributor.author: Niu, Chaowei (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. (en_US)
dc.date.accessioned: 2010-09-02T17:20:27Z
dc.date.available: 2010-09-02T17:20:27Z
dc.date.copyright: 2010 (en_US)
dc.date.issued: 2010 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/58278
dc.description: Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (p. 121-131). (en_US)
dc.description.abstract: We present a systematic framework to learn motion patterns from vehicle tracking data captured by multiple non-overlapping, uncalibrated cameras. We assume that tracks from the individual cameras are available. We define the key problems for such a multi-camera surveillance system and present solutions to each: learning the topology of the camera network, establishing tracking correspondences between different views, learning activity clusters over the global view, and finally detecting abnormal events. First, we present a weighted cross-correlation model that learns the topology of the network without first solving the correspondence problem (a minimal sketch of this step follows the abstract). We use estimates of normalized color and apparent size to measure the similarity of object appearance between different views. This information is used to temporally correlate observations, allowing us to infer possible links between disjoint views and to estimate the associated transition times. From the learned cross-correlation coefficients, the network topology can be fully recovered. Then, we present a MAP framework to match objects along their tracks across non-overlapping camera views, and discuss how the learned topology dramatically reduces the correspondence search space. We propose to learn a color transformation in lαβ space to compensate for varying illumination conditions across views, and to learn the inter-camera transition times and the shape/size transformation between views. (en_US)
dc.description.abstract: (cont.) After modeling the correspondence probability for observations captured by different sources/sinks, we adopt a probabilistic framework that uses this correspondence probability in a principled manner. Tracks are assigned by estimating the correspondences that maximize the posterior probability (MAP) using the Hungarian algorithm, as sketched below. After establishing correspondence, we have a set of stitched trajectories, in which observations from each camera are combined with observations of the same object in subsequent cameras. Finally, we show how to learn activity clusters and detect abnormal activities using a mixture-of-unigrams model with the stitched trajectories as input. We adopt a bag-of-words representation and present a Bayesian probabilistic approach in which trajectories are represented by a mixture model. This model classifies trajectories into different activity clusters and yields representations of both new and abnormal trajectories. (en_US)
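
The topology-learning step lends itself to a compact illustration. Below is a minimal sketch of the appearance-weighted cross-correlation idea for a single sink/source pair, assuming departures and arrivals are given as (timestamp, appearance-vector) pairs; the function name, the Gaussian similarity kernel, and the binning scheme are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch (not the thesis code): appearance-weighted cross-correlation
# between one camera's sink and another camera's source.
import numpy as np

def weighted_cross_correlation(departures, arrivals, max_lag, bin_size=1.0):
    """Accumulate appearance-weighted evidence for each transition-time bin.

    departures / arrivals: lists of (timestamp, appearance_vector) pairs,
    where the appearance vector holds normalized color and apparent size.
    A pronounced peak at lag tau suggests a link between the two views with
    typical transition time tau; a flat response suggests no link.
    """
    n_bins = int(np.ceil(max_lag / bin_size))
    corr = np.zeros(n_bins)
    for t_dep, a_dep in departures:
        for t_arr, a_arr in arrivals:
            lag = t_arr - t_dep
            if 0 <= lag < max_lag:
                # Appearance similarity down-weights coincidental timing
                # matches between observations of different objects.
                diff = np.asarray(a_dep) - np.asarray(a_arr)
                corr[int(lag / bin_size)] += np.exp(-np.sum(diff ** 2))
    # Normalize by the number of departures so links can be compared
    # across sink/source pairs with different traffic volumes.
    return corr / max(len(departures), 1)
```

A link between the two views would then be declared where this response shows a significant peak, with the peak location serving as the transition-time estimate.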
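Given a matrix of correspondence probabilities between departing and arriving tracks (combining the learned color, transition-time, and shape/size models), the MAP assignment reduces to a linear assignment problem. Here is a minimal sketch using SciPy's Hungarian-algorithm solver; the gating threshold `min_prob` is an assumed detail.

```python
# Minimal sketch (not the thesis code): MAP track assignment via the
# Hungarian algorithm, given a precomputed matrix P[i, j] of correspondence
# probabilities between departing track i and arriving track j.
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_assignment(P, min_prob=1e-6):
    """Return (i, j) pairs that maximize the product of correspondence
    probabilities, i.e. minimize the summed negative log-probabilities."""
    cost = -np.log(np.clip(P, min_prob, 1.0))
    rows, cols = linear_sum_assignment(cost)
    # Drop pairings whose probability is effectively zero; those tracks
    # stay unmatched (e.g. the object left the camera network).
    return [(i, j) for i, j in zip(rows, cols) if P[i, j] > min_prob]
```

Maximizing the product of independent correspondence probabilities is equivalent to minimizing the sum of their negative logs, which is exactly the cost the Hungarian algorithm minimizes.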
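For the final stage, a mixture-of-unigrams model can be fit to the stitched trajectories with EM once each trajectory is quantized into bag-of-words counts over a discrete codebook (e.g. position/direction codewords). The sketch below assumes a particular initialization and Laplace smoothing, neither of which is specified by the abstract.

```python
# Minimal EM sketch (not the thesis code) of a mixture-of-unigrams model
# over bag-of-words trajectory representations.
import numpy as np

def mixture_of_unigrams(X, n_clusters, n_iter=100, seed=0):
    """X: (n_trajectories, vocab_size) count matrix.
    Returns cluster priors pi, per-cluster word distributions theta,
    and per-trajectory cluster responsibilities resp."""
    rng = np.random.default_rng(seed)
    n, v = X.shape
    pi = np.full(n_clusters, 1.0 / n_clusters)
    theta = rng.dirichlet(np.ones(v), size=n_clusters)   # (K, V)
    for _ in range(n_iter):
        # E-step: responsibilities, computed in log space for stability.
        log_lik = X @ np.log(theta).T + np.log(pi)       # (N, K)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate priors and word distributions (smoothed).
        pi = resp.mean(axis=0)
        theta = resp.T @ X + 1.0
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta, resp
```

A new trajectory would be assigned to the cluster with the highest responsibility, while one with low likelihood under every learned cluster would be flagged as abnormal, in line with the detection criterion described in the abstract.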
dc.description.statementofresponsibility: by Chaowei Niu. (en_US)
dc.format.extent: 131 p. (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: Patterns of motion in non-overlapping networks using vehicle tracking data (en_US)
dc.type: Thesis (en_US)
dc.description.degree: Ph.D. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 631236067 (en_US)

