Show simple item record

dc.contributor.advisor: William T. Freeman
dc.contributor.author: Xue, Tianfan
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.date.accessioned: 2018-03-02T22:21:27Z
dc.date.available: 2018-03-02T22:21:27Z
dc.date.copyright: 2017
dc.date.issued: 2017
dc.identifier.uri: http://hdl.handle.net/1721.1/113978
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 115-126).
dc.description.abstract: Motion is important for understanding our visual world. The human visual system relies heavily on motion perception to recognize the movement of objects, to infer the 3D geometry of a scene, and to perceive the emotions of other people. Modern computer vision systems also use motion signals extracted from video sequences to infer high-level visual concepts, including human activities and abnormal events. Both human and computer visual systems try to perceive changes in the 3D physical world through its 2D projection, either on the image plane or on our retinas. The observed 2D pixel movement is the result of several factors. First, the image sensor might move, inducing egocentric motion, even when the scene is static. Second, the medium between objects and a camera might change and affect how light transmits from the objects to the sensor, like the shimmering in a hot-road mirage. Finally, the objects in a scene might move, either actively, like a person walking along a street, or passively, like a tree branch that is vibrating due to wind. All of these movements reveal information about our visual world. In this dissertation, we will discuss how to infer physical properties of our visual world from observed 2D movement. First, we show how to infer the depth of a scene from egocentric motion and use this to remove undesired visual obstructions. Second, we relate the slight wiggling motion due to refraction to the movement of hot air and infer the location and velocity of the airflow. Last, we illustrate how to infer the physical properties of objects, such as their deformation space or internal structure, from their motion.
dc.description.statementofresponsibility: by Tianfan Xue
dc.format.extent: 22, 126 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science
dc.title: Exploiting visual motion to understand our visual world
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1023628250

