Show simple item record

dc.contributor.author: Wu, Hao-Yu
dc.contributor.author: Rubinstein, Michael
dc.contributor.author: Shih, Eugene
dc.contributor.author: Guttag, John V.
dc.contributor.author: Durand, Fredo
dc.contributor.author: Freeman, William T.
dc.date.accessioned: 2014-05-14T19:43:13Z
dc.date.available: 2014-05-14T19:43:13Z
dc.date.issued: 2012-07
dc.identifier.issn: 0730-0301
dc.identifier.uri: http://hdl.handle.net/1721.1/86955
dc.description.abstract: Our goal is to reveal temporal variations in videos that are difficult or impossible to see with the naked eye and display them in an indicative manner. Our method, which we call Eulerian Video Magnification, takes a standard video sequence as input, and applies spatial decomposition, followed by temporal filtering to the frames. The resulting signal is then amplified to reveal hidden information. Using our method, we are able to visualize the flow of blood as it fills the face and also to amplify and reveal small motions. Our technique can run in real time to show phenomena occurring at the temporal frequencies selected by the user.
dc.description.sponsorship: United States. Defense Advanced Research Projects Agency (DARPA SCENICC program)
dc.description.sponsorship: National Science Foundation (U.S.) (NSF CGV-1111415)
dc.description.sponsorship: Quanta Computer (Firm)
dc.description.sponsorship: Nvidia Corporation (Graduate Fellowship)
dc.language.iso: en_US
dc.publisher: Association for Computing Machinery
dc.relation.isversionof: http://dx.doi.org/10.1145/2185520.2185561
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: MIT web domain
dc.title: Eulerian video magnification for revealing subtle changes in the world
dc.type: Article
dc.identifier.citation: Wu, Hao-Yu, Michael Rubinstein, Eugene Shih, John Guttag, Frédo Durand, and William Freeman. "Eulerian Video Magnification for Revealing Subtle Changes in the World." ACM Transactions on Graphics 31, no. 4 (July 1, 2012): 1–8.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.mitauthor: Wu, Hao-Yu
dc.contributor.mitauthor: Rubinstein, Michael
dc.contributor.mitauthor: Guttag, John V.
dc.contributor.mitauthor: Durand, Fredo
dc.contributor.mitauthor: Freeman, William T.
dc.relation.journal: ACM Transactions on Graphics
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: Wu, Hao-Yu; Rubinstein, Michael; Shih, Eugene; Guttag, John; Durand, Frédo; Freeman, William
dc.identifier.orcid: https://orcid.org/0000-0002-3707-3807
dc.identifier.orcid: https://orcid.org/0000-0003-0992-0906
dc.identifier.orcid: https://orcid.org/0000-0001-9919-069X
dc.identifier.orcid: https://orcid.org/0000-0002-2231-7995
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
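The abstract above describes a pipeline of spatial decomposition, temporal filtering, and amplification. The following is a minimal NumPy sketch of just the temporal-filter-and-amplify step on raw pixel intensities: an ideal Fourier-domain bandpass over the time axis of a frame stack, with the filtered signal scaled by a gain `alpha` and added back. The paper itself combines this with a spatial (pyramid) decomposition and offers other temporal filters; the function name, parameters, and synthetic "video" here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def magnify(frames, fps, f_lo, f_hi, alpha):
    # Ideal temporal bandpass in the Fourier domain (time axis = 0),
    # then linear amplification of the passed band, as in the Eulerian
    # approach. Spatial decomposition from the paper is omitted here.
    spectrum = np.fft.rfft(frames, axis=0)
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    bandpassed = np.fft.irfft(spectrum, n=frames.shape[0], axis=0)
    return frames + alpha * bandpassed

# Synthetic example (hypothetical data): a 4x4-pixel "video" at 16 fps
# whose intensity flickers at 1 Hz with a barely-visible amplitude of
# 0.01 around a 0.5 baseline.
fps, n = 16, 64
t = np.arange(n) / fps
flicker = 0.01 * np.sin(2 * np.pi * 1.0 * t)
frames = 0.5 + flicker[:, None, None] * np.ones((n, 4, 4))

# Pass 0.5-1.5 Hz and amplify by alpha=10: the 1 Hz flicker's
# amplitude grows by a factor of (1 + alpha).
out = magnify(frames, fps, f_lo=0.5, f_hi=1.5, alpha=10.0)
```

Because the bandpassed signal is added back to the original, the effective gain on in-band variation is `1 + alpha`, matching the user-selectable frequency band mentioned in the abstract.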

