Show simple item record

dc.contributor.author: Oh, Tae-Hyun
dc.contributor.author: Jaroensri, Ronnachai
dc.contributor.author: Kim, Changil
dc.contributor.author: Elgharib, Mohamed
dc.contributor.author: Durand, Frédo
dc.contributor.author: Freeman, William T.
dc.contributor.author: Matusik, Wojciech
dc.date.accessioned: 2021-11-05T13:45:08Z
dc.date.available: 2021-11-05T13:45:08Z
dc.date.issued: 2018
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.uri: https://hdl.handle.net/1721.1/137455
dc.description.abstract: © 2018, Springer Nature Switzerland AG. Video motion magnification techniques allow us to see small motions previously invisible to the naked eye, such as those of vibrating airplane wings or buildings swaying in the wind. Because the motion is small, the magnification results are prone to noise or excessive blurring. The state of the art relies on hand-designed filters to extract representations that may not be optimal. In this paper, we seek to learn the filters directly from examples using deep convolutional neural networks. To make training tractable, we carefully design a synthetic dataset that captures small motion well, and use two-frame input for training. We show that the learned filters achieve high-quality results on real videos, with fewer ringing artifacts and better noise characteristics than previous methods. While our model is not trained with temporal filters, we found that temporal filters can be used with our extracted representations up to a moderate magnification factor, enabling frequency-based motion selection. Finally, we analyze the learned filters and show that they behave similarly to the derivative filters used in previous works. Our code, trained model, and datasets will be available online. en_US
dc.language.iso: en
dc.publisher: Springer International Publishing en_US
dc.relation.isversionof: 10.1007/978-3-030-01225-0_39 en_US
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike en_US
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ en_US
dc.source: MIT web domain en_US
dc.title: Learning-Based Video Motion Magnification en_US
dc.type: Article en_US
dc.identifier.citation: Oh, Tae-Hyun, Jaroensri, Ronnachai, Kim, Changil, Elgharib, Mohamed, Durand, Frédo et al. 2018. "Learning-Based Video Motion Magnification."
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory en_US
dc.eprint.version: Author's final manuscript en_US
dc.type.uri: http://purl.org/eprint/type/ConferencePaper en_US
eprint.status: http://purl.org/eprint/status/NonPeerReviewed en_US
dc.date.updated: 2019-05-28T12:31:35Z
dspace.date.submission: 2019-05-28T12:31:37Z
mit.metadata.status: Authority Work and Publication Information Needed en_US
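The two-frame magnification idea described in the abstract can be sketched with a toy linear model. This is not the authors' trained network: `encode` and `decode` below are identity placeholders standing in for the learned convolutional filters, and `magnify` simply amplifies the change in representation between the two input frames.

```python
import numpy as np

def encode(frame):
    # Placeholder for the learned convolutional encoder: here, identity.
    return frame.astype(np.float64)

def decode(rep):
    # Placeholder for the learned decoder: here, identity.
    return rep

def magnify(frame_a, frame_b, alpha):
    """Amplify the change between two frames by a factor alpha
    in the (here trivial) representation space."""
    ra, rb = encode(frame_a), encode(frame_b)
    return decode(ra + alpha * (rb - ra))

# Toy example: a bright spot shifts by one pixel; amplify the change 5x.
a = np.zeros(8); a[3] = 1.0
b = np.zeros(8); b[4] = 1.0
out = magnify(a, b, alpha=5.0)
print(out)
```

With a learned, motion-sensitive representation in place of the identity placeholders, amplifying the representation difference magnifies the motion rather than simple intensity change; that is the role the paper's deep convolutional filters play.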

