dc.contributor.author | Isola, Phillip | |
dc.contributor.author | McDermott, Josh | |
dc.contributor.author | Adelson, Edward H. | |
dc.contributor.author | Freeman, William T. | |
dc.contributor.author | Torralba, Antonio | |
dc.contributor.author | Owens, Andrew Hale | |
dc.date.accessioned | 2017-12-08T17:59:29Z | |
dc.date.available | 2017-12-08T17:59:29Z | |
dc.date.issued | 2016-06 | |
dc.identifier.isbn | 978-1-4673-8851-1 | |
dc.identifier.issn | 1063-6919 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/112659 | |
dc.description.abstract | Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that they convey significant information about material properties and physical interactions. | en_US |
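The abstract above describes a pipeline in which a recurrent network predicts sound features from silent video and an example-based procedure converts those features into a waveform. The following is a minimal illustrative sketch of that idea only, not the authors' implementation: the feature dimensions, the LSTM predictor, and the nearest-neighbour lookup standing in for example-based synthesis are all assumptions made for the example.

# Illustrative sketch (assumed names and dimensions, not the paper's code):
# an RNN maps per-frame visual features to per-frame sound features, and a
# nearest-neighbour lookup over a bank of (sound-feature, waveform) pairs
# turns the predicted features back into audio.
import numpy as np
import torch
import torch.nn as nn


class SoundFeaturePredictor(nn.Module):
    """LSTM that predicts a sound-feature vector for each video frame."""

    def __init__(self, visual_dim=4096, hidden_dim=256, sound_dim=42):
        super().__init__()
        self.rnn = nn.LSTM(visual_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, sound_dim)

    def forward(self, frame_feats):            # (batch, time, visual_dim)
        h, _ = self.rnn(frame_feats)
        return self.head(h)                    # (batch, time, sound_dim)


def example_based_synthesis(pred_feats, bank_feats, bank_waveforms):
    """Return the waveform whose sound features are closest (L2) to the
    predicted sequence -- a simple stand-in for example-based retrieval."""
    dists = np.linalg.norm(bank_feats - pred_feats[None], axis=(1, 2))
    return bank_waveforms[int(np.argmin(dists))]


if __name__ == "__main__":
    # Toy data: 10 candidate sound exemplars, each aligned to 30 video frames.
    rng = np.random.default_rng(0)
    bank_feats = rng.standard_normal((10, 30, 42)).astype(np.float32)
    bank_waveforms = rng.standard_normal((10, 22050)).astype(np.float32)

    model = SoundFeaturePredictor()
    frames = torch.randn(1, 30, 4096)              # features of one silent video
    pred = model(frames).detach().numpy()[0]       # (30, 42) predicted features
    waveform = example_based_synthesis(pred, bank_feats, bank_waveforms)
    print(waveform.shape)                          # (22050,)

A real system would operate on richer audio features and match at a finer temporal granularity; the single whole-sequence lookup here is only meant to make the retrieval idea concrete.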
dc.description.sponsorship | National Science Foundation (U.S.) (grant 6924450) | en_US |
dc.description.sponsorship | National Science Foundation (U.S.) (grant 6926677) | en_US |
dc.description.sponsorship | Shell Oil Company | en_US |
dc.description.sponsorship | Microsoft Corporation | en_US |
dc.language.iso | en_US | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1109/CVPR.2016.264 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
dc.source | arXiv | en_US |
dc.title | Visually Indicated Sounds | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Owens, Andrew, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H. Adelson, and William T. Freeman. “Visually Indicated Sounds.” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016). © 2016 Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.contributor.mitauthor | Torralba, Antonio | |
dc.contributor.mitauthor | Owens, Andrew Hale | |
dc.relation.journal | IEEE Conference on Computer Vision and Pattern Recognition, 2016. CVPR 2016 | en_US |
dc.eprint.version | Original manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dspace.orderedauthors | Owens, Andrew; Isola, Phillip; McDermott, Josh; Torralba, Antonio; Adelson, Edward H.; Freeman, William T. | en_US |
dspace.embargo.terms | N | en_US |
dc.identifier.orcid | https://orcid.org/0000-0003-4915-0256 | |
dc.identifier.orcid | https://orcid.org/0000-0001-9020-9593 | |
mit.license | OPEN_ACCESS_POLICY | en_US |
mit.metadata.status | Complete | |