
dc.contributor.author: Gan, Chuang
dc.contributor.author: Huang, Deng
dc.contributor.author: Zhao, Hang
dc.contributor.author: Tenenbaum, Joshua B
dc.contributor.author: Torralba, Antonio
dc.date.accessioned: 2021-04-06T16:27:33Z
dc.date.available: 2021-04-06T16:27:33Z
dc.date.issued: 2020-08
dc.date.submitted: 2020-06
dc.identifier.isbn: 9781728171685
dc.identifier.uri: https://hdl.handle.net/1721.1/130393
dc.description.abstract: Recent deep learning approaches have achieved impressive performance on visual sound separation tasks. However, these approaches are mostly built on appearance and optical-flow-like motion feature representations, which exhibit limited ability to find the correlations between audio signals and visual points, especially when separating multiple instruments of the same type, such as multiple violins in a scene. To address this, we propose "Music Gesture," a keypoint-based structured representation that explicitly models the body and finger movements of musicians as they perform music. We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals. Experimental results on three music performance datasets show: 1) strong improvements over benchmark metrics for hetero-musical separation tasks (i.e., different instruments); 2) a new capability for effective homo-musical separation of piano, flute, and trumpet duets, which to the best of our knowledge has not been achieved with alternative methods. [en_US]
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/cvpr42600.2020.01049 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: Music Gesture for Visual Sound Separation [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Gan, Chuang et al. "Music Gesture for Visual Sound Separation." 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2020, Seattle, Washington, Institute of Electrical and Electronics Engineers, August 2020. © 2020 IEEE [en_US]
dc.contributor.department: MIT-IBM Watson AI Lab [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition [en_US]
dc.eprint.version: Original manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2021-01-28T15:51:14Z
dspace.orderedauthors: Gan, C; Huang, D; Zhao, H; Tenenbaum, JB; Torralba, A [en_US]
dspace.date.submission: 2021-01-28T15:51:19Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete

