dc.contributor.author | Mandelblat-Cerf, Yael | |
dc.contributor.author | Fee, Michale S. | |
dc.date.accessioned | 2014-07-01T18:47:00Z | |
dc.date.available | 2014-07-01T18:47:00Z | |
dc.date.issued | 2014-05 | |
dc.date.submitted | 2013-11 | |
dc.identifier.issn | 1932-6203 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/88170 | |
dc.description.abstract | Songbirds have emerged as an excellent model system for understanding the neural basis of vocal and motor learning. Like humans, songbirds learn to imitate the vocalizations of their parents or other conspecific “tutors.” Young songbirds learn by comparing their own vocalizations to the memory of their tutor song, slowly improving until, over the course of several weeks, they achieve an excellent imitation of the tutor. Because of the slow progression of vocal learning, and the large amounts of singing generated, automated algorithms for quantifying vocal imitation have become increasingly important for studying the mechanisms underlying this process. However, quantifying song imitation is complicated by the highly variable songs of juvenile birds and of birds that learn poorly because of experimental manipulations. Here we present a method for evaluating song imitation that incorporates two innovations: first, an automated procedure for selecting pupil song segments; and second, a new algorithm, implemented in MATLAB, for computing both song acoustic and sequence similarity. We tested our procedure on zebra finch song and determined a set of acoustic features for which the algorithm optimally differentiates between similar and non-similar songs. | en_US |
dc.description.sponsorship | National Institutes of Health (U.S.) (R01 MH067105) | en_US |
dc.language.iso | en_US | |
dc.publisher | Public Library of Science | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1371/journal.pone.0096484 | en_US |
dc.rights | Creative Commons Attribution | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en_US |
dc.source | Public Library of Science | en_US |
dc.title | An Automated Procedure for Evaluating Song Imitation | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Mandelblat-Cerf, Yael, and Michale S. Fee. “An Automated Procedure for Evaluating Song Imitation.” Edited by Johan J. Bolhuis. PLoS ONE 9, no. 5 (May 8, 2014): e96484. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences | en_US |
dc.contributor.department | McGovern Institute for Brain Research at MIT | en_US |
dc.contributor.mitauthor | Fee, Michale S. | en_US |
dc.contributor.mitauthor | Mandelblat-Cerf, Yael | en_US |
dc.relation.journal | PLoS ONE | en_US |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
dspace.orderedauthors | Mandelblat-Cerf, Yael; Fee, Michale S. | en_US |
dc.identifier.orcid | https://orcid.org/0000-0001-7539-1745 | |
mit.license | PUBLISHER_CC | en_US |
mit.metadata.status | Complete | |