
dc.contributor.advisor: Barry L. Vercoe
dc.contributor.author: Scheirer, Eric David
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Architecture
dc.date.accessioned: 2006-02-02T18:47:03Z
dc.date.available: 2006-02-02T18:47:03Z
dc.date.copyright: 2000
dc.date.issued: 2000
dc.identifier.uri: http://hdl.handle.net/1721.1/31091
dc.description: Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2000.
dc.description: Includes bibliographical references (p. [235]-248).
dc.description.abstract: When human listeners are confronted with musical sounds, they rapidly and automatically orient themselves in the music. Even musically untrained listeners have an exceptional ability to make rapid judgments about music from very short examples, such as determining the music's style, performer, beat, complexity, and emotional impact. However, there are presently no theories of music perception that can explain this behavior, and it has proven very difficult to build computer music-analysis tools with similar capabilities.

This dissertation examines the psychoacoustic origins of the early stages of music listening in humans, using both experimental and computer-modeling approaches. The results of this research enable the construction of automatic machine-listening systems that can make human-like judgments about short musical stimuli. New models are presented that explain the perception of musical tempo, the perceived segmentation of sound scenes into multiple auditory images, and the extraction of musical features from complex musical sounds. These models are implemented as signal-processing and pattern-recognition computer programs, using the principle of understanding without separation. Two experiments with human listeners study the rapid assignment of high-level judgments to musical stimuli, and it is demonstrated that many of the experimental results can be explained with a multiple-regression model on the extracted musical features.

From a theoretical standpoint, the thesis shows how theories of music perception can be grounded in a principled way upon psychoacoustic models in a computational-auditory-scene-analysis framework. Further, the perceptual theory presented is more relevant to everyday listeners and situations than are previous cognitive-structuralist approaches to music perception and cognition. From a practical standpoint, the various models form a set of computer signal-processing and pattern-recognition tools that can mimic human perceptual abilities on a variety of musical tasks, such as tapping along with the beat, parsing music into sections, making semantic judgments about musical examples, and estimating the similarity of two pieces of music.

(An illustrative sketch of a tempo estimator in the spirit of these models follows this record.)
dc.description.statementofresponsibility: Eric D. Scheirer
dc.format.extent: 248 p.
dc.format.extent: 20993009 bytes
dc.format.extent: 21025669 bytes
dc.format.mimetype: application/pdf
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Architecture
dc.title: Music-listening systems
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Architecture
dc.identifier.oclc: 47799227
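
The abstract above mentions a model of musical tempo perception implemented as a signal-processing program. As a rough illustration only, here is a minimal Python sketch of a resonator-bank (comb-filter) tempo estimator in that spirit; the onset-detection front end, parameter values, and function names are assumptions made for this sketch, not the implementation described in the thesis.

    # Illustrative sketch only: a comb-filter tempo estimator in the spirit
    # of the tempo-perception model the abstract mentions. All names and
    # parameter choices here are assumptions, not taken from the thesis.
    import numpy as np

    def onset_envelope(x, sr, hop=512):
        """Crude onset signal: half-wave-rectified frame-energy difference."""
        x = np.asarray(x, dtype=float)
        n_frames = len(x) // hop
        frames = x[:n_frames * hop].reshape(n_frames, hop)
        energy = (frames ** 2).sum(axis=1)
        diff = np.diff(energy, prepend=energy[0])
        return np.maximum(diff, 0.0), sr / hop      # envelope, frames/second

    def estimate_tempo(x, sr, bpm_lo=60, bpm_hi=180):
        """Drive one comb filter per candidate tempo; return the best BPM.

        Each filter's feedback gain is chosen so all resonators share the
        same half-energy time (~1.5 s here), keeping their output energies
        roughly comparable across lags.
        """
        env, frame_rate = onset_envelope(x, sr)
        best_bpm, best_score = None, -np.inf
        for bpm in range(bpm_lo, bpm_hi + 1):
            lag = int(round(frame_rate * 60.0 / bpm))   # beat period in frames
            if lag < 1 or lag >= len(env):
                continue
            alpha = 0.5 ** (lag / (frame_rate * 1.5))   # equal half-energy time
            y = np.zeros_like(env)
            for n in range(len(env)):                   # y[n] = a*y[n-lag] + (1-a)*env[n]
                fb = y[n - lag] if n >= lag else 0.0
                y[n] = alpha * fb + (1.0 - alpha) * env[n]
            score = float((y ** 2).sum())
            if score > best_score:
                best_bpm, best_score = bpm, score
        return best_bpm

On strongly metrical input this estimator should land near the notated tempo, though comb-filter banks are known to make tempo-octave confusions (e.g., reporting 60 BPM for a 120 BPM click track). The models described in the abstract operate on multiple subband envelopes and also track beat phase so a system can tap along with the beat; this single-envelope sketch omits both.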

