
dc.contributor.advisor: Barry L. Vercoe.
dc.contributor.author: Chai, Wei, 1972-
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Architecture. Program in Media Arts and Sciences
dc.date.accessioned: 2007-11-15T19:47:36Z
dc.date.available: 2007-11-15T19:47:36Z
dc.date.copyright: 2005
dc.date.issued: 2005
dc.identifier.uri: http://dspace.mit.edu/handle/1721.1/33878
dc.identifier.uri: http://hdl.handle.net/1721.1/33878
dc.description: Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
dc.description: Includes bibliographical references (p. 93-96).
dc.description.abstract: Listening to music and perceiving its structure is a fairly easy task for humans, even for listeners without formal musical training. For example, we can notice changes of notes, chords and keys, though we might not be able to name them (segmentation based on tonality and harmonic analysis); we can parse a musical piece into phrases or sections (segmentation based on recurrent structural analysis); we can identify and memorize the main themes or the catchiest parts - hooks - of a piece (summarization based on hook analysis); we can detect the most informative musical parts for making certain judgments (detection of salience for classification). However, building computational models to mimic these processes is a hard problem. Furthermore, the amount of digital music that has been generated and stored has already become unfathomable. How to efficiently store and retrieve the digital content is an important real-world problem. This dissertation presents our research on automatic music segmentation, summarization and classification using a framework combining music cognition, machine learning and signal processing. It will inquire scientifically into the nature of human perception of music, and offer a practical solution to difficult problems of machine intelligence for automatic musical content analysis and pattern discovery.
dc.description.abstract: (cont.) Specifically, for segmentation, an HMM-based approach will be used for key change and chord change detection; and a method for detecting the self-similarity property using approximate pattern matching will be presented for recurrent structural analysis. For summarization, we will investigate the locations where the catchiest parts of a musical piece normally appear and develop strategies for automatically generating music thumbnails based on this analysis. For musical salience detection, we will examine methods for weighting the importance of musical segments based on the confidence of classification. Two classification techniques and their definitions of confidence will be explored. The effectiveness of all our methods will be demonstrated by quantitative evaluations and/or human experiments on complex real-world musical stimuli.
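To illustrate the kind of approximate pattern matching the abstract mentions for recurrent structural analysis, the following is a minimal toy sketch (not the dissertation's actual method or data): a piece is represented as a sequence of symbolic chord labels, and approximate recurrences of an opening phrase are located with edit distance. All names and the chord sequence here are hypothetical.

```python
# Toy sketch of recurrent structural analysis via approximate pattern
# matching (illustrative only; the thesis works on audio features, not
# symbolic chord labels).

def edit_distance(a, b):
    # Classic dynamic-programming (Levenshtein) edit distance between
    # two sequences.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def find_repetitions(sequence, pattern, max_dist=1):
    """Return start indices where `pattern` recurs approximately."""
    k = len(pattern)
    hits = []
    for start in range(len(sequence) - k + 1):
        if edit_distance(sequence[start:start + k], pattern) <= max_dist:
            hits.append(start)
    return hits

# Hypothetical chord sequence; the opening phrase recurs with one change.
chords = ["C", "G", "Am", "F", "C", "G", "Am", "Fm", "D", "E"]
theme = chords[:4]
print(find_repetitions(chords, theme))  # [0, 4]
```

Allowing a small edit distance (rather than requiring exact matches) is what lets such a scan tolerate the variations that repeated sections of real music almost always contain.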
dc.description.statementofresponsibility: by Wei Chai.
dc.format.extent: 96 p.
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/33878
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Architecture. Program in Media Arts and Sciences
dc.title: Automated analysis of musical structure
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Dept. of Architecture. Program in Media Arts and Sciences
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 66464820


