
dc.contributor.advisor: Tod Machover. [en_US]
dc.contributor.author: Nattinger, Elena Jessop [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Architecture. Program in Media Arts and Sciences. [en_US]
dc.date.accessioned: 2015-02-25T17:12:14Z
dc.date.available: 2015-02-25T17:12:14Z
dc.date.copyright: 2014 [en_US]
dc.date.issued: 2014 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/95589
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2014. [en_US]
dc.description: Cataloged from PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (pages 193-199). [en_US]
dc.description.abstract: Performing artists have frequently used technology to sense and extend the body's natural expressivity via live control of multimedia. However, the sophistication, emotional content, and variety of expression possible through the original physical channels of voice and movement are generally not captured or represented by these technologies and thus cannot be intuitively transferred from body to digital media. Additionally, relevant components of expression vary between different artists, performance pieces, and output modalities, such that any single model for describing movement and the voice cannot be meaningful in all contexts. This dissertation presents a new framework for flexible parametric abstraction of expression in vocal and physical performance, the Expressive Performance Extension Framework. This framework includes a set of questions and principles to guide the development of new extended performance works and systems for performance extension, particularly those incorporating machine learning techniques. Second, this dissertation outlines the design of a multi-layered computational workflow that uses machine learning for the analysis and recognition of expressive qualities of movement and voice. Third, it introduces a performance extension toolkit, the Expressive Performance Extension System, that integrates key aspects of the theoretical framework and computational workflow into live performance contexts. This system and these methodologies have been tested through the creation of three performance and installation works: a public installation extending expressive physical movement (the Powers Sensor Chair), an installation exploring the expressive voice (Vocal Vibrations), and a set of performances extending the voice and body (Crenulations and Excursions and Temporal Excursions). This work lays the groundwork for systems that can be true extensions of and complements to a live performance, by recognizing and responding to subtleties of timing, articulation, and expression that make each performance fundamentally unrepeatable and unique. [en_US]
dc.description.statementofresponsibility: by Elena Jessop Nattinger. [en_US]
dc.format.extent: 199 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Architecture. Program in Media Arts and Sciences. [en_US]
dc.title: The body parametric : abstraction of vocal and physical expression in performance [en_US]
dc.title.alternative: Abstraction of vocal and physical expression in performance [en_US]
dc.type: Thesis [en_US]
dc.description.degree: Ph. D. [en_US]
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 903653370 [en_US]

