Show simple item record

dc.contributor.advisor	Trevor Darrell and John W. Fisher.	en_US
dc.contributor.author	Siracusa, Michael Richard, 1980-	en_US
dc.contributor.other	Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2006-03-24T18:27:20Z
dc.date.available	2006-03-24T18:27:20Z
dc.date.copyright	2004	en_US
dc.date.issued	2005	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/30182
dc.description	Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005.	en_US
dc.description	Includes bibliographical references (p. 183-186).	en_US
dc.description.abstract	Currently, most dialog systems are restricted to single-user environments. This thesis aims to promote an untethered multi-person dialog system by exploring approaches to help solve the speech correspondence problem (i.e., who, if anyone, is currently speaking). We adopt a statistical framework in which this problem is put in the form of a hypothesis test and focus on the subtask of discriminating between associated and non-associated audio-visual observations. Various methods for modeling our audio-visual observations and ways of carrying out this test are studied, and their relative performance is compared. We discuss issues that arise from the inherently high-dimensional nature of audio-visual data and address them by exploring different techniques for finding low-dimensional informative subspaces in which we can perform our hypothesis tests. We study our ability to learn a person-specific as well as a generic model for measuring audio-visual association and evaluate performance on multiple subjects taken from MIT's AVTIMIT database.	en_US
dc.description.statementofresponsibility	by Michael Richard Siracusa.	en_US
dc.format.extent	186 p.	en_US
dc.format.extent	8400105 bytes
dc.format.extent	8423575 bytes
dc.format.mimetype	application/pdf
dc.format.mimetype	application/pdf
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	Statistical modeling and analysis of audio-visual association in speech	en_US
dc.type	Thesis	en_US
dc.description.degree	S.M.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc	60679852	en_US

