Show simple item record

dc.contributor.advisor: Joseph A. Paradiso
dc.contributor.author: Laibowitz, Matthew Joel, 1975-
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Architecture. Program in Media Arts and Sciences.
dc.date.accessioned: 2010-08-30T14:41:07Z
dc.date.available: 2010-08-30T14:41:07Z
dc.date.copyright: 2010
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/1721.1/57695
dc.description: Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.
dc.description: Page 232 blank. Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (p. 222-231).
dc.description.abstract: In today's digital era, elements of anyone's life can be captured, by themselves or others, and instantly broadcast. With little or no regulation on the proliferation of camera technology and the increasing use of video for social communication, entertainment, and education, we have undoubtedly entered the age of ubiquitous media. A world permeated by connected video devices promises a more democratized approach to mass-media culture, enabling anyone to create and distribute personalized content. While these advancements present a plethora of possibilities, they are not without potential negative effects, particularly with regard to privacy, ownership, and the general decrease in quality associated with minimal barriers to entry. This dissertation presents a first-of-its-kind research platform designed to investigate the world of ubiquitous video devices in order to confront inherent problems and create new media applications. This system takes a novel approach to the creation of user-generated, documentary video by augmenting a network of video cameras integrated into the environment with on-body sensing. The distributed video camera network can record the entire life of anyone within its coverage range, and it will be shown that it almost instantly records more audio and video than can be reviewed without prohibitive human-resource cost. This drives the need for a mechanism that automatically understands the raw audiovisual information in order to create a cohesive video output that is understandable, informative, and/or enjoyable to its human audience. We address this need with the SPINNER system. As humans, we are inherently able to transform disconnected occurrences and ideas into cohesive narratives as a method to understand, remember, and communicate meaning. The design of the SPINNER application and ubiquitous sensor platform is informed by research into narratology, that is, how stories are created from fragmented events. The SPINNER system maps low-level sensor data from the wearable sensors to higher-level social-signal and body-language information. This information is used to label the raw video data. The SPINNER system can then build a cohesive narrative by stitching together the appropriately labeled video segments. The results from three test runs are shown, each resulting in one or more automatically edited video pieces. The creation of these videos is evaluated through review by their intended audience and by comparing the system to a human trying to perform similar actions. In addition, the mapping of the wearable sensor data to meaningful information is evaluated by comparing the calculated results to those from human observation of the actual video.
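The stitching step described in the abstract — selecting sensor-labeled video segments and ordering them into a narrative — can be sketched as follows. This is an illustrative sketch only: the class and function names, the label vocabulary, and the single "interest" score are hypothetical and are not taken from the SPINNER implementation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds into the raw recording
    end: float
    label: str     # high-level label derived from wearable-sensor data
    interest: float  # hypothetical per-segment relevance score

def stitch_narrative(segments, arc):
    """For each beat of a simple narrative arc, pick the highest-scoring
    segment with a matching label, enforcing forward progress in time,
    and return the resulting edit list."""
    edit_list = []
    cursor = 0.0
    for beat in arc:
        candidates = [s for s in segments
                      if s.label == beat and s.start >= cursor]
        if not candidates:
            continue  # no usable footage for this beat; skip it
        best = max(candidates, key=lambda s: s.interest)
        edit_list.append(best)
        cursor = best.end
    return edit_list

segments = [
    Segment(0, 10, "greeting", 0.4),
    Segment(5, 15, "greeting", 0.7),
    Segment(16, 30, "animated-discussion", 0.9),
    Segment(40, 55, "farewell", 0.6),
]
cut = stitch_narrative(segments, ["greeting", "animated-discussion", "farewell"])
```

The key design point mirrored from the abstract is that editing decisions operate on sensor-derived labels, not on the raw audiovisual stream itself, which is what makes the selection tractable at scale.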
dc.description.statementofresponsibility: by Matthew Laibowitz.
dc.format.extent: 232 p.
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Architecture. Program in Media Arts and Sciences.
dc.title: Creating cohesive video with the narrative-informed use of ubiquitous wearable and imaging sensor networks
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
dc.identifier.oclc: 641266744

