Show simple item record

dc.contributor.author: Reben, Alexander James
dc.contributor.author: Paradiso, Joseph A
dc.date.accessioned: 2013-09-12T20:19:14Z
dc.date.available: 2013-09-12T20:19:14Z
dc.date.issued: 2011
dc.identifier.isbn: 9781450306164
dc.identifier.uri: http://hdl.handle.net/1721.1/80698
dc.description.abstract: Documentaries are typically captured in a very structured way, using teams to film and interview people. We developed an autonomous method for capturing structured cinéma vérité style documentaries through an interactive robotic camera, which was used as a mobile physical agent to facilitate interaction and story gathering within a ubiquitous media framework. We sent this robot out to autonomously gather human narrative about its environment. The robot had a specific story capture goal and leveraged humans to attain that goal. The robot collected a first-person view of stories unfolding in real life, and as it engaged with its subjects via a preset dialog, these media clips were intrinsically structured. We evaluated this agent by determining "complete" vs. "incomplete" interactions. "Complete" interactions were those that generated viable and interesting videos, which could be edited together into a larger narrative. It was found that 30% of the interactions captured were "complete" interactions. Our results suggested that changes in the system would produce only incrementally more "complete" interactions, as external factors like the natural bias or busyness of the user come into play. The types of users who encountered the robot were fairly polar: either they wanted to interact or they did not, and very few partial interactions went on for more than 1 minute. Users who partially interacted with the robot were found to treat it more roughly than those who completed the full interaction. It was also determined that this type of limited-interaction system is best suited for short-term encounters. At the end of the study, a short cinéma vérité documentary showcasing the people and activity in our building was easily produced from the structured videos that were captured, indicating the utility of this approach. [en_US]
dc.description.sponsorship: Massachusetts Institute of Technology. Media Laboratory [en_US]
dc.language.iso: en_US
dc.publisher: Association for Computing Machinery (ACM) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1145/2072298.2071902 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike 3.0 [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/ [en_US]
dc.source: MIT Web Domain [en_US]
dc.title: A mobile interactive robot for gathering structured social video [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Alexander Reben and Joseph Paradiso. 2011. A mobile interactive robot for gathering structured social video. In Proceedings of the 19th ACM international conference on Multimedia (MM '11). ACM, New York, NY, USA, 917-920. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory [en_US]
dc.contributor.mitauthor: Reben, Alexander James [en_US]
dc.contributor.mitauthor: Paradiso, Joseph A. [en_US]
dc.relation.journal: Proceedings of the 19th ACM international conference on Multimedia - MM '11 [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: Reben, Alexander; Paradiso, Joseph [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-0719-7104
mit.license: OPEN_ACCESS_POLICY [en_US]
mit.metadata.status: Complete

