dc.contributor.author | Vlasic, Daniel | |
dc.contributor.author | Baran, Ilya | |
dc.contributor.author | Matusik, Wojciech | |
dc.contributor.author | Popović, Jovan | |
dc.date.accessioned | 2015-12-14T23:12:16Z | |
dc.date.available | 2015-12-14T23:12:16Z | |
dc.date.issued | 2008-08 | |
dc.identifier.issn | 0730-0301 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/100254 | |
dc.description.abstract | Details in mesh animations are difficult to generate, but they have a great impact on visual quality. In this work, we demonstrate a practical software system for capturing such details from multi-view video recordings. Given a stream of synchronized video images that record a human performance from multiple viewpoints, together with an articulated template of the performer, our system captures the motion of both the skeleton and the shape. The output mesh animation is enhanced with the details observed in the image silhouettes. For example, a performance in casual, loose-fitting clothes will generate mesh animations with flowing garment motions. We accomplish this with a fast pose tracking method followed by nonrigid deformation of the template to fit the silhouettes. The entire process takes less than sixteen seconds per frame and requires no markers or texture cues. Captured meshes are in full correspondence, making them readily usable for editing operations including texturing, deformation transfer, and deformation model learning. | en_US |
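Note: the abstract above outlines a two-stage per-frame pipeline: skeletal pose tracking of an articulated template, followed by nonrigid deformation of the template mesh toward the observed multi-view silhouettes. The Python sketch below only illustrates that control flow under stated assumptions; the helper names (track_pose, deform_to_silhouettes, capture_sequence), the TemplateMesh type, and the use of linear blend skinning as the posing model are hypothetical and are not taken from the paper or this record.

# Illustrative sketch only: a per-frame capture loop in the spirit of the
# pipeline described in the abstract (pose tracking, then nonrigid
# silhouette fitting). All names and the toy math are assumptions,
# not the authors' method.
from dataclasses import dataclass
import numpy as np

@dataclass
class TemplateMesh:
    vertices: np.ndarray          # (V, 3) rest-pose vertex positions
    skinning_weights: np.ndarray  # (V, J) weights binding vertices to joints

def track_pose(template, silhouettes, prev_pose):
    # Hypothetical stand-in for skeletal pose tracking against the
    # multi-view silhouettes; here it simply returns identity transforms.
    num_joints = template.skinning_weights.shape[1]
    return np.tile(np.eye(4), (num_joints, 1, 1))

def skin(template, pose):
    # Linear blend skinning of the template with the tracked joint transforms
    # (an assumed posing model, not necessarily the paper's).
    homo = np.hstack([template.vertices,
                      np.ones((len(template.vertices), 1))])      # (V, 4)
    per_joint = np.einsum('jab,vb->vja', pose, homo)[..., :3]     # (V, J, 3)
    return np.einsum('vj,vja->va', template.skinning_weights, per_joint)

def deform_to_silhouettes(posed_vertices, silhouettes):
    # Hypothetical nonrigid refinement step; a real system would pull
    # vertices toward silhouette constraints. Here it is a no-op.
    return posed_vertices

def capture_sequence(template, frames):
    # frames: list of per-frame multi-view silhouette sets.
    meshes, pose = [], None
    for silhouettes in frames:
        pose = track_pose(template, silhouettes, pose)
        posed = skin(template, pose)
        meshes.append(deform_to_silhouettes(posed, silhouettes))
    return meshes  # every mesh reuses the template's connectivity

With real tracking and silhouette-fitting routines substituted for the two stubs, the outer loop already reflects the property highlighted in the abstract: each output frame reuses the template's vertex connectivity, so the captured meshes are in full correspondence.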
dc.description.sponsorship | National Science Foundation (U.S.) (CCF-0541227) | en_US |
dc.description.sponsorship | National Science Foundation (U.S.). Graduate Research Fellowship | en_US |
dc.description.sponsorship | Adobe Systems | en_US |
dc.description.sponsorship | Pixar (Firm) | en_US |
dc.language.iso | en_US | |
dc.publisher | Association for Computing Machinery (ACM) | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1145/1360612.1360696 | en_US |
dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
dc.source | MIT web domain | en_US |
dc.subject | Singapore-MIT Gambit Game Lab | en_US |
dc.title | Articulated mesh animation from multi-view silhouettes | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Vlasic, Daniel, Ilya Baran, Wojciech Matusik, and Jovan Popović. “Articulated Mesh Animation from Multi-View Silhouettes.” ACM Transactions on Graphics 27, no. 3 (August 1, 2008): 1. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.contributor.mitauthor | Vlasic, Daniel | en_US |
dc.contributor.mitauthor | Baran, Ilya | en_US |
dc.contributor.mitauthor | Matusik, Wojciech | en_US |
dc.contributor.mitauthor | Popović, Jovan | en_US |
dc.relation.journal | ACM Transactions on Graphics | en_US |
dc.eprint.version | Author's final manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dspace.orderedauthors | Vlasic, Daniel; Baran, Ilya; Matusik, Wojciech; Popović, Jovan | en_US |
dc.identifier.orcid | https://orcid.org/0000-0003-0212-5643 | |
mit.license | PUBLISHER_POLICY | en_US |
mit.metadata.status | Complete | |