Show simple item record

dc.contributor.author: Bando, Yosuke
dc.contributor.author: Raskar, Ramesh
dc.contributor.author: Holtzman, Henry N.
dc.date.accessioned: 2013-08-21T18:41:54Z
dc.date.available: 2013-08-21T18:41:54Z
dc.date.issued: 2013-04
dc.date.submitted: 2012-08
dc.identifier.issn: 0730-0301
dc.identifier.uri: http://hdl.handle.net/1721.1/79901
dc.description.abstract: Recently, several camera designs have been proposed for either making defocus blur invariant to scene depth or making motion blur invariant to object motion. The benefit of such invariant capture is that no depth or motion estimation is required to remove the resultant spatially uniform blur. So far, the techniques have been studied separately for defocus and motion blur, and object motion has been assumed 1D (e.g., horizontal). This article explores a more general capture method that makes both defocus blur and motion blur nearly invariant to scene depth and in-plane 2D object motion. We formulate the problem as capturing a time-varying light field through a time-varying light field modulator at the lens aperture, and perform 5D (4D light field + 1D time) analysis of all the existing computational cameras for defocus/motion-only deblurring and their hybrids. This leads to a surprising conclusion that focus sweep, previously known as a depth-invariant capture method that moves the plane of focus through a range of scene depth during exposure, is near-optimal both in terms of depth and 2D motion invariance and in terms of high-frequency preservation for certain combinations of depth and motion ranges. Using our prototype camera, we demonstrate joint defocus and motion deblurring for moving scenes with depth variation. [en_US]
dc.language.iso: en_US
dc.publisher: Association for Computing Machinery (ACM) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1145/2451236.2451239 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike 3.0 [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/ [en_US]
dc.source: MIT Web Domain [en_US]
dc.title: Near-invariant blur for depth and 2D motion via time-varying light field analysis [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Yosuke Bando, Henry Holtzman, and Ramesh Raskar. 2013. Near-invariant blur for depth and 2D motion via time-varying light field analysis. ACM Trans. Graph. 32, 2, Article 13 (April 2013), 15 pages. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory [en_US]
dc.contributor.department: Program in Media Arts and Sciences (Massachusetts Institute of Technology) [en_US]
dc.contributor.mitauthor: Bando, Yosuke [en_US]
dc.contributor.mitauthor: Holtzman, Henry N. [en_US]
dc.contributor.mitauthor: Raskar, Ramesh [en_US]
dc.relation.journal: ACM Transactions on Graphics [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: Bando, Yosuke; Holtzman, Henry; Raskar, Ramesh [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-9303-3658
dc.identifier.orcid: https://orcid.org/0000-0002-3254-3224
mit.license: OPEN_ACCESS_POLICY [en_US]
mit.metadata.status: Complete

