| dc.contributor.author | Johnson, Micah K. | |
| dc.contributor.author | Dale, Kevin | |
| dc.contributor.author | Avidan, Shai | |
| dc.contributor.author | Pfister, Hanspeter | |
| dc.contributor.author | Freeman, William T. | |
| dc.contributor.author | Matusik, Wojciech | |
| dc.date.accessioned | 2011-11-29T21:07:16Z | |
| dc.date.available | 2011-11-29T21:07:16Z | |
| dc.date.issued | 2011-09 | |
| dc.date.submitted | 2010-06 | |
| dc.identifier.issn | 1077-2626 | |
| dc.identifier.issn | 1941-0506 | |
| dc.identifier.other | INSPEC Accession Number: 12157653 | |
| dc.identifier.uri | http://hdl.handle.net/1721.1/67310 | |
| dc.description.abstract | Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system uses only image-processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals. | en_US |
| dc.description.sponsorship | National Science Foundation (U.S.) (Grant No. PHY-0835713) | en_US |
| dc.description.sponsorship | National Science Foundation (U.S.) (Grant No. 0739255) | en_US |
| dc.description.sponsorship | Harvard University (John A. and Elizabeth S. Armstrong Fellowship) | en_US |
| dc.description.sponsorship | Adobe Systems | en_US |
| dc.language.iso | en_US | |
| dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
| dc.relation.isversionof | http://dx.doi.org/10.1109/tvcg.2010.233 | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | IEEE | en_US |
| dc.title | CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Johnson, Micah K. et al. “CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs.” IEEE Transactions on Visualization and Computer Graphics 17 (2011): 1273-1285. © 2011 IEEE. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.contributor.approver | Freeman, William T. | |
| dc.contributor.mitauthor | Freeman, William T. | |
| dc.contributor.mitauthor | Johnson, Micah K. | |
| dc.contributor.mitauthor | Matusik, Wojciech | |
| dc.relation.journal | IEEE Transactions on Visualization and Computer Graphics | en_US |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
| eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
| dspace.orderedauthors | Johnson, Micah K.; Dale, Kevin; Avidan, Shai; Pfister, Hanspeter; Freeman, William T.; Matusik, Wojciech | en |
| dc.identifier.orcid | https://orcid.org/0000-0003-0212-5643 | |
| dc.identifier.orcid | https://orcid.org/0000-0002-2231-7995 | |
| mit.license | PUBLISHER_POLICY | en_US |
| mit.metadata.status | Complete | |
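The abstract describes transferring color, tone, and texture from matched photo regions to a CG image. As a rough illustration of the color-transfer idea only, the sketch below performs a simple Reinhard-style per-channel mean/std matching between a CG patch and a photo patch; the function `transfer_color` and the toy patches are hypothetical and do not reproduce the paper's actual region-based algorithm.

```python
import numpy as np

def transfer_color(cg_region, real_region):
    """Match the per-channel mean and standard deviation of a CG region
    to those of a corresponding real-photo region (a simple statistic
    transfer; illustrative only, not the paper's method)."""
    cg = cg_region.astype(np.float64)
    real = real_region.astype(np.float64)
    # Per-channel statistics over all pixels in each region.
    cg_mu, cg_sigma = cg.mean(axis=(0, 1)), cg.std(axis=(0, 1))
    real_mu, real_sigma = real.mean(axis=(0, 1)), real.std(axis=(0, 1))
    # Recenter and rescale the CG statistics toward the photo's.
    out = (cg - cg_mu) / np.maximum(cg_sigma, 1e-8) * real_sigma + real_mu
    return np.clip(out, 0.0, 255.0)

# Toy example: a flat gray CG patch adopts the photo patch's statistics.
rng = np.random.default_rng(0)
cg = np.full((8, 8, 3), 128.0)
photo = rng.uniform(50, 200, size=(8, 8, 3))
result = transfer_color(cg, photo)
```

In the full system such transfers are applied per matched region (found via cosegmentation) rather than globally, which is what lets local textures and tones come from different photographs.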