| dc.contributor.author | Paul, Rohan | |
| dc.contributor.author | Feldman, Dan | |
| dc.contributor.author | Newman, Paul | |
| dc.contributor.author | Rus, Daniela L. | |
| dc.date.accessioned | 2016-01-29T00:23:44Z | |
| dc.date.available | 2016-01-29T00:23:44Z | |
| dc.date.issued | 2014-05 | |
| dc.identifier.isbn | 978-1-4799-3685-4 | |
| dc.identifier.uri | http://hdl.handle.net/1721.1/101028 | |
| dc.description.abstract | Given an image stream, our online algorithm selects the semantically important images that summarize the visual experience of a mobile robot. Our approach consists of data pre-clustering using coresets, followed by a graph-based incremental clustering procedure using a topic-based image representation. A coreset for an image stream is a set of representative images that semantically compresses the data corpus, in the sense that every frame has a similar representative image in the coreset. We prove that our algorithm efficiently computes the smallest possible coreset under a natural, well-defined similarity metric and up to a provably small approximation factor. The output visual summary is computed via a hierarchical tree of coresets for different parts of the image stream. This allows multi-resolution summarization (or a video summary of specified duration) in the batch setting and a memory-efficient incremental summary in the streaming case. | en_US |
| dc.description.sponsorship | Singapore-MIT Alliance for Research and Technology Center (Future Urban Mobility Project) | en_US |
| dc.description.sponsorship | Foxconn International Holdings Ltd. | en_US |
| dc.description.sponsorship | Singapore. National Research Foundation | en_US |
| dc.description.sponsorship | United States. Office of Naval Research. Multidisciplinary University Research Initiative (Grant N00014-09-1-1051) | en_US |
| dc.description.sponsorship | United States. Office of Naval Research. Multidisciplinary University Research Initiative (Grant N00014-09-1-1031) | en_US |
| dc.description.sponsorship | National Science Foundation (U.S.) (Award IIS-1117178) | en_US |
| dc.language.iso | en_US | |
| dc.relation.isversionof | http://dx.doi.org/10.1109/ICRA.2014.6907021 | en_US |
| dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | MIT web domain | en_US |
| dc.title | Visual precis generation using coresets | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Paul, Rohan, Dan Feldman, Daniela Rus, and Paul Newman. “Visual Precis Generation Using Coresets.” 2014 IEEE International Conference on Robotics and Automation (ICRA) (May 2014). | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
| dc.contributor.mitauthor | Rus, Daniela L. | en_US |
| dc.contributor.mitauthor | Feldman, Dan | en_US |
| dc.relation.journal | Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA) | en_US |
| dc.eprint.version | Author's final manuscript | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dspace.orderedauthors | Paul, Rohan; Feldman, Dan; Rus, Daniela; Newman, Paul | en_US |
| dc.identifier.orcid | https://orcid.org/0000-0001-5473-3566 | |
| mit.license | OPEN_ACCESS_POLICY | en_US |