Show simple item record

dc.contributor.author: Velez, Javier J.
dc.contributor.author: Huang, Albert S.
dc.contributor.author: Hemann, Garrett A.
dc.contributor.author: Roy, Nicholas
dc.contributor.author: Posner, Ingmar
dc.date.accessioned: 2012-12-14T16:15:18Z
dc.date.available: 2012-12-14T16:15:18Z
dc.date.issued: 2012-07
dc.date.submitted: 2011-10
dc.identifier.issn: 1943-5037
dc.identifier.issn: 1076-9757
dc.identifier.uri: http://hdl.handle.net/1721.1/75732
dc.description.abstract: Today, mobile robots are expected to carry out increasingly complex tasks in multifarious, real-world environments. Often, the tasks require a certain semantic understanding of the workspace. Consider, for example, spoken instructions from a human collaborator referring to objects of interest; the robot must be able to accurately detect these objects to correctly understand the instructions. However, existing object detection, while competent, is not perfect. In particular, the performance of detection algorithms is commonly sensitive to the position of the sensor relative to the objects in the scene. This paper presents an online planning algorithm which learns an explicit model of the spatial dependence of object detection and generates plans which maximize the expected performance of the detection, and by extension the overall plan performance. Crucially, the learned sensor model incorporates spatial correlations between measurements, capturing the fact that successive measurements taken at the same or nearby locations are not independent. We show how this sensor model can be incorporated into an efficient forward search algorithm in the information space of detected objects, allowing the robot to generate motion plans efficiently. We investigate the performance of our approach by addressing the tasks of door and text detection in indoor environments and demonstrate significant improvement in detection performance during task execution over alternative methods in simulated and real robot experiments. (en_US)
dc.language.iso: en_US
dc.publisher: AI Access Foundation (en_US)
dc.relation.isversionof: http://dx.doi.org/10.1613/jair.3516 (en_US)
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. (en_US)
dc.source: AI Access Foundation (en_US)
dc.title: Modelling Observation Correlations for Active Exploration and Robust Object Detection (en_US)
dc.type: Article (en_US)
dc.identifier.citation: J. Velez, G. Hemann, A. S. Huang, I. Posner and N. Roy (2012) Modelling Observation Correlations for Active Exploration and Robust Object Detection. © Copyright 2012 AI Access Foundation, Inc. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.contributor.mitauthor: Velez, Javier J.
dc.contributor.mitauthor: Huang, Albert S.
dc.contributor.mitauthor: Hemann, Garrett A.
dc.contributor.mitauthor: Roy, Nicholas
dc.relation.journal: Journal of Artificial Intelligence Research (en_US)
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/JournalArticle (en_US)
eprint.status: http://purl.org/eprint/status/PeerReviewed (en_US)
dc.identifier.orcid: https://orcid.org/0000-0002-8293-0492
dspace.mitauthor.error: true
mit.license: PUBLISHER_POLICY (en_US)
mit.metadata.status: Complete


