
dc.contributor.author	Howard, Thomas
dc.contributor.author	Stump, Ethan
dc.contributor.author	Fink, Jonathan
dc.contributor.author	Arkin, Jacob
dc.contributor.author	Paul, Rohan
dc.contributor.author	Park, Daehyung
dc.contributor.author	Roy, Subhro
dc.contributor.author	Barber, Daniel
dc.contributor.author	Bendell, Rhyse
dc.contributor.author	Schmeckpeper, Karl
dc.contributor.author	Tian, Junjiao
dc.contributor.author	Oh, Jean
dc.contributor.author	Wigness, Maggie
dc.contributor.author	Quang, Long
dc.contributor.author	Rothrock, Brandon
dc.contributor.author	Nash, Jeremy
dc.contributor.author	Walter, Matthew
dc.contributor.author	Jentsch, Florian
dc.contributor.author	Roy, Nicholas
dc.date.accessioned	2022-09-20T16:56:23Z
dc.date.available	2022-09-20T16:56:23Z
dc.date.issued	2022
dc.identifier.uri	https://hdl.handle.net/1721.1/145529
dc.description.abstract	For humans and robots to collaborate effectively as teammates in unstructured environments, robots must be able to construct semantically rich models of the environment, communicate efficiently with teammates, and perform sequences of tasks robustly with minimal human intervention, as direct human guidance may be infrequent and/or intermittent. Contemporary architectures for human-robot interaction often rely on engineered human-interface devices or structured languages that require extensive prior training and inherently limit the kinds of information that humans and robots can communicate. Natural language, particularly when situated with a visual representation of the robot’s environment, allows humans and robots to exchange information about abstract goals, specific actions, and/or properties of the environment quickly and effectively. In addition, it serves as a mechanism to resolve inconsistencies in the mental models of the environment across the human-robot team. This article details a novel intelligence architecture that exploits a centralized representation of the environment to perform complex tasks in unstructured environments. The centralized environment model is informed by a visual perception pipeline, declarative knowledge, deliberate interactive estimation, and a multimodal interface. The language pipeline also exploits proactive symbol grounding to resolve uncertainty in ambiguous statements through inverse semantics. A series of experiments on three different unmanned ground vehicles demonstrates the utility of this architecture through its robust ability to perform language-guided spatial navigation, mobile manipulation, and bidirectional communication with human operators. Experimental results give examples of component-level behaviors and overall system performance that guide a discussion on observed performance and opportunities for future innovation.	en_US
dc.language.iso	en
dc.publisher	Field Robotics Publication Society	en_US
dc.relation.isversionof	10.55417/FR.2022017	en_US
dc.rights	Creative Commons Attribution 4.0 International license	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/	en_US
dc.source	Field Robotics	en_US
dc.title	An Intelligence Architecture for Grounded Language Communication with Field Robots	en_US
dc.type	Article	en_US
dc.identifier.citation	Howard, Thomas, Stump, Ethan, Fink, Jonathan, Arkin, Jacob, Paul, Rohan et al. 2022. "An Intelligence Architecture for Grounded Language Communication with Field Robots." Field Robotics, 2 (1).
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics	en_US
dc.relation.journal	Field Robotics	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2022-09-20T16:49:56Z
dspace.orderedauthors	Howard, T; Stump, E; Fink, J; Arkin, J; Paul, R; Park, D; Roy, S; Barber, D; Bendell, R; Schmeckpeper, K; Tian, J; Oh, J; Wigness, M; Quang, L; Rothrock, B; Nash, J; Walter, M; Jentsch, F; Roy, N	en_US
dspace.date.submission	2022-09-20T16:50:12Z
mit.journal.volume	2	en_US
mit.journal.issue	1	en_US
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US

