| dc.contributor.author | Narasimhan, Karthik Rajagopal | |
| dc.contributor.author | Yala, Adam | |
| dc.contributor.author | Barzilay, Regina | |
| dc.date.accessioned | 2016-11-16T21:30:54Z | |
| dc.date.available | 2016-11-16T21:30:54Z | |
| dc.date.issued | 2016-11 | |
| dc.identifier.uri | http://hdl.handle.net/1721.1/105337 | |
| dc.description.abstract | Most successful information extraction systems operate with access to a large collection of documents. In this work, we explore the task of acquiring and incorporating external evidence to improve extraction accuracy in domains where the amount of training data is scarce. This process entails issuing search queries, extraction from new sources, and reconciliation of extracted values, which are repeated until sufficient evidence is collected. We approach the problem using a reinforcement learning framework where our model learns to select optimal actions based on contextual information. We employ a deep Q-network, trained to optimize a reward function that reflects extraction accuracy while penalizing extra effort. Our experiments on two databases--of shooting incidents and food adulteration cases--demonstrate that our system significantly outperforms traditional extractors and a competitive meta-classifier baseline. | en_US |
| dc.description.sponsorship | Google (Firm) (Google Research Faculty Award) | en_US |
| dc.language.iso | en_US | |
| dc.publisher | Association for Computational Linguistics (ACL) | en_US |
| dc.relation.isversionof | http://www.emnlp2016.net/accepted-papers.html | en_US |
| dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | Narasimhan | en_US |
| dc.title | Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Narasimhan, Karthik, Adam Yala, and Regina Barzilay. "Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning." In EMNLP 2016: Conference on Empirical Methods in Natural Language Processing, November 1-5, 2016, Austin, Texas, USA. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.contributor.approver | Narasimhan, Karthik Rajagopal | en_US |
| dc.contributor.mitauthor | Narasimhan, Karthik Rajagopal | |
| dc.contributor.mitauthor | Yala, Adam | |
| dc.contributor.mitauthor | Barzilay, Regina | |
| dc.relation.journal | Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 | en_US |
| dc.eprint.version | Author's final manuscript | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dspace.orderedauthors | Narasimhan, Karthik; Yala, Adam; Barzilay, Regina | en_US |
| dspace.embargo.terms | N | en_US |
| dc.identifier.orcid | https://orcid.org/0000-0001-9894-9983 | |
| dc.identifier.orcid | https://orcid.org/0000-0002-2921-8201 | |
| mit.license | OPEN_ACCESS_POLICY | en_US |