dc.contributor.advisor | Henry A. Lieberman. | en_US |
dc.contributor.author | Smith, Dustin Arthur | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Architecture. Program in Media Arts and Sciences. | en_US |
dc.date.accessioned | 2014-11-24T18:40:24Z | |
dc.date.available | 2014-11-24T18:40:24Z | |
dc.date.copyright | 2013 | en_US |
dc.date.issued | 2013 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/91857 | |
dc.description | Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2013. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages [105]-111). | en_US |
dc.description.abstract | Referring expressions with vague and ambiguous modifiers, such as "a quick visit" and "the big meeting," are difficult for computers to interpret because their meanings are defined in part by context. For the hearer to arrive at the speaker's intended meaning, he must consider the alternative decisions that the speaker faced in that context. To address these challenges, I propose a new approach to both generating and interpreting referring expressions, based on belief-state planning and plan recognition. Planning in belief space offers a way to capture referential uncertainty and the incremental nature of generation and interpretation, because each belief state represents a complete interpretation. The contributions of my thesis are as follows: (1) A computational model of reference generation and interpretation that is fast, incremental, and non-deterministic. This model includes a lexical semantics for a fragment of English noun phrases, which specifies the encoded meanings of determiners (quantifiers and articles) and of gradable and ambiguous modifiers. It performs in real time, even when the hypothesis space grows very large, and because it is incremental, it avoids considering possibilities that will later turn out to be irrelevant. (2) The integration of generation and interpretation into a single process. Interpretation is guided by comparison to alternatives produced by the generation module: when faced with an underspecified description, the system compares what it could have said to what the user did say. Reasoning about alternative decisions facilitates inferences of this sort: "She ate some of the tuna" implies that she did not eat all of it; otherwise, the speaker would have said, "She ate the tuna." This approach has been implemented and evaluated in a computational model, AIGRE. I also created a testbed for comparing human judgments of referring expressions to those produced by our algorithm (or other algorithms). In an online user experiment with Mechanical Turk, we attained 94% coverage of human responses in a simple geometrical domain, as well as lower, but still encouraging, coverage in a more complex, real-world domain. AIGRE demonstrates that managing vagueness and ambiguity in natural language, while still not easy, is nevertheless possible. The day when we will routinely talk to our computers in unconstrained natural language is not far off. | en_US |
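As a purely illustrative aid to the abstract above (not code from the thesis), the following minimal Python sketch shows one way an incremental, context-relative interpreter for a gradable modifier might work: the candidate set is narrowed word by word, and the meaning of "big" is resolved relative to whatever candidates remain at that point. All names here (Candidate, interpret, the mean-size threshold) are hypothetical and are not drawn from AIGRE.

```python
# Illustrative sketch only: incremental interpretation of a simple referring
# expression, with a gradable modifier ("big") whose threshold depends on the
# current candidate set. Not the AIGRE implementation; names are invented.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Candidate:
    name: str
    kind: str    # head-noun category, e.g. "box"
    size: float  # arbitrary units


def interpret(words, domain):
    """Narrow the candidate set one word at a time (left to right)."""
    candidates = list(domain)
    for word in words:
        if word in ("the", "a"):
            continue                          # articles: no filtering in this toy
        if word == "big":                     # gradable: relative to current set
            threshold = mean(c.size for c in candidates)
            candidates = [c for c in candidates if c.size > threshold]
        else:                                 # treat anything else as a head noun
            candidates = [c for c in candidates if c.kind == word]
    return candidates


domain = [
    Candidate("b1", "box", 2.0),
    Candidate("b2", "box", 9.0),
    Candidate("m1", "meeting", 5.0),
]

print([c.name for c in interpret("the big box".split(), domain)])  # ['b2']
```

A fuller treatment along the lines the abstract describes would maintain a set of belief states rather than a single candidate list, and would compare the heard description against the alternative descriptions the generation module itself would have produced.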
dc.description.statementofresponsibility | by Dustin Arthur Smith. | en_US |
dc.format.extent | 111 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Architecture. Program in Media Arts and Sciences. | en_US |
dc.title | Generating and interpreting referring expressions in context | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Ph. D. | en_US |
dc.contributor.department | Program in Media Arts and Sciences (Massachusetts Institute of Technology) | |
dc.identifier.oclc | 894352854 | en_US |