dc.contributor.advisor | James Glass. | en_US |
dc.contributor.author | Luo, Hongyin. | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2019-11-04T20:22:52Z | |
dc.date.available | 2019-11-04T20:22:52Z | |
dc.date.copyright | 2019 | en_US |
dc.date.issued | 2019 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/122760 | |
dc.description | Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 85-92). | en_US |
dc.description.abstract | In this thesis, we explore the use of neural attention mechanisms for improving natural language representation learning, a fundamental concept in modern natural language processing. With the proposed attention algorithms, our models achieve significant improvements on both language modeling and natural language understanding tasks. We regard language modeling as a representation learning task that learns to align local word contexts with their following words. We explore the use of attention mechanisms over both the context and the following words to improve the performance of language models, and measure perplexity improvements on classic language modeling tasks. To learn better representations of contexts, we use a self-attention mechanism with a convolutional neural network (CNN) to simulate long short-term memory networks (LSTMs). The model processes sequential data in parallel and still achieves competitive performance. We also propose a phrase induction model with headword attention to learn embeddings of following phrases. The model learns reasonable phrase segments and outperforms several state-of-the-art language models on different data sets. The approach outperforms the AWD-LSTM model, reducing perplexity by 2 points on the Penn Treebank and WikiText-2 data sets, and achieves new state-of-the-art performance on the WikiText-103 data set with a perplexity of 17.4. For language understanding tasks, we propose the use of a self-attention CNN for video question answering. The performance of this model is significantly higher than that of the baseline video retrieval engine. Finally, we also investigate an end-to-end coreference resolution model that applies cross-sentence attention to exploit knowledge in contextual data and learn better contextualized word and span embeddings. The model achieved 66.69% MAP[at]1 and 87.42% MAP[at]5 accuracy on video retrieval, and 57.13% MAP[at]1 and 80.75% MAP[at]5 accuracy on a moment detection task, significantly outperforming the baselines. | en_US |
dc.description.sponsorship | This work was supported in part by the Ford Motor Company | en_US |
dc.description.statementofresponsibility | by Hongyin Luo. | en_US |
dc.format.extent | 92 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Neural attentions for natural language understanding and modeling | en_US |
dc.type | Thesis | en_US |
dc.description.degree | S.M. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1124925471 | en_US |
dc.description.collection | S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2019-11-04T20:22:51Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | EECS | en_US |