Show simple item record

dc.contributor.advisor: James R. Glass
dc.contributor.author: Belinkov, Yonatan
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.date.accessioned: 2018-09-17T15:56:34Z
dc.date.available: 2018-09-17T15:56:34Z
dc.date.copyright: 2018
dc.date.issued: 2018
dc.identifier.uri: http://hdl.handle.net/1721.1/118079
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 183-228).
dc.description.abstract: Language technology has become pervasive in everyday life. Neural networks are a key component of this technology thanks to their ability to model large amounts of data. Unlike traditional systems, models based on deep neural networks (a.k.a. deep learning) can be trained end-to-end on input-output pairs, such as a sentence in one language and its translation in another, or a speech utterance and its transcription. The end-to-end training paradigm simplifies the engineering process while giving the model the flexibility to optimize for the desired task. This, however, often comes at the expense of interpretability: understanding the role of different parts of a deep neural network is difficult, and such models are sometimes perceived as "black boxes", hindering research efforts and limiting their utility to society.

This thesis investigates what kind of linguistic information is represented in deep learning models for written and spoken language. To study this question, I develop a unified methodology for evaluating internal representations in neural networks, consisting of three steps: training a model on a complex end-to-end task; generating feature representations from different parts of the trained model; and training classifiers on simple supervised learning tasks using those representations. I demonstrate the approach on two core tasks in human language technology: machine translation and speech recognition. I perform a battery of experiments comparing different layers, modules, and architectures in end-to-end models trained on these tasks, and evaluate the quality of their representations at different linguistic levels.

First, I study how neural machine translation models learn morphological information. Second, I compare lexical semantic and part-of-speech information in neural machine translation. Third, I investigate where syntactic and semantic structures are captured in these models. Finally, I explore how end-to-end automatic speech recognition models encode phonetic information.

The analyses illuminate the inner workings of end-to-end machine translation and speech recognition systems, explain how they capture different language properties, and suggest potential directions for improving them. I also point to open questions concerning the representation of other linguistic properties, the investigation of different models, and the use of other analysis methods. Taken together, this thesis provides a comprehensive analysis of internal language representations in deep learning models.
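The three-step methodology in the abstract (train an end-to-end model, extract internal representations, train simple classifiers on them) can be sketched in miniature as follows. This is only an illustrative toy, not the thesis's actual models: the "representations" are synthetic vectors that stand in for a trained encoder's hidden states, the binary "linguistic property" is hypothetical, and the probe is a plain perceptron rather than the classifiers used in the thesis.

```python
import random

random.seed(0)

# Step 1-2 (stand-in): pretend a trained encoder produced these hidden
# representations. Each 4-dim vector weakly encodes a binary "linguistic
# property" (e.g. a part-of-speech distinction) in its first coordinate.
def fake_representation(label):
    vec = [random.gauss(0.0, 1.0) for _ in range(4)]
    vec[0] += 2.0 if label == 1 else -2.0  # property is linearly recoverable
    return vec

data = [(lab, fake_representation(lab)) for lab in [0, 1] * 200]
random.shuffle(data)
train, test = data[:300], data[300:]

# Step 3: train a simple linear probe (perceptron) on the representations.
w = [0.0] * 4
b = 0.0
for _ in range(10):
    for label, x in train:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        if pred != label:
            sign = 1 if label == 1 else -1
            w = [wi + sign * xi for wi, xi in zip(w, x)]
            b += sign

# The probe's held-out accuracy indicates how much of the property
# the representation encodes.
correct = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == label
    for label, x in test
)
accuracy = correct / len(test)
print(f"probe accuracy: {accuracy:.2f}")
```

In the thesis's setting, the synthetic vectors would be replaced by activations taken from a layer of a trained translation or speech-recognition model, and the probe's accuracy on a held-out set is the measure of how well that layer encodes the property.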
dc.description.statementofresponsibility: by Yonatan Belinkov
dc.format.extent: 228 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science
dc.title: On internal language representations in deep learning : an analysis of machine translation and speech recognition
dc.title.alternative: Analysis of machine translation and speech recognition
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1051773125

