DSpace@MIT

Linguistically motivated models for lightly-supervised dependency parsing

Author(s)
Naseem, Tahira
Download
Full printable version (6.949 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Regina Barzilay.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Today, the top-performing parsing algorithms rely on the availability of annotated data for learning the syntactic structure of a language. Unfortunately, syntactically annotated texts are available for only a handful of languages. The research presented in this thesis aims to develop parsing models that perform effectively in a lightly-supervised training regime. In particular, we focus on formulating linguistically aware models of dependency parsing that can exploit readily available sources of linguistic knowledge, such as language universals and typological features. This type of linguistic knowledge can be used to motivate model design and/or to guide the inference procedure. We propose three alternative approaches for incorporating linguistic information into a lightly-supervised training setup. First, we show that linguistic information, in the form of rules layered on top of standard unsupervised parsing models, can guide the inference procedure. This method consistently outperforms existing monolingual and multilingual unsupervised parsers when tested on a set of six Indo-European languages. Next, we show that a linguistically aware model design greatly facilitates cross-lingual parser transfer by leveraging syntactic connections between languages. Our transfer approach outperforms the state-of-the-art multilingual transfer parser across a set of 19 languages, achieving an average gain of 5.9%. The gains are even more pronounced (14.4%) on non-Indo-European languages, where existing transfer methods fail to perform. Finally, we propose a corpus-level Bayesian framework that allows multiple views of the data in a single model. We use this framework to combine a dependency model with a constituency view and universal rules, achieving a performance gain of 1.9% over the top-performing unsupervised parsing model.
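To make the first approach concrete, the Python sketch below is a minimal illustration, not the thesis's implementation: it shows how a handful of universal head-to-dependent rules over coarse part-of-speech tags can score how well a candidate dependency tree conforms to linguistic knowledge. The specific rule set, tag names, and example sentence are assumptions chosen for illustration.

# Illustrative sketch only: a hypothetical set of universal
# head -> dependent attachment rules over coarse POS tags,
# used here to score a candidate dependency tree.
UNIVERSAL_RULES = {
    ("ROOT", "VERB"),   # a verb heads the sentence
    ("VERB", "NOUN"),   # verbs take nominal arguments
    ("VERB", "ADV"),    # adverbs modify verbs
    ("NOUN", "ADJ"),    # adjectives modify nouns
    ("NOUN", "DET"),    # determiners attach to nouns
    ("ADP", "NOUN"),    # adpositions take nominal complements
}

def rule_coverage(tags, heads):
    """Fraction of dependency edges licensed by a universal rule.
    heads[i] is the index of token i's head; -1 marks the root."""
    licensed = 0
    for dep, head in enumerate(heads):
        head_tag = "ROOT" if head == -1 else tags[head]
        if (head_tag, tags[dep]) in UNIVERSAL_RULES:
            licensed += 1
    return licensed / len(heads)

# Toy sentence: "the dog barked loudly"
tags = ["DET", "NOUN", "VERB", "ADV"]
heads = [1, 2, -1, 2]  # the->dog, dog->barked, barked->ROOT, loudly->barked
print(rule_coverage(tags, heads))  # 1.0: every edge matches a rule

In a soft-constraint setting of the kind the abstract describes, a score like this would bias the model's posterior over trees toward rule-conforming analyses during inference, rather than filtering trees outright.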
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 122-132).
Date issued
2014
URI
http://hdl.handle.net/1721.1/89995
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Doctoral Theses
