Show simple item record

dc.contributor.advisor: Leslie P. Kaelbling. en_US
dc.contributor.author: Kim, Hyun Soo, M. Eng. Massachusetts Institute of Technology en_US
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. en_US
dc.date.accessioned: 2011-02-23T14:42:05Z
dc.date.available: 2011-02-23T14:42:05Z
dc.date.copyright: 2010 en_US
dc.date.issued: 2010 en_US
dc.identifier.uri: http://hdl.handle.net/1721.1/61287
dc.description: Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. en_US
dc.description: Cataloged from PDF version of thesis. en_US
dc.description: Includes bibliographical references (p. 99-100). en_US
dc.description.abstract: Hidden Markov Models (HMMs) are ubiquitously used in applications such as speech recognition and gene prediction that involve inferring latent variables given observations. For the past few decades, the predominant technique used to infer these hidden variables has been the Baum-Welch algorithm. This thesis utilizes insights from two related fields. The first insight is from Angluin's seminal paper on learning regular sets from queries and counterexamples, which produces a simple and intuitive algorithm that efficiently learns deterministic finite automata. The second insight follows from a careful analysis of the representation of HMMs as matrices and realizing that matrices hold deeper meaning than simply entities used to represent the HMMs. This thesis takes Angluin's approach and nonnegative matrix factorization and applies them to learning HMMs. Angluin's approach fails and the reasons are discussed. The matrix factorization approach is successful, allowing us to produce a novel method of learning HMMs. The new method is combined with Baum-Welch into a hybrid algorithm. We evaluate the algorithm by comparing its performance in learning selected HMMs to the Baum-Welch algorithm. We empirically show that our algorithm is able to perform better than the Baum-Welch algorithm for HMMs with at most six states that have dense output and transition matrices. For these HMMs, our algorithm is shown to perform 22.65% better on average by the Kullback-Leibler measure. en_US
dc.description.statementofresponsibility: by Hyun Soo Kim. en_US
dc.format.extent: 100 p. en_US
dc.language.iso: eng en_US
dc.publisher: Massachusetts Institute of Technology en_US
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. en_US
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 en_US
dc.subject: Electrical Engineering and Computer Science. en_US
dc.title: Two new approaches for learning Hidden Markov Models en_US
dc.title.alternative: 2 new approaches for learning HMMs en_US
dc.type: Thesis en_US
dc.description.degree: M.Eng. en_US
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 702644273 en_US
