dc.contributor.author       Ghahramani, Zoubin  en_US
dc.contributor.author       Jordan, Michael I.  en_US
dc.date.accessioned         2004-10-20T20:49:14Z
dc.date.available           2004-10-20T20:49:14Z
dc.date.issued              1996-02-09  en_US
dc.identifier.other         AIM-1561  en_US
dc.identifier.other         CBCL-130  en_US
dc.identifier.uri           http://hdl.handle.net/1721.1/7188
dc.description.abstract     We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.  en_US
dc.format.extent            7 p.  en_US
dc.format.extent            198365 bytes
dc.format.extent            244196 bytes
dc.format.mimetype          application/postscript
dc.format.mimetype          application/pdf
dc.language.iso             en_US
dc.relation.ispartofseries  AIM-1561  en_US
dc.relation.ispartofseries  CBCL-130  en_US
dc.subject                  AI  en_US
dc.subject                  MIT  en_US
dc.subject                  Artificial Intelligence  en_US
dc.subject                  Hidden Markov Models  en_US
dc.subject                  Neural networks  en_US
dc.subject                  Time series  en_US
dc.subject                  Mean field theory  en_US
dc.subject                  Gibbs sampling  en_US
dc.subject                  Factorial  en_US
dc.subject                  Learning algorithms  en_US
dc.subject                  Machine learning  en_US
dc.title                    Factorial Hidden Markov Models  en_US
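
The abstract above describes a factorial hidden Markov model: the hidden state is distributed over several independent Markov chains that jointly generate each observation, the M-step of EM remains exact, and the E-step is approximated by mean field (or Gibbs sampling) because summing over all joint configurations of the chains is combinatorial. As a rough sketch in standard factorial-HMM notation (the symbols below are illustrative, not quoted from this record), with M hidden chains S_t^(m) and observations Y_t for t = 1..T, the joint distribution and the fully factorized mean field posterior are:

P\bigl(\{S_t, Y_t\}\bigr) = \Bigl[\prod_{m=1}^{M} P\bigl(S_1^{(m)}\bigr) \prod_{t=2}^{T} P\bigl(S_t^{(m)} \mid S_{t-1}^{(m)}\bigr)\Bigr] \, \prod_{t=1}^{T} P\bigl(Y_t \mid S_t^{(1)}, \ldots, S_t^{(M)}\bigr)

Q\bigl(\{S_t\}\bigr) = \prod_{t=1}^{T} \prod_{m=1}^{M} Q\bigl(S_t^{(m)}\bigr)

With K states per chain, the exact E-step must consider all K^M joint settings of the chains at each time step, which is what makes it intractable; the factorized posterior Q above is what renders the mean field E-step tractable.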

