Factorial Hidden Markov Models
Author(s)
Ghahramani, Zoubin; Jordan, Michael I.
Abstract
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
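The combinatorial intractability mentioned above can be illustrated with a back-of-the-envelope cost comparison. The sketch below (illustrative names, not from the paper) assumes a factorial HMM with M independent Markov chains of K states each: the equivalent "flat" HMM has K^M joint states, so an exact forward recursion costs on the order of (K^M)^2 operations per time step, while a mean-field-style update that treats the chains independently costs roughly M·K^2.

```python
# Hypothetical cost sketch for a factorial HMM with M chains of K
# states each; function names are illustrative, not from the paper.

def flat_state_count(M, K):
    """Size of the equivalent single-chain state space: K^M joint states."""
    return K ** M

def exact_forward_cost_per_step(M, K):
    """Naive forward recursion over the joint space: (K^M)^2 ops per step."""
    return flat_state_count(M, K) ** 2

def mean_field_cost_per_step(M, K):
    """Mean-field-style updates over decoupled chains: ~M * K^2 ops per step."""
    return M * K * K

# Example: 10 binary chains.
M, K = 10, 2
print(flat_state_count(M, K))            # 1024 joint states
print(exact_forward_cost_per_step(M, K)) # 1048576 ops per step
print(mean_field_cost_per_step(M, K))    # 40 ops per step
```

Even for 10 binary chains the exact E-step is four orders of magnitude more expensive per time step than the decoupled approximation, which is why the paper turns to mean field and Gibbs sampling.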
Date issued
1996-02-09
Other identifiers
AIM-1561
CBCL-130
Series/Report no.
AIM-1561; CBCL-130
Keywords
AI, MIT, Artificial Intelligence, Hidden Markov Models, Neural networks, Time series, Mean field theory, Gibbs sampling, Factorial, Learning algorithms, Machine learning