Estimation of state-transition probability matrices in asynchronous population Markov processes
Author(s)
Farahat, Waleed A.; Asada, Harry
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We address the problem of estimating the transition probability matrix of an asynchronous vector Markov process from aggregate (longitudinal) population observations. This problem is motivated by estimating phenotypic state-transition probabilities in populations of biological cells, but extends to other contexts involving populations of Markovian agents. We adopt a Bayesian estimation approach, which can be computationally expensive if exact marginalization is employed. To compute the posterior estimates efficiently, we use Monte Carlo simulations coupled with Gibbs sampling techniques that explicitly incorporate sampling constraints from the desired distributions. Such sampling techniques can attain significant computational advantages. The algorithm is illustrated with simulation examples.
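The paper's exact algorithm is not reproduced on this page, but the abstract's general idea (Dirichlet-multinomial Bayesian estimation of a transition matrix from aggregate counts, with Gibbs sampling over latent transition counts constrained to match the observed populations) can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: it assumes synchronous time steps rather than the asynchronous setting of the paper, and the helper names (`initial_table`, `swap_move`, `log_weight`) and all parameter values are hypothetical.

```python
# Illustrative sketch only (assumptions: synchronous transitions, fixed population,
# Dirichlet priors on the rows of P). Not the algorithm from the paper.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def initial_table(row_sums, col_sums):
    """Nonnegative integer matrix with given row/column sums (northwest-corner rule)."""
    r, c = row_sums.copy(), col_sums.copy()
    Z = np.zeros((len(r), len(c)), dtype=int)
    i = j = 0
    while i < len(r) and j < len(c):
        m = min(r[i], c[j])
        Z[i, j] = m
        r[i] -= m
        c[j] -= m
        if r[i] == 0:
            i += 1
        else:
            j += 1
    return Z

def log_weight(Z, P):
    """Unnormalized log-probability of latent counts Z given P and fixed margins."""
    return np.sum(Z * np.log(P + 1e-300)) - np.sum(gammaln(Z + 1))

def swap_move(Z, P):
    """One Metropolis move on a 2x2 submatrix of Z that preserves both margins."""
    k = Z.shape[0]
    i1, i2 = rng.choice(k, size=2, replace=False)
    j1, j2 = rng.choice(k, size=2, replace=False)
    d = rng.choice([-1, 1])
    prop = Z.copy()
    prop[i1, j1] += d; prop[i2, j2] += d
    prop[i1, j2] -= d; prop[i2, j1] -= d
    if prop.min() < 0:
        return Z
    if np.log(rng.random()) < log_weight(prop, P) - log_weight(Z, P):
        return prop
    return Z

# Simulated aggregate data: only per-state population counts are observed.
k, N, T = 3, 500, 40
P_true = np.array([[0.80, 0.15, 0.05],
                   [0.10, 0.80, 0.10],
                   [0.05, 0.25, 0.70]])
counts = [np.array([N, 0, 0])]
for _ in range(T):
    nxt = np.zeros(k, dtype=int)
    for i in range(k):
        nxt += rng.multinomial(counts[-1][i], P_true[i])
    counts.append(nxt)

# Gibbs sampler: alternate between latent transition counts Z_t and the matrix P.
Z = [initial_table(counts[t], counts[t + 1]) for t in range(T)]
P_samples = []
for it in range(1500):
    totals = sum(Z)                                   # pooled transition counts
    P = np.vstack([rng.dirichlet(1.0 + totals[i]) for i in range(k)])
    for t in range(T):                                # constrained latent-count updates
        for _ in range(20):
            Z[t] = swap_move(Z[t], P)
    if it >= 500:
        P_samples.append(P)

print("Posterior mean of P:\n", np.mean(P_samples, axis=0).round(3))
```

The margin-preserving 2x2 swap move keeps each latent count matrix consistent with the observed aggregate populations at consecutive time steps, which is one simple way to realize the "sampling constraints" mentioned in the abstract; the paper itself may use a different constrained-sampling scheme.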
Date issued
2010-07
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Journal
Proceedings of the American Control Conference (ACC), 2010
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Farahat, Waleed A., and Harry Asada. "Estimation of state-transition probability matrices in asynchronous population Markov processes." Proceedings of the American Control Conference (ACC), 2010: 6519-6524. © 2010 IEEE
Version: Final published version
ISBN
978-1-4244-7426-4
ISSN
0743-1619