
dc.contributor.author    Jordan, Michael I.    en_US
dc.contributor.author    Jacobs, Robert A.    en_US
dc.date.accessioned    2004-10-20T20:49:48Z
dc.date.available    2004-10-20T20:49:48Z
dc.date.issued    1993-08-01    en_US
dc.identifier.other    AIM-1440    en_US
dc.identifier.other    CBCL-083    en_US
dc.identifier.uri    http://hdl.handle.net/1721.1/7206
dc.description.abstract    We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.    en_US
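
For orientation, the following equations sketch the standard two-level hierarchical mixture-of-experts formulation that the abstract refers to; the symbols used here (gating outputs g_i and g_{j|i}, expert densities P_{ij}, parameters v and \theta) are notational assumptions for illustration and are not spelled out in this record:

    P(y \mid x, \theta) = \sum_i g_i(x, v_i) \sum_j g_{j|i}(x, v_{ij}) \, P_{ij}(y \mid x, \theta_{ij})

where the g's are softmax (multinomial logit) outputs of the gating networks and each P_{ij} is the density of a generalized linear expert. The E-step of the EM algorithm computes posterior responsibilities

    h_i = \frac{g_i \sum_j g_{j|i} P_{ij}(y)}{\sum_k g_k \sum_l g_{l|k} P_{kl}(y)}, \qquad
    h_{j|i} = \frac{g_{j|i} P_{ij}(y)}{\sum_l g_{l|i} P_{il}(y)}

and the M-step refits each gating and expert GLIM by a responsibility-weighted maximum-likelihood update (e.g. iteratively reweighted least squares).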
dc.format.extent    29 p.    en_US
dc.format.extent    190144 bytes
dc.format.extent    678911 bytes
dc.format.mimetype    application/octet-stream
dc.format.mimetype    application/pdf
dc.language.iso    en_US
dc.relation.ispartofseries    AIM-1440    en_US
dc.relation.ispartofseries    CBCL-083    en_US
dc.subject    supervised learning    en_US
dc.subject    statistics    en_US
dc.subject    decision trees    en_US
dc.subject    neural networks    en_US
dc.title    Hierarchical Mixtures of Experts and the EM Algorithm    en_US

