dc.contributor.author | Jaakkola, Tommi S. (Tommi Sakari) | en_US |
dc.coverage.temporal | Fall 2002 | en_US |
dc.date.issued | 2002-12 | |
dc.identifier | 6.867-Fall2002 | |
dc.identifier | local: 6.867 | |
dc.identifier | local: IMSCP-MD5-fc4a8cca30f08eb69460c2f81695d7e8 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/46320 | |
dc.description.abstract | Principles, techniques, and algorithms in machine learning from the point of view of statistical inference; representation, generalization, and model selection; and methods such as linear/additive models, active learning, boosting, support vector machines, hidden Markov models, and Bayesian networks. From the course home page: Course Description 6.867 is an introductory course on machine learning which provides an overview of many techniques and algorithms in machine learning, beginning with topics such as simple perceptrons and ending with more recent topics such as boosting, support vector machines, hidden Markov models, and Bayesian networks. The course gives the student the basic ideas and intuition behind modern machine learning methods as well as a somewhat more formal understanding of how and why they work. The underlying theme of the course is statistical inference, as this provides the foundation for most of the methods covered. | en_US |
dc.language | en-US | en_US |
dc.rights.uri | Usage Restrictions: This site (c) Massachusetts Institute of Technology 2003. Content within individual courses is (c) by the individual authors unless otherwise noted. The Massachusetts Institute of Technology is providing this Work (as defined below) under the terms of this Creative Commons public license ("CCPL" or "license"). The Work is protected by copyright and/or other applicable law. Any use of the work other than as authorized under this license is prohibited. By exercising any of the rights to the Work provided here, You (as defined below) accept and agree to be bound by the terms of this license. The Licensor, the Massachusetts Institute of Technology, grants You the rights contained here in consideration of Your acceptance of such terms and conditions. | en_US |
dc.subject | machine learning | en_US |
dc.subject | perceptrons | en_US |
dc.subject | boosting | en_US |
dc.subject | support vector machines | en_US |
dc.subject | Markov | en_US |
dc.subject | hidden Markov models | en_US |
dc.subject | HMM | en_US |
dc.subject | Bayesian networks | en_US |
dc.subject | statistical inference | en_US |
dc.subject | regression | en_US |
dc.subject | clustering | en_US |
dc.subject | bias | en_US |
dc.subject | variance | en_US |
dc.subject | regularization | en_US |
dc.subject | Generalized Linear Models | en_US |
dc.subject | neural networks | en_US |
dc.subject | Support Vector Machine | en_US |
dc.subject | SVM | en_US |
dc.subject | mixture models | en_US |
dc.subject | kernel density estimation | en_US |
dc.subject | gradient descent | en_US |
dc.subject | quadratic programming | en_US |
dc.subject | EM algorithm | en_US |
dc.subject | forward-backward algorithm | en_US |
dc.subject | junction tree algorithm | en_US |
dc.subject | Gibbs sampling | en_US |
dc.title | 6.867 Machine Learning, Fall 2002 | en_US |
dc.title.alternative | Machine Learning | en_US |