
dc.contributor.advisor: Michael I. Jordan. [en_US]
dc.contributor.author: Ng, Andrew Y., 1976- [en_US]
dc.date.accessioned: 2005-08-19T19:14:59Z
dc.date.available: 2005-08-19T19:14:59Z
dc.date.copyright: 1998 [en_US]
dc.date.issued: 1998 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/9658
dc.description: Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. [en_US]
dc.description: Includes bibliographical references (p. 55-57). [en_US]
dc.description.abstract: We consider feature selection for supervised machine learning in the "wrapper" model of feature selection. This typically involves an NP-hard optimization problem that is approximated by heuristic search for a "good" feature subset. First considering the idealization where this optimization is performed exactly, we give a rigorous bound for generalization error under feature selection. The search heuristics typically used are then immediately seen as trying to achieve the error given in our bounds, and succeeding to the extent that they solve the optimization problem. The bound suggests that, in the presence of many "irrelevant" features, the main source of error in wrapper model feature selection is from "overfitting" hold-out or cross-validation data. This motivates a new algorithm that, again under the idealization of performing search exactly, has sample complexity (and error) that grows logarithmically in the number of "irrelevant" features - which means it can tolerate having a number of "irrelevant" features exponential in the number of training examples - and search heuristics are again seen to be directly trying to reach this bound. Experimental results on a problem using simulated data show the new algorithm having much higher tolerance to irrelevant features than the standard wrapper model. Lastly, we also discuss ramifications that sample complexity logarithmic in the number of irrelevant features might have for feature design in actual applications of learning. [en_US] [A minimal illustrative sketch of the wrapper model appears after this record.]
dc.description.statementofresponsibility: by Andrew Y. Ng. [en_US]
dc.format.extent: 57 p. [en_US]
dc.format.extent: 5878396 bytes
dc.format.extent: 5878150 bytes
dc.format.mimetype: application/pdf
dc.format.mimetype: application/pdf
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science [en_US]
dc.title: On feature selection : learning with exponentially many irrelevant features as training examples [en_US]
dc.type: Thesis [en_US]
dc.description.degree: S.M. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 42427464 [en_US]
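
The abstract above describes the wrapper model: a heuristic search over feature subsets scored by hold-out or cross-validation accuracy, whose main source of error when many features are irrelevant is overfitting that hold-out data. The sketch below is an illustrative reconstruction of greedy forward wrapper search with a hold-out set, not code from the thesis; it assumes scikit-learn and NumPy, and the helper name forward_select and all data and parameter choices are hypothetical.

# Minimal sketch of wrapper-model forward feature selection with a
# hold-out set. Illustrative only; the thesis does not specify this code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def forward_select(X_train, y_train, X_hold, y_hold, max_features):
    """Greedily add the feature that most improves hold-out accuracy."""
    selected = []
    remaining = list(range(X_train.shape[1]))
    best_acc = 0.0
    while remaining and len(selected) < max_features:
        scores = []
        for f in remaining:
            cols = selected + [f]
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X_train[:, cols], y_train)
            scores.append((clf.score(X_hold[:, cols], y_hold), f))
        acc, f = max(scores)
        if acc <= best_acc:  # stop when no feature improves the hold-out score
            break
        best_acc, selected = acc, selected + [f]
        remaining.remove(f)
    return selected, best_acc

# Simulated data with many irrelevant features: only the first 3 of
# 100 columns carry signal (a hypothetical setup, echoing the
# abstract's simulated-data experiments).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.5, random_state=0)
feats, acc = forward_select(X_tr, y_tr, X_ho, y_ho, max_features=5)
print("selected features:", feats, "hold-out accuracy:", round(acc, 3))

On data like this, plain wrapper search can admit spurious features when the number of irrelevant columns is large relative to the hold-out set, which is the overfitting effect the thesis bounds quantify and its proposed algorithm mitigates.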

