
dc.contributor.author: Iba, Glenn A.
dc.date.accessioned: 2004-10-04T14:51:10Z
dc.date.available: 2004-10-04T14:51:10Z
dc.date.issued: 1979-09-01
dc.identifier.other: AIM-548
dc.identifier.uri: http://hdl.handle.net/1721.1/6325
dc.description.abstract: This work proposes a theory for machine learning of disjunctive concepts. The paradigm followed is one of teaching and testing, where teaching is accomplished by presenting a sequence of positive and negative examples of the target concept. The core of the theory has been implemented and tested as computer programs. The theory addresses the problem of deciding when it is appropriate to merge descriptions and when it is appropriate to form a disjunctive split. The approach outlined has the advantage that it allows recovery from over-generalizations. Negative examples play an important role in the decision-making process, as well as in detecting over-generalizations and instigating recovery. Because of this ability to recover from over-generalizations when they occur, the system is less sensitive to the ordering of the training sequence than other systems. The theory is presented in a domain- and representation-independent format. A few conditions are presented that abstract the assumptions made about any representation scheme to be employed within the theory. The work is illustrated in several different domains, demonstrating the generality and flexibility of the theory.
dc.format.extent: 15147837 bytes
dc.format.extent: 12035000 bytes
dc.format.mimetype: application/postscript
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: AIM-548
dc.title: Learning Disjunctive Concepts From Examples
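
The abstract above describes a teach-and-test learning loop: merge descriptions when possible, form a disjunctive split when merging would cover a negative example, and recover when a negative example reveals an over-generalization. The sketch below is only an illustration of that loop under an assumed attribute-vector representation with wildcard generalization; the memo itself is representation-independent, and the class and function names here are hypothetical, not taken from the original programs.

    # Hypothetical sketch of a disjunctive concept learner in the spirit of the
    # abstract. Descriptions are attribute tuples; generalization replaces
    # mismatched attributes with a wildcard. Negative examples both block
    # merges and trigger recovery from over-generalization.

    WILD = "?"  # wildcard slot: matches any attribute value

    def covers(desc, example):
        """A description covers an example if every non-wildcard slot matches."""
        return all(d == WILD or d == e for d, e in zip(desc, example))

    def merge(desc, example):
        """Most specific generalization of a description and an example."""
        return tuple(d if d == e else WILD for d, e in zip(desc, example))

    class DisjunctiveLearner:
        def __init__(self):
            self.negatives = []   # all negative examples seen so far
            self.disjuncts = []   # list of (description, supporting positives)

        def classify(self, example):
            return any(covers(d, example) for d, _ in self.disjuncts)

        def show_positive(self, ex):
            ex = tuple(ex)
            # Merge into an existing disjunct only if the generalized
            # description still excludes every known negative example.
            for i, (desc, support) in enumerate(self.disjuncts):
                cand = merge(desc, ex)
                if not any(covers(cand, n) for n in self.negatives):
                    self.disjuncts[i] = (cand, support + [ex])
                    return
            # Otherwise form a disjunctive split: a new disjunct at this example.
            self.disjuncts.append((ex, [ex]))

        def show_negative(self, ex):
            ex = tuple(ex)
            self.negatives.append(ex)
            # Recovery: a disjunct covering a negative was over-generalized;
            # discard it and relearn from its own supporting positives.
            bad = [s for d, s in self.disjuncts if covers(d, ex)]
            self.disjuncts = [(d, s) for d, s in self.disjuncts if not covers(d, ex)]
            for support in bad:
                for p in support:
                    self.show_positive(p)

    # Example: learn "red things, or large blue things" from (color, size) pairs.
    learner = DisjunctiveLearner()
    learner.show_positive(("red", "small"))
    learner.show_positive(("red", "large"))     # merges into ("red", "?")
    learner.show_negative(("blue", "small"))
    learner.show_positive(("blue", "large"))    # merge blocked by the negative, so split
    print(learner.classify(("red", "medium")))  # True
    print(learner.classify(("blue", "small")))  # False

Note that the order of the training sequence matters less here than in merge-only learners, since show_negative can undo an earlier over-general merge rather than leaving the hypothesis permanently wrong.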

