PAC-Learning Prolog Clauses With or Without Errors
Author(s)
Gennaro, Rosario
Abstract
Recently, researchers have been interested in expanding the domain of learnability to subsets of first-order logic, in particular Prolog programs. This new research area has been named Inductive Logic Programming (ILP).

In a nutshell, a generic ILP problem can be described as follows: given a set E of (positive and negative) examples of a target predicate, and some background knowledge B about the world (usually a logic program including facts and auxiliary predicates), the task is to find a logic program H (our hypothesis) such that all positive examples can be deduced from B and H, while no negative example can. (A toy instance in Prolog is sketched after the abstract.)

In this paper we review some of the results achieved in this area and discuss the techniques used. Moreover, we prove the following new results:

(1) Predicates described by non-recursive, local clauses of at most k literals are PAC-learnable under any distribution. This generalizes a previous result that was valid only for constrained clauses.

(2) Predicates described by k non-recursive, local clauses are PAC-learnable under any distribution. This generalizes a previous result that was non-constructive and valid only under a restricted class of distributions.

Finally, we introduce what we believe is the first theoretical framework for learning Prolog clauses in the presence of errors. To this end we introduce a new noise model, which we call the fixed attribute noise model, for learning propositional concepts over the Boolean domain. This noise model may be of independent interest.
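To make the generic ILP setting concrete, here is a toy instance written in Prolog; the predicate names and facts are illustrative only and do not come from the memo. The background knowledge B is a set of parent/2 facts, the target predicate is grandparent/2, and the hypothesis H is a single non-recursive clause from which, together with B, every positive example and no negative example is deducible.

    % Background knowledge B: ground facts about an (invented) family.
    parent(tom, bob).
    parent(bob, ann).
    parent(bob, pat).

    % Hypothesis H: one non-recursive clause for the target predicate.
    grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

    % Positive examples E+ (all deducible from B and H):
    %   grandparent(tom, ann).   grandparent(tom, pat).
    % Negative examples E- (none deducible from B and H):
    %   grandparent(bob, ann).   grandparent(ann, tom).

As standard background for the PAC-learnability claims: a class of predicates is PAC-learnable if there is an algorithm that, for any target in the class, any example distribution D, and any epsilon, delta > 0, outputs with probability at least 1 - delta a hypothesis with error at most epsilon under D, using time and examples polynomial in 1/epsilon, 1/delta, and the size of the target; "under any distribution" in results (1) and (2) means no restriction is placed on D.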
Date issued
1994-02
Series/Report no.
MIT-LCS-TM-500