Steps towards proof construction using reinforcement learning: environments and models for hypothesis-posing as subtask creation
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Despite recent advances in reinforcement learning (RL) that have allowed AI algorithms to master games such as Go from scratch, scant progress has been made on applying RL to one of the first tasks seen as susceptible to automation: theorem proving. I present steps towards training agents to construct proofs by using hypothesis-posing as a way to uncover information and break tasks down into subtasks. To do so, I create a novel bitstring problem that retains many of the challenges posed by proof construction while dispensing with the need to parse grammars. I then assess the performance of well-known RL algorithms on tasks derived from this problem, demonstrating that it is non-trivial. Finally, I modify a model that successfully learns one of the bitstring tasks in order to obtain results on possible mechanisms for theorem-proving prototypes.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 37-38).