Steps towards proof construction using reinforcement learning: environments and models for hypothesis-posing as subtask creation
Author(s)
Guo, Hairuo.
Download: 1145019864-MIT.pdf (446.0 KB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Tomaso Poggio.
Abstract
Despite recent advances in reinforcement learning (RL) that have allowed AI algorithms to master games such as Go from scratch, scant progress has been made on applying RL to one of the first tasks seen as susceptible to automation: theorem proving. I present steps towards training agents to construct proofs by using the ability to pose hypotheses as a way to uncover information and break tasks down into subtasks. To do so, I create a novel bitstring problem that retains many of the challenges posed by proof construction while dispensing with the need to parse grammars. I then assess the performance of well-known RL algorithms on tasks derived from this problem, demonstrating that the problem is non-trivial. Finally, I modify a model that successfully learns one of the bitstring tasks in order to obtain results on possible mechanisms for theorem-proving prototypes.
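To make the hypothesis-posing idea concrete, the sketch below shows a minimal bitstring environment in which querying ("posing a hypothesis about") one bit reveals information at the cost of a step, and the agent is rewarded only for submitting the full hidden string. This is purely illustrative: the class name, action set, and reward scheme are my assumptions, not the tasks actually defined in the thesis.

```python
import random

class BitstringEnv:
    """Illustrative toy environment (not the thesis's actual task):
    the agent must reproduce a hidden bitstring. It can either pose a
    hypothesis about one position (a query revealing whether its guess
    for that bit is correct) or submit a full candidate string."""

    def __init__(self, n=8, seed=None):
        self.n = n
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # Draw a fresh hidden target; the agent starts with no knowledge.
        self.target = [self.rng.randint(0, 1) for _ in range(self.n)]
        self.steps = 0
        return [None] * self.n

    def pose_hypothesis(self, pos, bit):
        """Subtask query: 'does position pos hold bit?' Costs one step."""
        self.steps += 1
        return self.target[pos] == bit

    def submit(self, candidate):
        """Submit a full candidate; reward 1 on exact match, else 0."""
        self.steps += 1
        return 1 if list(candidate) == self.target else 0

# A naive agent: pose one hypothesis per position, then submit.
env = BitstringEnv(n=8, seed=0)
guess = [1 if env.pose_hypothesis(i, 1) else 0 for i in range(env.n)]
reward = env.submit(guess)  # exact match, so reward is 1
```

Even this trivial setting exhibits the exploration trade-off the abstract alludes to: each hypothesis spends a step to shrink the search space, and a learned policy must balance querying against committing to an answer.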
Description
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019.
Cataloged from the student-submitted PDF version of the thesis.
Includes bibliographical references (pages 37-38).
Date issued
2019
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.