Learning to Prove Theorems via Interacting with Proof Assistants
International Conference on Machine Learning (ICML), June 2019
Abstract
Humans prove theorems by relying on substantial high-level reasoning and
problem-specific insights. Proof assistants offer a formalism that resembles
human mathematical reasoning, representing theorems in higher-order logic and
proofs as high-level tactics. However, human experts have to construct proofs
manually by entering tactics into the proof assistant. In this paper, we study
the problem of using machine learning to automate the interaction with proof
assistants. We construct CoqGym, a large-scale dataset and learning environment
containing 71K human-written proofs from 123 projects developed with the Coq
proof assistant. We develop ASTactic, a deep learning-based model that
generates tactics as programs in the form of abstract syntax trees (ASTs).
Experiments show that ASTactic trained on CoqGym can generate effective tactics
and can be used to prove new theorems not previously provable by automated
methods. Code is available at https://github.com/princeton-vl/CoqGym.
Citation
Kaiyu Yang and Jia Deng.
"Learning to Prove Theorems via Interacting with Proof Assistants."
International Conference on Machine Learning (ICML), June 2019.
BibTeX
@inproceedings{Yang:2019:LTP,
  author    = "Kaiyu Yang and Jia Deng",
  title     = "Learning to Prove Theorems via Interacting with Proof Assistants",
  booktitle = "International Conference on Machine Learning (ICML)",
  year      = "2019",
  month     = jun
}