
BAIT: Benchmarking (Embedding) Architectures for Interactive Theorem-Proving

Authors:
Sean Lamont, Michael Norrish, Amir Dezfouli, Christian Walder, Paul Montague
Keywords:
Computer Science, Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Logic in Computer Science (cs.LO)
Journal:
--
Date:
2024-03-06
Abstract
Artificial Intelligence for Theorem Proving has given rise to a plethora of benchmarks and methodologies, particularly in Interactive Theorem Proving (ITP). Research in the area is fragmented, with a diverse set of approaches being spread across several ITP systems. This presents a significant challenge to the comparison of methods, which are often complex and difficult to replicate. Addressing this, we present BAIT, a framework for fair and streamlined comparison of learning approaches in ITP. We demonstrate BAIT's capabilities with an in-depth comparison, across several ITP benchmarks, of state-of-the-art architectures applied to the problem of formula embedding. We find that Structure Aware Transformers perform particularly well, improving on techniques associated with the original problem sets. BAIT also allows us to assess the end-to-end proving performance of systems built on interactive environments. This unified perspective reveals a novel end-to-end system that improves on prior work. We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically-aware embeddings. By streamlining the implementation and comparison of Machine Learning algorithms in the ITP context, we anticipate BAIT will be a springboard for future research.
PDF: BAIT: Benchmarking (Embedding) Architectures for Interactive Theorem-Proving.pdf