System Description: E.T. 0.1
Cezary Kaliszyk, Stephan Schulz, Josef Urban, and Jiří Vyskočil
Proceedings of the 25th International Conference on Automated Deduction (CADE-25), Lecture Notes in Artificial Intelligence 9195, pp. 389–398, 2015.
Abstract
E.T. 0.1 is a meta-system specialized for theorem proving over large first-order theories containing thousands of axioms. Its design is motivated by the recent theorem-proving experiments over the Mizar, Flyspeck and Isabelle datasets. Unlike other approaches, E.T. does not learn from related proofs, but assumes a situation where previous proofs are not available or are hard to get. Instead, E.T. uses several layers of complementary methods and tools with different speed and precision that ultimately select small sets of the most promising axioms for a given conjecture. Such filtered problems are then passed to E, running a large number of suitable automatically invented theorem-proving strategies. On the large-theory Mizar problems, E.T. considerably outperforms E, Vampire, and any other prover that does not learn from related proofs. As a general ATP, E.T. improved on the performance of unmodified E by 6% in the combined FOF division of CASC 2014.
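The abstract does not spell out the individual axiom-selection layers, so the following is only a minimal illustrative sketch of the general technique of cheap, symbol-based relevance filtering (in the spirit of SInE-style premise selection), not E.T.'s actual algorithm. The function name relevance_filter, the representation of formulas as symbol sets, and the cut-off parameters are assumptions made for the example.

# Sketch of symbol-based axiom relevance filtering (SInE-style), assuming
# formulas are represented simply as sets of their function/predicate symbols.
# This illustrates the kind of cheap syntactic pre-filter a large-theory
# meta-system can run before handing a reduced problem to E; it is not
# E.T.'s actual selection pipeline.

from typing import Dict, List, Set

def relevance_filter(conjecture_syms: Set[str],
                     axioms: Dict[str, Set[str]],
                     max_passes: int = 3,
                     max_axioms: int = 128) -> List[str]:
    """Iteratively select axioms sharing symbols with the growing
    'relevant' symbol set, starting from the conjecture's symbols."""
    relevant_syms = set(conjecture_syms)
    selected: List[str] = []
    remaining = dict(axioms)

    for _ in range(max_passes):
        # Axioms touched by the current relevant symbols.
        newly_selected = [name for name, syms in remaining.items()
                          if syms & relevant_syms]
        if not newly_selected:
            break
        for name in newly_selected:
            selected.append(name)
            # Their symbols become relevant for the next pass.
            relevant_syms |= remaining.pop(name)
            if len(selected) >= max_axioms:
                return selected
    return selected

# Example: a conjecture about 'subset' and 'union' pulls in the two related
# axioms and drops the unrelated group-theory fact.
axioms = {
    "union_comm":  {"union", "equal"},
    "subset_def":  {"subset", "member"},
    "group_assoc": {"mult", "inverse"},   # shares no symbols, filtered out
}
print(relevance_filter({"subset", "union"}, axioms))
# ['union_comm', 'subset_def']

In a layered setup as described in the abstract, a fast pass like this would be combined with slower, more precise selectors, and the resulting small axiom sets would be handed to E under a portfolio of prover strategies.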
BibTeX
@inproceedings{CKSSJUJV-CADE15,
  author    = "Cezary Kaliszyk and Stephan Schulz and Josef Urban and Ji\v{r}\'i Vysko\v{c}il",
  title     = "System Description: {E.T.} 0.1",
  booktitle = "Proceedings of the 25th International Conference on Automated Deduction (CADE-25)",
  editor    = "Amy Felty and Aart Middeldorp",
  series    = "Lecture Notes in Artificial Intelligence",
  volume    = 9195,
  pages     = "389--398",
  year      = 2015,
  doi       = "10.1007/978-3-319-21401-6_27"
}