31st International Conference on Tools and Algorithms for Construction and Analysis of Systems (TACAS) 2025
Description
This page contains supplementary material for our TACAS 2025 submission "Automated Analysis of Logically Constrained Rewrite Systems using crest". General details about crest can be found in the submission itself or on the main website (note that the material there may differ from this page).
Binaries/Benchmarks/Source Code used for TACAS 2025
Below you will find the Linux binaries of crest used for the TACAS 2025 experiments (for binaries for other operating systems, contact an author or compile the available source code yourself). To use the tool, a Z3 binary must be available in your PATH. Here "*-cr" denotes the confluence tool and "*-sn" the termination tool. For more information, consult the usage information of the binaries or the README in the source code. How to reproduce the experiments on your own is described in the submitted artifact.
- crest-cr Linux Usage
- crest-sn Linux Usage
- prototype-cade Linux Usage
- paper examples examples.tar.gz
- LCTRS benchmarks ari-cops-lctrss.tar.gz
- TRS benchmarks ari-cops-trss.tar.gz
- source code crest-0.8.7.0.tar.gz
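As a minimal setup sketch (assuming a standard Linux shell and the archive names listed above), one might first verify the Z3 requirement and then unpack the downloads; the exact invocation of the crest binaries is documented in their usage information and the README, so it is not reproduced here:

```shell
#!/bin/sh
# crest requires a Z3 binary in PATH; check for it before running anything.
if command -v z3 >/dev/null 2>&1; then
    echo "z3 found at: $(command -v z3)"
else
    echo "z3 not found; install Z3 and add it to your PATH"
fi

# Unpack the archives listed above (file names as on this page):
#   tar -xzf examples.tar.gz
#   tar -xzf ari-cops-lctrss.tar.gz
#   tar -xzf ari-cops-trss.tar.gz
#   tar -xzf crest-0.8.7.0.tar.gz
```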
Experiments
Below you will find the experimental data for all tables in the submission. All experiments were run with the benchexec framework, with a timeout of 60 seconds, 4 cores, and 8 GB of memory per problem. The links to the tables below lead to pages with several tabs; the important ones are "Summary" and "Table". The former states relevant information about the configuration, including the runtimes at the bottom, while the latter contains the result of each tool for each problem. After applying a filter (e.g. to "true"), the number of solved problems is shown in the upper right corner. The result "true" corresponds to YES, "false" to NO, and all others to MAYBE. All proofs and examples are clickable. In some cases you need to hover over the heading of a column to reveal the full information about which method and configuration was used.
Table 1:
Experiments on the examples discussed in the paper. We compare crest with CRaris, Ctrl, and the prototype used for the CADE 2023 experiments.
Table 2:
Experiments for each individual confluence method in crest on the LCTRS benchmarks of the ARI database. The columns are ordered such that the leftmost column corresponds to the topmost entry in the respective table of the paper.
Table 3:
Confluence experiments on the LCTRS benchmarks of the ARI database. We compare crest with CRaris, Ctrl, and the prototype used for the CADE 2023 experiments.
Table 4:
Experiments for each individual termination method in crest on the LCTRS benchmarks of the ARI database. The columns are ordered such that the leftmost column corresponds to the topmost entry in the respective table of the paper.
Table 5:
Termination experiments on the LCTRS benchmarks of the ARI database. We compare crest with Cora, Ctrl, and a special CRaris binary for proving termination of LCTRSs, provided to us by Naoki Nishida.
Table 6:
Confluence experiments on the TRS benchmarks of the ARI database. We compare crest with CSI, the state-of-the-art confluence tool for TRSs.
crest with a 10-second timeout:
This experiment shows that on the LCTRS benchmarks crest obtains the same results with a 10-second timeout per benchmark as with the 60-second timeout.