# Proof Marathon with Tons of Inductive Problems (TIP):
In this project you will attack Tons of Inductive Problems (TIP) in
Isabelle/HOL.
TIP is a set of self-contained, cross-prover problems for inductive theorem
proving developed by Claessen et al. [1]. The goal of this proof marathon is to
mechanically prove 30 problems from TIP15 while assessing their difficulty and
searching for commonly recurring proof patterns. The Isabelle version of this
problem set is available from GitHub [2].
[1] https://doi.org/10.1007/978-3-319-20615-8_23
[2] https://github.com/data61/PSL/tree/MiLkMaId/MiLkMaId/TIP_with_Proof/TIP15/TIP15
# Algorithmic Conjecturing for Isabelle/HOL:
In this project you will develop an ML library for producing conjectures related
to given proof goals in Isabelle/HOL.
When developing large formal proofs, it is often necessary to conjecture
auxiliary lemmas that help discharge the final proof goal. PGT is an
award-winning mechanism, implemented for Isabelle/HOL, that identifies useful
conjectures in a goal-oriented manner [1]. PGT currently supports only two
algorithms for producing conjectures. Your task is to identify further
conjecture-producing algorithms and implement them in the framework of PGT.
Note that this project mostly focuses on ML programming in Isabelle rather than
interactive theorem proving.
[1] https://doi.org/10.1007/978-3-319-96812-4_19 (The pre-print is available at arXiv.)
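To make the task concrete, here is a minimal sketch, in Python rather than Isabelle/ML, of one classic conjecturing step: abstracting a repeated compound subterm of a goal to a fresh variable. The term representation (nested tuples), the fresh variable name `z`, and the example goal are all illustrative assumptions, not PGT's actual data structures or algorithms.

```python
# Sketch of generalisation-based conjecturing (illustrative, not PGT's code).
# Terms are nested tuples ("f", arg1, ...); plain strings are variables/constants.
from collections import Counter

def subterms(t):
    """Yield every subterm of a term, including t itself."""
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def replace(t, old, new):
    """Replace every occurrence of subterm `old` in `t` by `new`."""
    if t == old:
        return new
    if isinstance(t, tuple):
        return (t[0],) + tuple(replace(a, old, new) for a in t[1:])
    return t

def generalise(goal):
    """Candidate conjectures: abstract each compound subterm that occurs
    at least twice to a fresh variable (here hard-coded as "z")."""
    counts = Counter(s for s in subterms(goal) if isinstance(s, tuple))
    return [replace(goal, s, "z") for s, n in counts.items() if n >= 2]

# Goal: rev (rev xs) @ rev (rev xs) = xs @ xs.  The subterm rev (rev xs)
# repeats, so one candidate conjecture is  z @ z = xs @ xs  (z fresh).
goal = ("eq",
        ("append", ("rev", ("rev", "xs")), ("rev", ("rev", "xs"))),
        ("append", "xs", "xs"))
print(generalise(goal))
```

A real implementation would additionally filter candidates, e.g. by type-checking them and testing them with a counter-example finder before attempting a proof.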
# Multi-Output Regression Tree Construction with Cost-Complexity Pruning in Isabelle/ML:
In this project you will develop an Isabelle/ML library to construct regression
trees for multi-output problems using cost-complexity pruning [1].
After implementing cost-complexity pruning, you will conduct a cross-validation
on a dataset provided by the lecturer. Finally, you will compare the prediction
accuracy of your implementation against regression trees grown to fixed
heights.
Note that this project mostly focuses on ML programming in Isabelle rather than
interactive theorem proving. The dataset provided by the lecturer is used for
proof method recommendation in Isabelle/HOL. For more details, see our paper on
proof method recommendation [2].
[1] https://www-bcf.usc.edu/~gareth/ISL/
[2] https://doi.org/10.1145/3238147.3238210 (The pre-print is available at arXiv.)
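The core of the project is the weakest-link pruning step: repeatedly collapse the internal node whose removal costs the least error per leaf saved. The following is a minimal sketch in Python (not Isabelle/ML) of a multi-output regression tree with that pruning step, following the textbook algorithm [1]; the splitting heuristic and all parameter choices are illustrative assumptions.

```python
# Sketch of a multi-output regression tree + weakest-link pruning (illustrative).
import numpy as np

class Node:
    def __init__(self, y):
        self.pred = y.mean(axis=0)                 # mean vector over outputs
        self.sse = ((y - self.pred) ** 2).sum()    # R(node) if made a leaf
        self.left = self.right = None
        self.feature = self.threshold = None

def grow(X, y, min_leaf=2):
    node = Node(y)
    if len(y) < 2 * min_leaf:
        return node
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            m = X[:, j] <= t
            if min_leaf <= m.sum() <= len(y) - min_leaf:
                sse = (((y[m] - y[m].mean(axis=0)) ** 2).sum()
                       + ((y[~m] - y[~m].mean(axis=0)) ** 2).sum())
                if best is None or sse < best[0]:
                    best = (sse, j, t, m)
    if best is None or best[0] >= node.sse:        # no improving split
        return node
    _, node.feature, node.threshold, m = best
    node.left = grow(X[m], y[m], min_leaf)
    node.right = grow(X[~m], y[~m], min_leaf)
    return node

def leaves_and_cost(node):
    """Leaf count and total SSE of the subtree rooted at `node`."""
    if node.left is None:
        return 1, node.sse
    ll, lc = leaves_and_cost(node.left)
    rl, rc = leaves_and_cost(node.right)
    return ll + rl, lc + rc

def prune_weakest(root):
    """Collapse the internal node minimising
    alpha = (R(node) - R(subtree)) / (|leaves of subtree| - 1)."""
    best = [None, np.inf]
    def visit(n):
        if n.left is None:
            return
        k, cost = leaves_and_cost(n)
        alpha = (n.sse - cost) / (k - 1)
        if alpha < best[1]:
            best[0], best[1] = n, alpha
        visit(n.left); visit(n.right)
    visit(root)
    if best[0] is not None:
        best[0].left = best[0].right = None        # turn it into a leaf
    return best[1]                                 # alpha at which it pruned
```

Calling `prune_weakest` until the root becomes a leaf yields the nested sequence of subtrees among which cross-validation selects, which is exactly the comparison against fixed-height trees that the project asks for.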
# Random Forests in Isabelle/ML:
In this project you will develop an Isabelle/ML library that constructs random
forests for multi-output regression problems [1].
After implementing the random forest construction algorithm, you will conduct a
cross-validation on a dataset provided by the lecturer. Finally, you will
compare the prediction accuracy of your implementation against regression
trees grown to fixed heights.
Note that this project mostly focuses on ML programming in Isabelle rather than
interactive theorem proving. The dataset provided by the lecturer is used for
proof method recommendation in Isabelle/HOL. For more details, see our paper on
proof method recommendation [2].
[1] https://www-bcf.usc.edu/~gareth/ISL/
[2] https://doi.org/10.1145/3238147.3238210 (The pre-print is available at arXiv.)
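The two randomisations that define random forests are bootstrap resampling of the training data and, at each split, restricting the search to a random subset of features; predictions are averaged over trees [1]. The following is a minimal sketch in Python (not Isabelle/ML); tree depth, forest size, and feature-subset size are illustrative assumptions.

```python
# Sketch of a random forest for multi-output regression (illustrative).
import numpy as np

def grow(X, y, rng, depth=3, min_leaf=2):
    """Grow one tree; each split considers only a random feature subset."""
    pred = y.mean(axis=0)
    if depth == 0 or len(y) < 2 * min_leaf:
        return ("leaf", pred)
    feats = rng.choice(X.shape[1], max(1, X.shape[1] // 2), replace=False)
    best = None
    for j in feats:
        for t in np.unique(X[:, j])[:-1]:
            m = X[:, j] <= t
            if min_leaf <= m.sum() <= len(y) - min_leaf:
                sse = (((y[m] - y[m].mean(axis=0)) ** 2).sum()
                       + ((y[~m] - y[~m].mean(axis=0)) ** 2).sum())
                if best is None or sse < best[0]:
                    best = (sse, j, t, m)
    if best is None:
        return ("leaf", pred)
    _, j, t, m = best
    return ("split", j, t,
            grow(X[m], y[m], rng, depth - 1, min_leaf),
            grow(X[~m], y[~m], rng, depth - 1, min_leaf))

def predict_tree(tree, x):
    while tree[0] == "split":
        _, j, t, left, right = tree
        tree = left if x[j] <= t else right
    return tree[1]

def fit_forest(X, y, n_trees=25, seed=0):
    rng = np.random.default_rng(seed)
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))   # bootstrap sample
        forest.append(grow(X[idx], y[idx], rng))
    return forest

def predict_forest(forest, x):
    """Average the per-tree prediction vectors."""
    return np.mean([predict_tree(t, x) for t in forest], axis=0)
```

The immutable tuple representation of trees used here is deliberately close in spirit to how such a structure would be written as an ML datatype, which should ease porting the sketch to Isabelle/ML.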
# A Lightweight Refutation Tool based on Proof Methods in Isabelle/HOL:
In this project you will build lightweight refutation commands in Isabelle/HOL.
Isabelle/HOL comes with multiple counter-example finders, such as Quickcheck and
Nitpick. Your challenge is to develop a new refutation tool that exploits
existing proof methods. Given a proof goal, your refutation tool negates the
original goal and applies proof methods to the negation. Your tool concludes
that the original goal is refutable when a proof method discharges the negated
goal.
Unlike Quickcheck or Nitpick, your tool will not rely on code-generation or
external tools but will be based on proof methods that are native to
Isabelle/HOL.
Note that this project mostly focuses on ML programming in Isabelle rather than
interactive theorem proving.
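The control flow described above can be sketched as follows, in Python rather than Isabelle/ML. The "proof method" here is a stand-in (brute-force evaluation of boolean goals), not one of Isabelle's; only the negate-then-prove loop is the point of the example.

```python
# Sketch of refutation by proving the negated goal (illustrative).
from itertools import product

def prove_by_enumeration(goal, variables):
    """Toy 'proof method': the goal holds iff it is true under every
    assignment to its boolean variables."""
    return all(goal(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

def refute(goal, variables, methods):
    """Negate `goal` and try each proof method on the negation; if one
    discharges it, the original goal is refutable."""
    negated = lambda env: not goal(env)
    for method in methods:
        if method(negated, variables):
            return True    # negation proved => original goal refuted
    return False           # no verdict: the goal may be valid, or just hard

# x and (not x) is false everywhere, so its negation is provable: refuted.
print(refute(lambda e: e["x"] and not e["x"], ["x"], [prove_by_enumeration]))
# x or (not x) is valid, so its negation cannot be proved: no refutation.
print(refute(lambda e: e["x"] or not e["x"], ["x"], [prove_by_enumeration]))
```

Note the asymmetry the sketch makes explicit: a `False` result is not a proof of the original goal, only the absence of a refutation, which is the same caveat that applies to Quickcheck and Nitpick.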