FISP Kick-Off 2016   

Abstracts

Andrea Aler Tubella Decomposition and cycles: isolating two complexity-generating mechanisms
We introduce a new methodology whereby, by looking inside atoms, we are able to represent a wide variety of proof systems in such a way that every rule is an instance of a single, regular, linear rule scheme. We show the generality of this approach by demonstrating how it can be applied to several systems of very different expressivity. We can use this characterisation of proof systems to obtain general local reduction rules for proofs. In this way, we obtain rewriting systems that allow us to decompose proofs so as to isolate two different compression mechanisms: cuts and contractions. Furthermore, this decomposition procedure helps shed light on an intriguing open problem: whether certain cycles that can be observed in classical logic proofs contribute to proof compression or not.
Juan P. Aguilera tbc.
We consider models of bounded rationality based on the maximization of certain binary relations that strengthen partial orders, and we axiomatize them.
Federico Aschieri On Gödel and Dummett Logics: Natural Deduction, Curry-Howard, Parallelism and Witness Extraction
The talk is about what I presented at CL&C, as well as my current work on developing analytic natural deduction and models of parallelism for intermediate logics.
Martin Avanzini Complexity Analysis by Polymorphic Sized Type Inference and Constraint Solving
A well-established approach to verifying termination properties of higher-order functional programs is based on sized types, and has been shown to be quite robust and amenable to automation. In sized types, a type carries not only information about the kind of each object, but also about its size, hence the name. This information is then exploited when requiring that recursive calls are done on arguments of strictly smaller size. Estimating the size of intermediate results is also a crucial aspect of complexity analysis. However, up to now, the only attempt at using sized types for complexity analysis is due to Vasconcelos, and is confined to space complexity. The sized type system, by itself, does not guarantee any complexity-theoretic property of the typed program, except that the size of the output is bounded by a certain function of the size of the input. Complexity analysis of a program P can, however, be seen as a size analysis of another program tick(P), which computes not only P, but also its complexity. In this talk, I will introduce sized types for programs, in the form of simply typed applicative term rewrite systems, together with the above-mentioned ticking transformation. I will show that the type inference problem is (relatively) decidable by giving an algorithm which, given an applicative term rewrite system, outputs a candidate type for the program together with a set of integer index inequalities which need to be checked for satisfiability. These index inequalities can be understood as a first-order TRS, where checking for satisfiability amounts to finding a quasi-model of the given TRS.
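To give a rough idea of the two ingredients, here is a minimal sketch in Haskell; the names and the cost accounting are illustrative assumptions, not the applicative term rewriting formalism of the talk. A sized type refines an ordinary type with a size index, and the ticking transformation instruments a program so that it also computes its own cost, reducing complexity analysis to size analysis of the ticked program.

    -- Illustrative sketch only: sized types are indicated in comments, since
    -- plain Haskell has no size indices. A sized signature for append would be
    --   append : List_i a -> List_j a -> List_{i+j} a
    -- bounding the size of the output in terms of the sizes of the inputs.
    append :: [a] -> [a] -> [a]
    append []     ys = ys
    append (x:xs) ys = x : append xs ys

    -- A (hypothetical) ticked version computes the result together with a
    -- tick count; size analysis of the second component then yields the bound
    -- cost <= length xs + 1, i.e. a complexity bound for append itself.
    appendTick :: [a] -> [a] -> ([a], Int)
    appendTick []     ys = (ys, 1)
    appendTick (x:xs) ys = let (zs, c) = appendTick xs ys in (x : zs, c + 1)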
Matthias Baaz Unsound inferences make proofs shorter
We give examples of calculi that extend Gentzen's sequent calculus LK in such a way that (i) derivations lead only to true sequents, and (ii) cut-free proofs therein are non-elementarily shorter than LK proofs. This is joint work with Juan P. Aguilera.
Stéphane Gimenez Trying to analyze the complexity of strong reductions
We explain how to analyze the time and space requirements of interaction-net programs for both sequential and parallel reductions. This methodology relies on potentials which depend on size and schedule variables that can be collected through type unification. We would like to extend the results which we obtained for higher-order programs in a restricted "weak" setting to more general reductions that offer more opportunities for parallelization. While size analysis (and most likely schedule analysis) can be performed inside a calculus-of-structures framework, the simple potential approach is not sufficient to deal with stronger reductions and needs to be refined.
Alessio Guglielmi Making several computation/normalisation mechanisms coexist
As is well known, we can interpret computationally any proof compression mechanism by unfolding, i.e., normalising, what is compressed. Normalisation generates proof complexity, and one can hope that if the proof language is expressive and well designed, then useful notions of computation can be observed. If we look closely, we see different sources of proof complexity in the normalisation of predicate logic proofs: one is producing witnesses in Herbrand expansions by unpacking the contractive behaviour of quantifiers, another is unfolding propositional contractions. However, the elimination of cuts per se does not produce complexity, as is witnessed by cut elimination in propositional multiplicative linear logic. In Gentzen proof systems, these different mechanisms - cut on the one hand, propositional contraction and quantification on the other - cannot be separated and individually controlled. This is a pity, and indeed one could argue that the ability to do so might be important for computational interpretations, and not just in classical logic, but in every (well designed) logic. Therefore a natural question is: would it be possible to obtain a good normalisation theory where the sources of complexity can be separated, i.e., different normalisation mechanisms can coexist independently? In this talk, I will report on ongoing research with Andrea Aler Tubella and Benjamin Ralph and will state several results that, taken together, give a positive answer to that question, if we are willing to move beyond Gentzen theory and liberalise the way we compose proofs. As a matter of fact, we are on our way to integrating into a modular normalisation scheme a further computational mechanism, associated with compression by substitution. I will also show that at the heart of these normalisation mechanisms there is a surprisingly simple inference shape, and I will argue that understanding its dynamics is all it takes to understand most of normalisation across all proof systems for all logics.
Roman Kuznets Interpolation beyond sequent calculi: modal, intuitionistic, and intermediate logics
We will survey the recently developed method of proving Craig and Lyndon interpolation using analytic generalizations of sequent calculi, such as hypersequent, grafted hypersequent, nested sequent, linear nested sequent, and labelled sequent calculi. We discuss the pros and cons of the method, as well as the area of its applicability. The talk will include new, unpublished results on using the method for intermediate logics.
Anela Lolic Extraction of expansion trees using CERES
We define a new method for proof mining by CERES that is concerned with the extraction of expansion trees. In the original CERES method expansion trees can be extracted from proofs in normal form (proofs without quantified cuts) as a post-processing of cut-elimination. More precisely they are extracted from an ACNF, a proof with at most atomic cuts. We define a novel method avoiding proof normalization and show that expansion trees can be extracted from the resolution refutation and the corresponding proof projections. We prove that the new method asymptotically outperforms the standard method (which first computes the ACNF and then extracts the expansion tree). This is joint work with Alex Leitsch.
George Metcalfe Proof and Order
I will speak about the structure and interpretation of proofs for ordered groups, emphasizing applications of proof theory in algebra.
Sonia Marin Comparing BOX and ! via polarities
In 1994, Martini and Masini exhibited a translation of the modal logic S4 into full propositional linear logic LL, using the exponentials ! and ? in correspondence with the modalities BOX and DIAMOND respectively. The goal of this work is to extend the comparison of the behaviour of the exponentials and the modalities by studying the way they can be polarised. Indeed, following the study of their focusing behaviour, one can differentiate two classes of linear logic formulas, the positive and the negative ones; in particular, ! is traditionally considered positive and ? negative. On the other hand, in recent studies of the focusing behaviour of classical modal logic proofs, the polarities of the modalities have been distinguished as positive for DIAMOND and negative for BOX, which no longer matches the polarities of the linear logic exponentials.
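For orientation, the modal clauses of the translation can be written schematically in the notation of this abstract (the propositional clauses are omitted, and the notation is only indicative):

    (BOX A)*     = ! (A*)
    (DIAMOND A)* = ? (A*)

The mismatch discussed above is then that the positive exponential ! interprets the negative modality BOX, while the negative exponential ? interprets the positive modality DIAMOND.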
Kenji Miyamoto Provably correct (co)recursive programs through extraction
We are going to talk about program extraction based on Kreisel's modified realizability which supports (co)inductive proofs. Our main problem is how to find a computational solution of a given formula which we take as a problem specification. Program extraction is a translation from a proof into a program, which is provably a computational solution of what the input proof claims. We discuss our formal theory of program extraction, which is first-order minimal logic with partial continuous functionals and (co)inductive definitions, and give a demonstration of the proof assistant Minlog which is an implementation of our theory. We show some examples including extracted (co)recursive programs and their (automatically generated) correctness proofs w.r.t. the problem specification formulas.
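As a schematic illustration of what extraction yields (plain Haskell rather than Minlog syntax, with illustrative names and specifications), a constructive proof of a specification determines a program realising it, and a coinductive proof determines a corecursive one:

    -- From a proof of  forall n . exists m . m = n + n  one would extract the
    -- witnessing function below; the proof itself becomes the evidence that
    -- double n = n + n holds for every n.
    double :: Integer -> Integer
    double n = n + n

    -- From a coinductive proof that every number starts an infinite increasing
    -- stream one would extract a corecursive program:
    data Stream a = Cons a (Stream a)

    nats :: Integer -> Stream Integer
    nats n = Cons n (nats (n + 1))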
Georg Moser The Embedding Lemma Revisited
We provide the first correct proof of the embedding lemma, which embeds the standard axiomatisation of first-order logic into Hilbert's epsilon calculus. This is joint work with Matthias Baaz and Michel Parigot.
David Obwaller Interaction Automata and the ia2d Interpreter
We introduce interaction automata as a topological model of computation and present the conceptual plane interpreter ia2d. Interaction automata form a refinement of both interaction nets and cellular automata models that combines data deployment, memory management and structured computation mechanisms. Their local structure is inspired by pointer machines and allows an asynchronous spatial distribution of the computation. Our tool can be considered as a proof-of-concept piece of abstract hardware on which functional programs can be run in parallel. This is joint work with Stéphane Gimenez.
Lutz Straßburger Proof compression and combinatorial flows
In this talk I will present a notion of proof invariant for classical logic that is independent of the syntactic formalism, that has a polynomial correctness criterion, and that is able to handle proof compression mechanisms like cut, substitution, and extension.