Complexity Competition

This page provides general information on the complexity category. (Potential) tool authors may want to go directly to the rules page; aficionados may be interested in the techniques page.

Introduction

Automatically determining upper and lower bounds on the complexity of term rewrite systems (TRSs for short) is a challenging topic. Here the complexity of a TRS refers to the maximal length of its derivations, measured as a function of the size of the initial term. Depending on the rewrite strategy and on the restrictions imposed on the set of allowed starting terms, various notions of complexity exist. The current focus of the competition is on establishing polynomial upper bounds on the complexity of TRSs; eventually the competition will be extended to lower bounds.
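
For concreteness, these notions can be made precise as follows (a standard formulation; notation varies slightly across the literature). Writing dh(t) for the derivation height of a terminating term t, the derivational complexity function dc_R of a TRS R is

\[
\mathrm{dh}(t) = \max\{\, n \mid t \to_R^{n} u \text{ for some term } u \,\}, \qquad
\mathrm{dc}_R(n) = \max\{\, \mathrm{dh}(t) \mid |t| \le n \,\},
\]

where |t| denotes the size of t.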

Overview of the Complexity Category

Currently, depending on the considered set of starting terms and the rewrite strategy, the competition is divided into four categories:

                      Derivational Complexity   Runtime Complexity
Full Rewriting        DC                        RC
Innermost Rewriting   iDC                       iRC

Derivational complexity and runtime complexity differ in the set of starting terms: for derivational complexity, no restriction is put on the considered starting terms, whereas for runtime complexity only basic terms (i.e., terms whose root is a defined symbol and whose arguments are built from constructor symbols and variables) are taken into account. See for example (Hirokawa, Moser, 2008) and (Moser, 2009) for further reading on the notion of runtime complexity.
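
As a small illustrative example (not drawn from the competition database), consider the TRS consisting of the two rules

\[
d(0) \to 0, \qquad d(s(x)) \to s(s(d(x))).
\]

Here d is the only defined symbol, so the relevant basic terms have the form d(s^n(0)), and each of them normalizes in n + 1 steps: the runtime complexity of this system is linear. A nested starting term such as d^k(s(0)) is not basic and admits a derivation whose length is exponential in the size of the term, so the derivational complexity of the very same system is exponential.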

Furthermore, we distinguish between the complexity induced by full rewriting and the complexity induced by innermost rewriting.
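
The strategy can make a similar difference. As a simple illustrative example, consider the TRS with the rules

\[
g(x) \to c(x, x), \qquad a \to b.
\]

In an innermost derivation the argument of g is in normal form before the rule is applied, so no redex is ever duplicated and every innermost derivation from a term t has at most |t| steps; the innermost derivational complexity is linear. Under full rewriting, the redex a in g^k(a) can be copied before it is contracted, which yields derivations of length 2^{k+1} - 1, so the derivational complexity of this system is exponential.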

Overview of the Competition

The complexity category has been part of the termination competition since 2008. Since 2014 the competition has been hosted on StarExec; detailed results of current and previous competitions can be found on the Termination Portal or here.

Participate

If you want to participate in the competition, check the Termination-Portal.

Participating Teams of Past and Upcoming Competitions

Here you can find a list (sorted by tool name) of the participants of past competitions and the expected participants of the upcoming competition.

Tool       Internal Page    2008  2009  2010  2011  2012  2013  2014  2015
AProVE     Tools:AProVE     ---   ---   x     x     x     x     x     x
CaT        Tools:CaT        x     x     x     x     x     x     x     ---
Matchbox   Tools:Matchbox   ---   x     x     x     ---   ---   ---   x
Oops       Tools:Oops       ---   ---   x     ---   ---   ---   ---   ---
TCT        Tools:TCT        x     x     x     x     x     x     x     x

An overview of the proof techniques applied by the participants in past competitions can be found here.

For detailed information on the test suites employed and the actual results of the tools, see the Termination-Portal.

Overview of the Competition Rules

Problems are given in the new TPDB format (see the Termination-Portal) and are selected randomly from the newest version of the Termination Problem Database. In the selection process, emphasis is put on problems that could not be automatically verified in past competitions; cf. the problem selection page for details.

Tools are ranked not only by the number of successful runs; the scoring also takes the precision of the obtained bounds into account. See the scoring page for further details.

Unfortunately, the database is currently somewhat biased towards termination problems. We encourage everybody to submit new problems to the TPDB that are interesting from a complexity-related perspective. (When submitting, please indicate that these examples should go into the complexity benchmark.)