This page provides the rules for the complexity competition.
Problems are given in the new TPDB format (cf. Termination-Portal), where the XML element problem has the type complexity. Further, depending on the category DC, iDC, RC and iRC, the attributes strategy and startterm are set to FULL/INNERMOST and full/constructor-based, respectively.
The output format is adapted so that additional information on the asymptotic complexity is given for lower as well as upper bounds. Hence the output written to the first line of STDOUT shall be a complexity statement according to the following grammar:
S -> NO | MAYBE | YES(F, F) | YES(?, F) | YES(F, ?)
F -> O(1) | O(n^Nat) | POLY
where "Nat" is a non-zero natural number and YES(F1, F2) means F2 is upper bound and that F1 is a lower-bound. "O(n^k)" is the usual big-Oh notation and "POLY" indicates an unspecified polynomial. Either of the functions F1, F2 (but not both) may be replaced by ``don't know, indicated by ?. Any remaining output on STDOUT will be considered as proof output and has to follow the normal rules for the competition.
In all categories relative rewriting rules are supported. While the semantics of relative rewriting is clear for full rewriting, there is a choice of how to define relative rewriting in the innermost case: in order to innermost rewrite R modulo S, we employ innermost rewriting with respect to the union of R and S.
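To make this choice concrete, consider the hypothetical rule sets R = {f(a) -> b} and S = {a -> c} (chosen only for illustration; they are not part of any testbed). Innermost rewriting of R modulo S is innermost rewriting with respect to R ∪ S, so the only innermost step from the term f(a) is f(a) -> f(c) using the S-rule; the root step f(a) -> b with the R-rule is not an innermost step, since the proper subterm a is still reducible.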
Currently we focus on (polynomial) upper bounds. As the output format indicates, this restriction should be lifted later (see below). In order to take into account the quality of the upper bounds provided by the different tools, we propose the following scoring algorithm, where we suppose that the number of competitors is x.
Firstly, for each TRS the competing tools are ranked, where constant complexity, i.e., the output "YES(?,O(1))", is best and "MAYBE", "NO" or a time-out is worst. As long as the output is of the form "YES(?,O(n^k))" or "YES(?,POLY)", the rank of the tool determines the number of points: the best tool gets x+1 points, the second gets x points, and so on. A negative output ("MAYBE", "NO" or a time-out), on the other hand, gets 0 points. If two or more tools obtain the same rank, the ranks of the remaining tools are adapted in the usual way.
Secondly, the resulting points for all considered systems are summed up and the contestant with the highest number of points wins. If this does not establish a winner, the total number of wins is counted. If this still does not produce a winner, we give up and declare two (or more) winners.
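The per-TRS ranking step can be sketched as follows in Python; the function names, the encoding of bounds as strings, and the treatment of ties as shared ranks are our own reading of the rules, not an official implementation:

```python
NEGATIVE = None  # stands for "MAYBE", "NO" or a time-out


def bound_key(upper_bound):
    """Order upper bounds: O(1) < O(n^1) < O(n^2) < ... < POLY."""
    if upper_bound == "O(1)":
        return 0
    if upper_bound == "POLY":
        return float("inf")
    return int(upper_bound[len("O(n^"):-1])  # e.g. "O(n^2)" -> 2


def score_trs(outputs):
    """outputs maps each tool to an upper bound string or NEGATIVE.

    Returns a map from tool to points: the best tool gets x+1 points,
    the second x points, and so on; negative outputs get 0 points.
    """
    x = len(outputs)  # number of competitors
    ranked = sorted(
        (tool for tool, bound in outputs.items() if bound is not NEGATIVE),
        key=lambda tool: bound_key(outputs[tool]),
    )
    points = {tool: 0 for tool in outputs}
    position = 0
    while position < len(ranked):
        # Tools proving the same bound share a rank; later ranks are skipped.
        current = bound_key(outputs[ranked[position]])
        group = [t for t in ranked[position:] if bound_key(outputs[t]) == current]
        for tool in group:
            points[tool] = x + 1 - position
        position += len(group)
    return points


# Three tools on a single TRS:
print(score_trs({"A": "O(n^1)", "B": "O(n^2)", "C": NEGATIVE}))
# {'A': 4, 'B': 3, 'C': 0}
```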
All authors of complexity tools that participated in the last complexity competition agreed upon the following selection of examples from the TPDB. For simplicity, we decided to use the same testbed for full and innermost rewriting. However, we decided on two separate testbeds for derivational and runtime complexity analysis. The testbeds are described below in detail; additionally, a list of problems for TPDB version 8.0 is provided for RC and DC. For the individual subcategories RC, RCi, DC and DCi, all strategy and start-term annotations should be overwritten appropriately. Thus we simply ignore, for instance, context-sensitive strategies.
For derivational complexity analysis we restrict the TPDB as follows:
For runtime complexity analysis, we further restrict the testbed for derivational complexity analysis as follows:
On the testbeds described above, the randomly chosen subset that is actually used in the competition is determined as follows. Here we denote by select the function that relates each family from the TPDB to the number of randomly chosen examples within this family, as defined for the termination competition. The idea is to make select aware of the different difficulties of proving complexity bounds. We do so by
In the following test cases we restrict to full rewriting.
test cases - derivational complexity
R = {a(b(x)) -> b(a(x))} expected output "YES(?,O(n^2))" or "YES(O(n^1),O(n^2))" or "YES(O(n^2),O(n^2))"
R = {a(a(x)) -> b(c(x)), b(b(x)) -> a(c(x)), c(c(x)) -> a(b(x))} expected output "YES(O(n^2),?)" or "YES(?,?)"
R = {+(s(x),+(y,z)) -> +(x,+(s(s(y)),z)), +(s(x),+(y,+(z,w))) -> +(x,+(z,+(y,w)))} expected output "YES(?,?)"
test cases - runtime complexity
R = {a(b(x)) -> b(b(a(x)))} expected output "YES(?,O(n^1))" or "YES(O(n^1),O(n^1))"
R = {plus(0,y) -> y, plus(s(x),y) -> s(plus(x,y)), mul(0,y) -> 0, mul(s(x),y) -> plus(mul(x,y),y)} expected output "YES(?,O(n^2))" or "YES(O(n^1),O(n^2))" or "YES(O(n^2),O(n^2))"
R = {f(x,0) -> s(0), f(s(x),s(y)) -> s(f(x,y)), g(0,x) -> g(f(x,x),x)} expected output "YES(?,O(n^1))" or "YES(O(n^1),O(n^1))"
R = {f(0) -> c, f(s(x)) -> c(f(x),f(x))} expected output "YES(?,?)"
In the following test cases we restrict to innermost rewriting.
test cases - derivational complexity
R = {f(x) -> c(x,x)} expected output "YES(O(n^1),O(n^1))" or "YES(?,O(n^1))"
test cases - runtime complexity
R = {f(x) -> c(x,x), g(0) -> 0, g(s(x)) -> f(g(x))} expected output "YES(O(n^1),O(n^1))" or "YES(?,O(n^1))"