YES

Problem:
 f(f(a())) -> f(g(n__f(a())))
 f(X) -> n__f(X)
 activate(n__f(X)) -> f(X)
 activate(X) -> X

Proof:
 DP Processor:
  DPs:
   f#(f(a())) -> f#(g(n__f(a())))
   activate#(n__f(X)) -> f#(X)
  TRS:
   f(f(a())) -> f(g(n__f(a())))
   f(X) -> n__f(X)
   activate(n__f(X)) -> f(X)
   activate(X) -> X
  Arctic Interpretation Processor:
   dimension: 1
   interpretation:
    [activate#](x0) = 4x0 + -16,
    [f#](x0) = x0 + -16,
    [activate](x0) = 4x0 + 0,
    [g](x0) = -5x0 + 4,
    [n__f](x0) = -3x0 + 0,
    [f](x0) = x0 + 2,
    [a] = 5
   orientation:
    f#(f(a())) = 5 >= 4 = f#(g(n__f(a())))
    activate#(n__f(X)) = 1X + 4 >= X + -16 = f#(X)
    f(f(a())) = 5 >= 4 = f(g(n__f(a())))
    f(X) = X + 2 >= -3X + 0 = n__f(X)
    activate(n__f(X)) = 1X + 4 >= X + 2 = f(X)
    activate(X) = 4X + 0 >= X = X
   problem:
    DPs:
    TRS:
     f(f(a())) -> f(g(n__f(a())))
     f(X) -> n__f(X)
     activate(n__f(X)) -> f(X)
     activate(X) -> X
   Qed
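
Note (not part of the prover output): the orientation step above is plain arctic (max-plus) arithmetic, where an expression like "4x0 + -16" denotes max(4 + x0, -16). The sketch below re-evaluates the interpretation and spot-checks the listed inequalities; the helper names (arc_add, arc_mul, interp) are illustrative, not taken from any prover.

 # Minimal sketch of the arctic (max-plus) semiring: "addition" is max,
 # "multiplication" is ordinary +, with -infinity as the semiring zero.
 NEG_INF = float("-inf")

 def arc_add(x, y):
     return max(x, y)

 def arc_mul(x, y):
     return NEG_INF if NEG_INF in (x, y) else x + y

 # An interpretation [h](x0) = c*x0 + d reads as max(c + x0, d).
 def interp(c, d):
     return lambda x: arc_add(arc_mul(c, x), d)

 a         = 5                 # [a] = 5
 f         = interp(0, 2)      # [f](x0)         = x0 + 2
 n__f      = interp(-3, 0)     # [n__f](x0)      = -3x0 + 0
 g         = interp(-5, 4)     # [g](x0)         = -5x0 + 4
 activate  = interp(4, 0)      # [activate](x0)  = 4x0 + 0
 f_sharp   = interp(0, -16)    # [f#](x0)        = x0 + -16
 act_sharp = interp(4, -16)    # [activate#](x0) = 4x0 + -16

 # Ground rules: both sides evaluate to the constants shown in the proof.
 assert f_sharp(f(a)) == 5 and f_sharp(g(n__f(a))) == 4   # 5 >= 4
 assert f(f(a)) == 5 and f(g(n__f(a))) == 4               # 5 >= 4

 # Rules with a variable X: spot-check the inequalities on sample values.
 for X in range(-20, 21):
     assert act_sharp(n__f(X)) >= f_sharp(X)   # 1X + 4 >= X + -16
     assert f(X) >= n__f(X)                    # X + 2  >= -3X + 0
     assert activate(n__f(X)) >= f(X)          # 1X + 4 >= X + 2
     assert activate(X) >= X                   # 4X + 0 >= X
 print("all orientation checks passed")

Since every dependency pair is strictly oriented, the DP problem after the arctic interpretation processor has an empty set of DPs, which is why the final step closes with Qed.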