YES

Problem:
 f(n__f(n__a())) -> f(n__g(f(n__a())))
 f(X) -> n__f(X)
 a() -> n__a()
 g(X) -> n__g(X)
 activate(n__f(X)) -> f(X)
 activate(n__a()) -> a()
 activate(n__g(X)) -> g(X)
 activate(X) -> X

Proof:
 DP Processor:
  DPs:
   f#(n__f(n__a())) -> f#(n__a())
   f#(n__f(n__a())) -> f#(n__g(f(n__a())))
   activate#(n__f(X)) -> f#(X)
   activate#(n__a()) -> a#()
   activate#(n__g(X)) -> g#(X)
  TRS:
   f(n__f(n__a())) -> f(n__g(f(n__a())))
   f(X) -> n__f(X)
   a() -> n__a()
   g(X) -> n__g(X)
   activate(n__f(X)) -> f(X)
   activate(n__a()) -> a()
   activate(n__g(X)) -> g(X)
   activate(X) -> X
  Arctic Interpretation Processor:
   dimension: 1
   interpretation:
    [activate#](x0) = x0 + 4,
    [g#](x0) = 1,
    [a#] = 0,
    [f#](x0) = x0 + 0,
    [activate](x0) = 2x0 + 13,
    [g](x0) = 8,
    [a] = 2,
    [n__g](x0) = 0,
    [f](x0) = 2x0 + 0,
    [n__f](x0) = 1x0,
    [n__a] = 0
   orientation:
    f#(n__f(n__a())) = 1 >= 0 = f#(n__a())
    f#(n__f(n__a())) = 1 >= 0 = f#(n__g(f(n__a())))
    activate#(n__f(X)) = 1X + 4 >= X + 0 = f#(X)
    activate#(n__a()) = 4 >= 0 = a#()
    activate#(n__g(X)) = 4 >= 1 = g#(X)
    f(n__f(n__a())) = 3 >= 2 = f(n__g(f(n__a())))
    f(X) = 2X + 0 >= 1X = n__f(X)
    a() = 2 >= 0 = n__a()
    g(X) = 8 >= 0 = n__g(X)
    activate(n__f(X)) = 3X + 13 >= 2X + 0 = f(X)
    activate(n__a()) = 13 >= 2 = a()
    activate(n__g(X)) = 13 >= 8 = g(X)
    activate(X) = 2X + 13 >= X = X
   problem:
    DPs:
    TRS:
     f(n__f(n__a())) -> f(n__g(f(n__a())))
     f(X) -> n__f(X)
     a() -> n__a()
     g(X) -> n__g(X)
     activate(n__f(X)) -> f(X)
     activate(n__g(X)) -> g(X)
     activate(n__a()) -> a()
     activate(X) -> X
   Qed
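The orientation values in the certificate follow from evaluating the interpretations in the arctic (max-plus) semiring, where semiring addition is `max` and semiring multiplication is ordinary `+`; e.g. `[f](x0) = 2x0 + 0` means `max(2 + x0, 0)`. The sketch below is not part of the certificate, just a hedged illustration that re-evaluates a few of the orientation lines numerically; all function names mirror the symbols above but are my own encoding.

```python
# Arctic (max-plus) semiring: semiring "+" is max, semiring "*" is ordinary addition.
NEG_INF = float("-inf")  # the arctic zero element

def a_plus(*xs):
    """Arctic addition: the maximum of the arguments."""
    return max(xs)

def a_times(x, y):
    """Arctic multiplication: ordinary addition, absorbing on -inf."""
    return NEG_INF if NEG_INF in (x, y) else x + y

# 1-dimensional interpretations copied from the certificate.
n__a = 0                                            # [n__a] = 0
def n__f(x): return a_times(1, x)                   # [n__f](x0) = 1x0
def n__g(x): return 0                               # [n__g](x0) = 0
def f(x): return a_plus(a_times(2, x), 0)           # [f](x0) = 2x0 + 0
def f_sharp(x): return a_plus(x, 0)                 # [f#](x0) = x0 + 0
def activate(x): return a_plus(a_times(2, x), 13)   # [activate](x0) = 2x0 + 13

# Re-check some orientation lines:
assert f_sharp(n__f(n__a)) == 1          # f#(n__f(n__a())) = 1
assert f_sharp(n__g(f(n__a))) == 0       # f#(n__g(f(n__a()))) = 0, so 1 >= 0 holds
assert f(n__f(n__a)) == 3                # f(n__f(n__a())) = 3
assert f(n__g(f(n__a))) == 2             # f(n__g(f(n__a()))) = 2, so 3 >= 2 holds
assert activate(n__a) == 13              # activate(n__a()) = 13 >= 2 = a()
```

With this weighting the two dependency pairs rooted at `f#` are strictly decreasing (1 > 0), which is what lets the processor remove all DPs and close the proof.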