
Lecture 15: Bounded Minimization and Ackermann's Function - CSCI 381 Goldberg

Homework Assignments:

(1) Ackermann’s Function Computation Exercise

  • Attempt to compute A(4, 4) by hand using pencil and paper
  • Stop when either: your hand gets tired, or you fill up a sheet of paper without completing the computation
  • Not for submission, but strongly encouraged for understanding recursion and the growth rates of functions

(2) Philosophical Analysis of Ackermann’s Function

  • Reflect on how Ackermann’s function differs philosophically from primitive recursive functions
  • Consider what makes it fundamentally different, beyond just the mathematical fact that it computes something different
  • This distinction is significant to understanding the limitations of primitive recursion

Next Class - Possible Schedule Change:

  • Next Wednesday’s class may need to be conducted offline via pre-recorded media
  • Professor may be attending a conference (confirmation expected by Monday)
  • If offline: topic will be on data structures the operating system uses for recursion (related to current topic)
  • Email announcement to follow if this change occurs

Davis (on behalf of Turing) wanted to provide a “reduced instruction set” (RISC) of mathematical functions from which all mathematically computable functions can be computed. He developed primitive recursion for this purpose, but then realized (at the end of today’s lecture) that primitive recursion does not suffice: something is missing (discussed next class).

In anticipation, he created a more general class of functions, termed a PRC (Primitive Recursive Class), which must contain all the primitive recursive functions and may, in addition, include other initial functions and/or other mathematical techniques.

Primitive Recursive:

  • Initial Functions: N, S, U
  • Techniques: Composition and Recursion

Davis goes through a number of highly utilized mathematical functions and proves them to be primitive recursive, to show the expressive power of primitive recursion. This leads up to Bounded Minimization (which is the goal of today’s lecture). Although he also proves other important primality functions to be primitive recursive, we do not need those as of now.

LE(x, y) = α(x ⊖ y)
  • Boolean returning 0 or 1
  • //PRIM#13
  • Composition alone
GT(x, y) = α(LE(x, y))
  • Boolean returning 0 or 1
  • //PRIM#14
  • Composition alone
LT(x, y) = GT(y, x) = GT(U_{2,2}(x, y), U_{1,2}(x, y))
  • Boolean returning 0 or 1
  • //PRIM#15
  • Composition alone
GE(x, y) = α(LT(x, y))
  • Boolean returning 0 or 1
  • //PRIM#16
  • Composition alone
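As a sanity check, these four predicates can be sketched in Python. This is only an illustration: `alpha` and `monus` (truncated subtraction ⊖) are from earlier lectures, and the Python names here are my own.

```python
def alpha(x):
    # alpha(0) = 1 and alpha(x) = 0 for x > 0: the "is it zero?" test
    return 1 if x == 0 else 0

def monus(x, y):
    # truncated subtraction x ⊖ y: ordinary subtraction, floored at 0
    return max(x - y, 0)

def LE(x, y):
    # x <= y: x ⊖ y is 0 exactly when x <= y, and alpha flips that 0 to 1
    return alpha(monus(x, y))

def GT(x, y):
    # x > y is the negation of x <= y
    return alpha(LE(x, y))

def LT(x, y):
    # x < y is GT with the arguments swapped (the projections U do the swap)
    return GT(y, x)

def GE(x, y):
    # x >= y is the negation of x < y
    return alpha(LT(x, y))
```

Each definition is built purely by composing already-proven primitive recursive pieces, mirroring the “Composition alone” notes above.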

Assuming P(x) and Q(x) are (primitive recursive) Boolean predicates:

NOT(P(x)) = α(P(x))
  • //PRIM#17
  • Composition alone
  • α tests “equal to zero”; since the output of P(x) is only 0 or 1, α(P(x)) is exactly NOT
AND(P(x), Q(x)) = P(x) × Q(x)
  • //PRIM#18
  • Composition alone
  • Since the outputs of P(x) and Q(x) are only 0 or 1
  • If P(x) or Q(x) outputs 0, then AND outputs 0; if both P(x) and Q(x) output 1, then AND outputs 1
OR(P(x), Q(x)) = NOT(AND(NOT(P(x)), NOT(Q(x))))
  • //PRIM#19
  • Composition alone
  • De Morgan’s Law in essence states that OR is NOT AND, and AND is NOT OR
  • Originally stated in terms of Union and Intersection of sets, but membership in a Union is OR and membership in an Intersection is AND
IF P(x1, …, xn) THEN F(x1, …, xn) ELSE G(x1, …, xn) = P(x1, …, xn) × F(x1, …, xn) + α(P(x1, …, xn)) × G(x1, …, xn)
  • //PRIM#20
  • Composition alone
  • P(x) is Boolean and therefore can only have outputs 0 or 1
  • If P(x) is TRUE (value 1), then α (NOT) will return 0; then, the final output of this formula will be 1 × F(x) + 0 × G(x) = F(x)
  • If P(x) is FALSE (value 0), then α (NOT) will return 1; then, the final output of this formula will be 0 × F(x) + 1 × G(x) = G(x)
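A quick Python sketch of the arithmetic encodings above (assuming the Boolean inputs are already 0/1 values; the function names are mine):

```python
def alpha(x):
    # NOT on a 0/1 Boolean: alpha(0) = 1, alpha(1) = 0
    return 1 if x == 0 else 0

def AND(p, q):
    # multiplication is AND on 0/1 values
    return p * q

def OR(p, q):
    # De Morgan: OR(p, q) = NOT(AND(NOT p, NOT q))
    return alpha(AND(alpha(p), alpha(q)))

def IF_THEN_ELSE(p, f, g):
    # p*f + alpha(p)*g selects f when p = 1 and g when p = 0
    return p * f + alpha(p) * g
```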
Σ[t=0, y] [F(t, x1, …, xn)]
  • “Running” Sum
  • //PRIM#21
  • Base case: Σ(0, x1, …, xn) = F(0, x1, …, xn)
  • Recursive case: Σ(y+1, x1, …, xn) = ADD(Σ(y, x1, …, xn), F(S(y), x1, …, xn))
  • Where F(S(y), x1, …, xn) = F(y+1, x1, …, xn)
  • Σ is the SUM function
Π[t=0, y] [F(t, x1, …, xn)]
  • “Running” Product
  • //PRIM#22
  • Base case: Π(0, x1, …, xn) = F(0, x1, …, xn)
  • Recursive case: Π(y+1, x1, …, xn) = MULT(Π(y, x1, …, xn), F(S(y), x1, …, xn))
  • Π is the PRODUCT function
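The two recursions can be transcribed almost literally. A sketch, with the convention (mine) that F takes t first and then the parameters x1, …, xn:

```python
def bounded_sum(F, y, *xs):
    # Sigma[t=0..y] F(t, xs), defined by primitive recursion on y
    if y == 0:
        return F(0, *xs)                           # base case
    return bounded_sum(F, y - 1, *xs) + F(y, *xs)  # ADD(Sigma(y-1, ...), F(S(y-1), ...))

def bounded_prod(F, y, *xs):
    # Pi[t=0..y] F(t, xs): same recursion, with MULT in place of ADD
    if y == 0:
        return F(0, *xs)                           # base case
    return bounded_prod(F, y - 1, *xs) * F(y, *xs)
```

For example, `bounded_sum(lambda t: t, 4)` computes 0 + 1 + 2 + 3 + 4 = 10, and `bounded_prod(lambda t: t + 1, 3)` computes 1 × 2 × 3 × 4 = 24.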

Existential Quantifiers on Boolean Predicates

Mathematicians use the upside-down A (∀) notationally for “FOR ALL”

∀[t ≤ y] [P(t, x1, …, xn)]
  • //PRIM#23
  • Is the running product of P() equal to 1?
  • Equivalently: EQ(Π[t=0, y] [P(t, x1, …, xn)], 1)
  • Is P() TRUE for ALL values in the range t ≤ y?

Mathematicians use the backwards E (∃) notationally for “THERE EXISTS”

∃[t ≤ y] [P(t, x1, …, xn)]
  • //PRIM#24
  • α(EQ(Σ[t=0, y] [P(t, x1, …, xn)], 0))
  • If P() is true for any such t, then the summation of P over all such t would not equal 0
  • If P() is false for all such t, then the summation of P over all such t would equal 0
  • Thus, for EXISTS to be TRUE, we don’t want the summation of P to equal 0
  • Is P() TRUE for some value(s) in the range t ≤ y?
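Both quantifiers reduce to the bounded product and bounded sum. A sketch (with P returning 0/1, and the names FORALL/EXISTS my own):

```python
def alpha(x):
    # alpha(0) = 1, alpha(x) = 0 for x > 0
    return 1 if x == 0 else 0

def FORALL(P, y, *xs):
    # TRUE (1) iff P(t, xs) = 1 for every t <= y: the running product is 1
    prod = 1
    for t in range(y + 1):
        prod *= P(t, *xs)
    return prod

def EXISTS(P, y, *xs):
    # alpha(EQ(Sigma, 0)): TRUE (1) iff the running sum of P over t <= y is nonzero
    total = sum(P(t, *xs) for t in range(y + 1))
    eq_zero = alpha(total)   # EQ(total, 0), since total >= 0
    return alpha(eq_zero)
```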
MIN[t ≤ y] [P(t, x1, …, xn)]
  • //PRIM#25
  • Composition alone

Σ[t=0, y] [Π[r=0, t] [α(P(r, x1, …, xn))]]

Which can be written compactly as: ΣΠα

Interesting Note: The compact formula ΣΠα is actually the namesake of Sigma Pi Alpha, a computer science honor society! This elegant triple combination of summation, product, and the NOT (alpha) function is fundamental to understanding primitive recursive functions.

What is the first time that P(x1, …, xn) is TRUE in the range t ≤ y?

The formula works by counting how many times the predicate is false before its first true value. Since we start from value 0, when t = 5 is the first time P() is TRUE, there are exactly 5 falses before it (at t = 0, 1, 2, 3, 4), so the summation returns 5.

Key Insight: Bounded minimization is in spirit a relative of “exists,” but it is much more than “there exists.” While “exists” only says that somewhere between 0 and y, P is true, bounded minimization wants to know the first time that P is true between 0 and y.

In this example, t = 5 is the first time that P() is TRUE.

Important Assumption (GIGO, Garbage In, Garbage Out): The above formula will only work if you can actually get to t = 5. If the parameter y < 5, then the formula won’t work as intended because you never reach the point where P is true. Therefore, this is a “garbage in, garbage out” situation: if y is at least 5, the formula works correctly; otherwise, the result is incorrect.

| x    | 0 | 1 | 2 | 3 | 4 | 5 | … | y |
|------|---|---|---|---|---|---|---|---|
| P(x) | 0 | 0 | 0 | 0 | 0 | 1 |   |   |

Bounded Minimization Calculation (t = 0 to t = 4)

| t        | x=0    | x=1    | x=2    | x=3    | x=4    | Note |
|----------|--------|--------|--------|--------|--------|------|
| t=0      | 0      |        |        |        |        | α(0) = 1 for index x = 0 |
| t=1      | 0      | 0      |        |        |        | α(0) = 1 for indices x = 0, 1 |
| t=2      | 0      | 0      | 0      |        |        | α(0) = 1 for indices x = 0, 1, 2 |
| t=3      | 0      | 0      | 0      | 0      |        | α(0) = 1 for indices x = 0, 1, 2, 3 |
| t=4      | 0      | 0      | 0      | 0      | 0      | α(0) = 1 for x = 0, 1, 2, 3, 4 |
| α values | α(0)=1 | α(0)=1 | α(0)=1 | α(0)=1 | α(0)=1 | Product of these = 1 |

For each value of t (t = 0 to 4):

  • t = 0: Checking only x = 0. α(P(0)) = α(0) = 1. Product = 1. Sum += 1. (Running sum = 1)
  • t = 1: Checking x = 0, 1. Both give α(0) = 1. Product = 1 × 1 = 1. Sum += 1. (Running sum = 2)
  • t = 2: Checking x = 0, 1, 2. All give α(0) = 1. Product = 1. Sum += 1. (Running sum = 3)
  • t = 3: Checking x = 0, 1, 2, 3. All give α(0) = 1. Product = 1. Sum += 1. (Running sum = 4)
  • t = 4: Checking x = 0, 1, 2, 3, 4. All give α(0) = 1. Product = 1. Sum += 1. (Running sum = 5)

The SUM OF THESE VALUES (which all equal 1) is 5, the correct answer, identifying that t = 5 is the first time P is true.
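The ΣΠα formula and the worked example above can be checked with a short sketch (names are mine):

```python
def alpha(x):
    # alpha(0) = 1, alpha(x) = 0 for x > 0
    return 1 if x == 0 else 0

def bounded_min(P, y):
    # Sigma[t=0..y] Pi[r=0..t] alpha(P(r)): counts the all-false prefixes
    total = 0
    for t in range(y + 1):
        prod = 1
        for r in range(t + 1):
            prod *= alpha(P(r))  # the "gate": one true P zeroes the product
        total += prod
    return total

# the example predicate: first TRUE at t = 5
P = lambda t: 1 if t >= 5 else 0
```

Here `bounded_min(P, 100)` returns 5: the five products for t = 0 through 4 each contribute 1, and every later product contains α(1) = 0.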

Suppose we continue the calculation until y = 100:

| t        | x=0    | x=1    | x=2    | x=3    | x=4    | x=5    | Note |
|----------|--------|--------|--------|--------|--------|--------|------|
| t=5      | 0      | 0      | 0      | 0      | 0      | 1      | α(1) = 0 |
| t=6      | 0      | 0      | 0      | 0      | 0      | 1      | α(1) = 0 |
| t=7      | 0      | 0      | 0      | 0      | 0      | 1      | α(1) = 0 |
| α values | α(0)=1 | α(0)=1 | α(0)=1 | α(0)=1 | α(0)=1 | α(1)=0, etc. | Product = 0 |

Key Observation: Once we reach t = 5, we have P(5) = 1, so α(1) = 0 appears in the product. Since the product now contains a zero, the product becomes 0 and remains 0 for all subsequent t values. When the product is 0, nothing is added to the sum anymore; the sum remains frozen at 5.

Key Insight: The brilliance of this formula is that it naturally terminates the contribution to the sum once P becomes true. The product acts as a gate: it multiplies 1s (from the false values before the first true value), contributing to the sum at each step. But as soon as it encounters a 0 (from α(1) when P becomes true), the product becomes 0 and stays 0, preventing any further changes to the sum.

What happens if P() is FALSE for all values t ≤ y?

Then the above formula would return y + 1, since α(P(x)) is 1 for all these indices 0, 1, …, y.

There are those who instead embed the above formula within an IF-THEN-ELSE, so that if P is never true over the stated range, then the output will be zero. However, the above formula also returns 0 if, in fact, P(0) is TRUE.

Thus you would actually need to test “IF P(0) = TRUE” after computing this embedded IF-THEN-ELSE.

Note: The ΣΠα formula above is not changing; that is our major formula. However, there is this issue: if P is always false, you will have to do some Boolean tests after the fact to make sure that it is not always false. There is nothing you can do about it; that is simply what you have to do.
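One way to sketch the IF-THEN-ELSE embedding, and the residual ambiguity it leaves, in Python (the names `bounded_min` and `guarded_min` are mine):

```python
import math

def alpha(x):
    # alpha(0) = 1, alpha(x) = 0 for x > 0
    return 1 if x == 0 else 0

def bounded_min(P, y):
    # the Sigma-Pi-alpha formula: Sigma[t=0..y] Pi[r=0..t] alpha(P(r))
    return sum(math.prod(alpha(P(r)) for r in range(t + 1))
               for t in range(y + 1))

def guarded_min(P, y):
    # IF (exists t <= y with P true) THEN bounded_min ELSE 0,
    # written with the arithmetic IF-THEN-ELSE encoding from earlier
    exists = alpha(alpha(sum(P(t) for t in range(y + 1))))
    return exists * bounded_min(P, y) + alpha(exists) * 0

# Ambiguity remains: guarded_min returns 0 both when P is never true
# on 0..y and when P(0) is TRUE, so a separate test of P(0) is still needed.
```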

Davis also shows a number of prime-based functions to be primitive recursive. While very interesting, the purpose of today’s lecture was to concentrate on Boolean predicates and to lead up to bounded minimization. See the Davis et al. book if you would like to explore those other functions.

The key insight is that Ackermann’s function is the protagonist of this play. It demonstrates that primitive recursive functions are not sufficient to compute all computable functions.

Ackermann’s function (as modified by Rózsa Péter and Raphael Robinson) is recursively defined as follows:

A(0, n) = n + 1
A(m+1, 0) = A(m, 1)
A(m+1, n+1) = A(m, A(m+1, n))

Ackermann’s function uses only primitive recursive components:

  • Successor: +1 (which is SUCC, a primitive recursive function)
  • Composition and recursion
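The recursive definition can be transcribed directly; the sketch below uses the equivalent top-down form (m − 1 in place of the bottom-up m + 1, the same issue set aside in the question at the end). Python’s default recursion limit needs raising even for small arguments.

```python
import sys
sys.setrecursionlimit(100_000)

def A(m, n):
    # Ackermann's function, Peter/Robinson form
    if m == 0:
        return n + 1                  # A(0, n) = n + 1
    if n == 0:
        return A(m - 1, 1)            # A(m+1, 0) = A(m, 1)
    return A(m - 1, A(m, n - 1))      # A(m+1, n+1) = A(m, A(m+1, n))

print(A(1, 1), A(2, 2), A(3, 3))      # prints 3 7 61
```

Already A(3, n) = 2^(n+3) − 3, and A(4, n) is a tower of exponentials, which is why the homework’s A(4, 4) cannot be finished by hand.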

Other than that, we have composition and recursion. Yet Davis and others prove that Ackermann’s function CANNOT be primitive recursive.

This is remarkable, almost unbelievable. The function above uses only +1, which is a primitive recursive function, plus composition and recursion; yet despite using only these operations, Ackermann’s function cannot be primitive recursive.

The proof that Ackermann’s function cannot be primitive recursive relies on showing that Ackermann’s function grows in value faster than any possible primitive recursive function. This property is called domination: a dominating function is one that eventually grows faster than every function in the given class (here, every primitive recursive function).

This demonstrates a fundamental limitation of primitive recursion: there exist computable functions (Turing-computable) that are not primitive recursive. This gap between primitive recursive functions and all computable functions is what necessitates additional operators beyond composition and recursion.

How is the above formula philosophically different from primitive recursion? (Ignore the issue of top-down (m − 1) vs. bottom-up (m + 1).)

There is something majorly different between Ackermann’s function and all of our primitive recursive functions. The distinction is not just about what it computes mathematically, but about what makes it fundamentally different, philosophically, from the primitive recursive framework.