
Lecture 14: Primitive Recursive Functions and Recursion Theory - CSCI 381 Goldberg

Homework: There is homework involved with this lecture (particularly regarding Question 3 and Question 4 on recursive function complexity), but the instructor is not collecting it for grading. However, students should think through these problems carefully and are welcome to send explanations via email.

Reading Assignment: Students should read items 14, 15, and 16 (Boolean predicates: less than or equal, greater than or equal, less than, greater than, equal) on their own. These will be covered in detail on Wednesday.

Today we will embark on the notion of “recursion” as it is used in computer theory, specifically to design primitive recursive functions. At first glance this approach will seem different from programming recursion, as seen in the following simple recursive code for factorial:

int fact(int num) {
    if (num < 0) return -1;   // flag to indicate an input value where fact is not defined
    if (num == 0) return 1;   // base case
    // recursive step; note: level num is defined in terms of level num-1
    return num * fact(num - 1);
}

The above code is a top-down approach, but recursion in computer theory is a bottom-up approach, as we will now explore. (Is there a difference? We will discuss this in a different lecture.)
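For contrast, here is a bottom-up factorial: a minimal sketch (the name fact_bottom_up is mine, not course code) that starts at the base case and climbs level by level, the direction primitive recursion will work in:

```c
/* Bottom-up factorial: start at level 0 and build upward,
   the direction primitive recursion works in. */
int fact_bottom_up(int num) {
    if (num < 0) return -1;        /* flag: not defined for negative input */
    int result = 1;                /* level 0: fact(0) = 1 */
    for (int y = 0; y < num; y++)
        result = result * (y + 1); /* level y+1 computed from level y */
    return result;
}
```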

Remember that primitive recursion was an early attempt to model all computable functions in the spirit of a RISC chip (Reduced Instruction Set Computer). Since computers did not yet exist, early computer theorists were asking: what is the minimum set of built-in functions and operators needed to compute all possible Turing-equivalent computations? The original hope was that the computer they wanted to build would only require a few initial functions (the ones listed below) and the two operators (composition and recursion) to achieve all possible computable functions. By keeping the instruction set minimal, they could design a simpler computer that would still be capable of universal computation.

| Function | Description | PRIM |
|---|---|---|
| N(x) | always returns 0 | //PRIM#1 |
| S(x) | returns x + 1 | //PRIM#2 |
| U_{i,n} | returns the i-th element from a list of n numbers (and does not change any values) | //PRIM#3 |

The U function deserves special attention. It solves a technical but important problem: how do you access values that are embedded in the parameter list of a function you’re calling? The U_{i,n} function takes n parameters and extracts the i-th element, similar to indexing into a one-dimensional array. While it may seem simple, it is essential for granting access to the inside of parameter lists, which is necessary for building more complex primitive recursive functions.
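A minimal C sketch of the three initial functions, assuming U_{i,n} is modeled as a 1-indexed lookup into an array holding the n parameters (the names N, S, and U follow the table; the array encoding is my assumption):

```c
int N(int x) { return 0; }      /* PRIM#1: always returns 0 */
int S(int x) { return x + 1; }  /* PRIM#2: the successor function */

/* PRIM#3: U_{i,n} projects out the i-th of n parameters (1-indexed)
   without changing any values; here the n parameters sit in args[]. */
int U(int i, int n, const int args[]) {
    (void)n;                    /* n only documents the list's length */
    return args[i - 1];
}
```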

Composition

Composition of functions means that the output of one function becomes the input of the next function. Think of it as a daisy chain or a pipelining of functions: each function takes an input and provides an output, which then becomes the input to the next function in the list. This process continues through the entire chain.
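As a tiny illustration of the daisy chain (a sketch; const2 is a hypothetical name), composing S twice with N yields the constant function 2:

```c
int N(int x) { return 0; }      /* PRIM#1 */
int S(int x) { return x + 1; }  /* PRIM#2 */

/* Composition: N's output feeds S, whose output feeds S again.
   const2(x) = S(S(N(x))) returns 2 for every input x. */
int const2(int x) {
    return S(S(N(x)));
}
```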

Recursion (as described next)

This type of recursion has three categories of information at its disposal (“disposal” is the right word here: one is not required to use all of the information provided, and any of it can be thrown away):

| | Single Variable | Multivariable |
|---|---|---|
| INFO#1 | Static Input x | x_1, …, x_n |
| INFO#2 | Previous Level Number y | y |
| INFO#3 | Previous Level’s Output REC(x, y) | REC(x_1, …, x_n, y) |

The next level (y+1) computes the next value using a function g that has all three categories available: g(y, REC(x_1, …, x_n, y), x_1, …, x_n). The key insight is that the function g has no obligation to use all three pieces of information; it can use whichever subset is necessary. This flexibility is crucial for building diverse recursive functions, and the emphasis on “disposal” reminds us that unused information can simply be discarded.
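The schema can be sketched as a single loop that hands g all three categories of information (a single-variable sketch; the names recurse, base_x, and step_succ are mine). Note that step_succ ignores two of its three arguments, illustrating the “disposal” point:

```c
/* The recursion operator as one loop:
   h(x, 0)   = f(x)
   h(x, y+1) = g(y, h(x, y), x)
   g receives the level number, the previous level's output, and the
   static input, but is free to ignore any of them. */
int recurse(int (*f)(int), int (*g)(int, int, int), int x, int y) {
    int value = f(x);               /* level 0: the base case */
    for (int level = 0; level < y; level++)
        value = g(level, value, x); /* level+1 computed from level */
    return value;
}

/* Example: addition keeps only the previous output (INFO#3) and
   discards the level number and the static input. */
int base_x(int x) { return x; }
int step_succ(int y, int prev, int x) { (void)y; (void)x; return prev + 1; }
```

With these pieces, recurse(base_x, step_succ, x, y) computes x + y bottom-up.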

Davis (and others) shows that this approach is in fact NOT sufficient to mathematically model ALL possible computable functions. So perhaps, if we had one more Initial Function and/or one more Basic Technique (Operator) and added it to the definition of Primitive Recursive, the augmented system would then describe all computable functions. In a future class, we will present Davis’s argument that the Initial Functions are in fact sufficient, but that we are missing another operator. In anticipation of that discussion, Davis introduces a Primitive Recursive Class (PRC) that includes all functions that can be formulated from the initial functions using composition and recursion, BUT that can also use extra initial functions and/or extra operators. We will be leading up to the reason Davis realizes primitive recursion is too constrained to capture all computable functions.

ADD(x, y) //ADD is recursive Successor (S) //PRIM#4

  • Base case: ADD(x, 0) = U_{1,2}(x, 0) (which equals x)
  • Recursive case: ADD(x, y+1) = S(ADD(x, y)) (which equals (x+y) + 1 = x + (y+1))

| Level | x // input | y // input | ADD(x, y) // previous level | S(ADD(x, y)) // future level | ADD(x, y+1) // output |
|---|---|---|---|---|---|
| 0 | x | 0 | - | - | x |
| 1 | x | 1 | x | (x)+1 | x+1 |
| 2 | x | 2 | x+1 | (x+1)+1 | x+2 |
| 3 | x | 3 | x+2 | (x+2)+1 | x+3 |
| 4 | x | 4 | x+3 | (x+3)+1 | x+4 |
| 5 | x | 5 | x+4 | (x+4)+1 | x+5 |
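The table above can be replayed as a single for-loop over the successor function (a minimal sketch mirroring PRIM#4):

```c
int S(int x) { return x + 1; }  /* PRIM#2 */

/* ADD(x, 0) = x;  ADD(x, y+1) = S(ADD(x, y)). */
int ADD(int x, int y) {
    int result = x;                  /* level 0: base case U_{1,2}(x, 0) = x */
    for (int level = 0; level < y; level++)
        result = S(result);          /* next level = successor of previous */
    return result;
}
```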

MULT(x, y) //MULT is recursive Addition (ADD) //PRIM#5

  • Base case: MULT(x, 0) = N(x) (which equals 0)
  • Recursive case: MULT(x, y+1) = ADD(MULT(x, y), x) (which equals x·y + x = x·(y+1))

| Level | x // input | y // input | MULT(x, y) // previous level | ADD(MULT(x, y), x) // future level | MULT(x, y+1) // output |
|---|---|---|---|---|---|
| 0 | x | 0 | - | - | 0 |
| 1 | x | 1 | 0 | (0)+x | x |
| 2 | x | 2 | x | (x)+x | 2x |
| 3 | x | 3 | 2x | (2x)+x | 3x |
| 4 | x | 4 | 3x | (3x)+x | 4x |
| 5 | x | 5 | 4x | (4x)+x | 5x |
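MULT is the same loop one layer up, calling ADD instead of S (a sketch mirroring PRIM#5, with ADD inlined as a loop of successors):

```c
int S(int x) { return x + 1; }                  /* PRIM#2 */
int ADD(int x, int y) {                         /* PRIM#4 */
    int r = x;
    for (int level = 0; level < y; level++) r = S(r);
    return r;
}

/* MULT(x, 0) = N(x) = 0;  MULT(x, y+1) = ADD(MULT(x, y), x). */
int MULT(int x, int y) {
    int result = 0;                  /* level 0: base case N(x) = 0 */
    for (int level = 0; level < y; level++)
        result = ADD(result, x);     /* add x once per level */
    return result;
}
```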

POWER(x, y) //PRIM#6

  • Base case: POWER(x, 0) = S(N(x)) (which equals 1)
  • Recursive case: POWER(x, y+1) = MULT(POWER(x, y), x) (which equals x^y · x = x^(y+1))

| Level | x // input | y // input | POWER(x, y) // previous level | MULT(POWER(x, y), x) // future level | POWER(x, y+1) // output |
|---|---|---|---|---|---|
| 0 | x | 0 | - | - | 1 |
| 1 | x | 1 | 1 | (1)·x | x |
| 2 | x | 2 | x | (x)·x | x^2 |
| 3 | x | 3 | x^2 | (x^2)·x | x^3 |
| 4 | x | 4 | x^3 | (x^3)·x | x^4 |
| 5 | x | 5 | x^4 | (x^4)·x | x^5 |

Note: It is because of this base case that Davis defined 0^0 = 1, even though mathematics states that 0^0 is indeterminate and cannot be defined. Davis had to do this because ALL primitive recursive functions must be total, since they were an attempt to model computable functions as total mappings. Remember that Turing maps a memory configuration (called the “input”) to another memory configuration (called the “output”). As an input, the value 0 has a valid memory configuration (a string of 0s).
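POWER repeats the same bottom-up pattern one more layer up (a sketch; plain `*` stands in for PRIM#5's MULT to keep it short). The base case hard-wires x^0 = 1 for every x, which is exactly why 0^0 comes out as 1 here:

```c
int MULT(int x, int y) { return x * y; }  /* stands in for PRIM#5 */

/* POWER(x, 0) = S(N(x)) = 1;  POWER(x, y+1) = MULT(POWER(x, y), x). */
int POWER(int x, int y) {
    int result = 1;                  /* level 0: base case, so x^0 = 1 always */
    for (int level = 0; level < y; level++)
        result = MULT(result, x);
    return result;
}
```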

FACT(y) //PRIM#7

  • Base case: FACT(0) = S(N(x)) (which equals 1)
  • Recursive case: FACT(y+1) = MULT(FACT(y), S(y)) (which equals y! · (y+1) = (y+1)!)

| Level | y // input | FACT(y) // previous level | MULT(FACT(y), S(y)) // future level | FACT(y+1) // output |
|---|---|---|---|---|
| 0 | 0 | - | - | 1 |
| 1 | 1 | 1 | (1)·(0+1) | 1 |
| 2 | 2 | 1 | (1)·(1+1) | 2 |
| 3 | 3 | 2 | (2)·(2+1) | 6 |
| 4 | 4 | 6 | (6)·(3+1) | 24 |
| 5 | 5 | 24 | (24)·(4+1) | 120 |
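FACT follows the same bottom-up loop, and is the first function here whose step uses the level number itself, via S(y) (a sketch; `*` stands in for MULT):

```c
int S(int x) { return x + 1; }            /* PRIM#2 */
int MULT(int x, int y) { return x * y; }  /* stands in for PRIM#5 */

/* FACT(0) = 1;  FACT(y+1) = MULT(FACT(y), S(y)). */
int FACT(int y) {
    int result = 1;                       /* level 0: base case S(N(x)) = 1 */
    for (int level = 0; level < y; level++)
        result = MULT(result, S(level));  /* the step uses the level number */
    return result;
}
```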

(Q#1) We start with 0 (N(x)) and we add one (S(x)); all functions seem to be increasing. Where is Predecessor, which subtracts one and is the basis for subtraction?

Answer (Davis): The predecessor function is actually embedded within the recursion schema itself. When we compute the next level y+1, we have access to the previous level number y, which is precisely the predecessor (minus one) of y+1. This may seem like a sharp or abstract answer, but it reveals an important insight: the U function extracts y from the available information, effectively giving us the predecessor of the current level. Therefore, the machinery to compute PRED and achieve subtraction is already present in the basic recursive structure.

(Q#2) We have Multiplication. Where is the Division operator? (To whatever extent that integer division can provide.)

(Q#3) If one recalls how we program factorial in actual code, the main recursive call is n * fact(n-1), which is a top-down computation, but ALL primitive recursive functions are bottom-up, starting at base case 0 and then computing the next level (y+1). Are these the same process?

Answer: This will be discussed in a different lecture.

(Q#4) We will show that each recursive function (similar to any of the above) can be implemented by a single for-loop. Specifically:

  • Addition is Recursive Successor (built-in S function applied repeatedly)
  • Multiplication is Recursive Addition
  • Exponentiation (Power) is Recursive Multiplication

This creates a triple-nested for-loop structure using only built-in commands like n(x) = 0 and s(x) = x + 1. We know from algorithm analysis that built-in commands have constant time complexity O(1) (as opposed to library calls/APIs).

At first glance, the answer seems straightforward:

  • Addition would appear to be O(n) (one for-loop calling the constant-time successor function)
  • Multiplication would appear to be O(n^2) (nested for-loops: the outer loop runs n times, and the inner loop is addition, which is O(n))
  • Power would appear to be O(n^3) (Power is recursive multiplication)

However, what is the true time complexity of this code? Is it actually O(n^3)? (Hint: This is a much harder problem than it seems to be.) The straightforward analysis may not capture the actual computational behavior due to the specific way these functions are composed recursively.
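To make Q#4 concrete, here is POWER expanded until only the built-ins remain, giving the triple-nested loop the question describes (a sketch; power_nested is my name, and this does not settle the question). Watching the innermost loop's bound, which is the running power p rather than a fixed n, hints at why the naive count deserves suspicion:

```c
/* POWER with MULT and ADD fully inlined: only m = m + 1 (i.e. s(x))
   and initializations to 0 and 1 (i.e. n(x) and s(n(x))) do any work. */
int power_nested(int x, int y) {
    int p = 1;                          /* POWER base case */
    for (int i = 0; i < y; i++) {       /* POWER: y rounds of MULT */
        int m = 0;                      /* MULT base case */
        for (int j = 0; j < x; j++)     /* MULT(p, x): x rounds of ADD */
            for (int k = 0; k < p; k++) /* ADD(m, p): p successor steps */
                m = m + 1;              /* s(x) = x + 1 */
        p = m;                          /* next power */
    }
    return p;
}
```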

(Q#5) What happened to negative numbers?

Answer: Turing’s conceptualization of functions is fundamentally different from the algebraic perspective. In Turing’s view, functions are not about arithmetic values per se, but about mapping one memory configuration to another. A memory configuration is a finite string of symbols (in the binary case, a string of 0s and 1s). The value 0 is the smallest possible memory configuration (represented as all 0s), and there is no negative configuration—all memory configurations are naturally interpreted as non-negative integers. Therefore, negative numbers play no role in the theory since every valid input (a finite string of memory) can be represented by a natural number.

We now continue our development of Primitive Recursive functions. By the end of our discussion of primitive recursion, we will get a sense of just how expressive/powerful primitive recursive functions are.

PRED(y)

  • Base case: PRED(0) = N(x) (which equals 0)
  • Recursive case: PRED(y+1) = y using g(y, PRED(y)) = U_{1,2}(y, PRED(y))

Note: PRED has a built-in “cutoff” of 0 for outputs. //PRIM#8

Key Insight: A sharp observation from Davis is that the predecessor function is actually embedded within recursion itself. Recall that when computing the next level y+1 using the recursive schema, we have access to three categories of information: the static input, the previous level number y, and the previous level’s output. When computing at level y+1, the previous level number is y, which is precisely the predecessor of y+1. Therefore, the subtraction operator (minus one) is inherently available through the recursive structure, even though it is not explicitly listed as a basic technique.
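PRED as a loop makes the insight visible: the step throws away the previous output and keeps only the level number (a sketch mirroring PRIM#8):

```c
/* PRED(0) = 0;  PRED(y+1) = y via g = U_{1,2}(y, PRED(y)).
   The step keeps the level number and discards the previous output. */
int PRED(int y) {
    int result = 0;                   /* base case, giving the cutoff at 0 */
    for (int level = 0; level < y; level++)
        result = level;               /* U_{1,2}: project out the level number */
    return result;
}
```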

SUB(x, y), written as x ⊖ y (the “o” in the symbol denotes zero: the result floors at 0)
  • Base case: SUB(x, 0) = x
  • Recursive case: SUB(x, y+1) = PRED(SUB(x, y))

We must use cutoff subtraction rather than ordinary subtraction because in the context of Turing machines and memory configurations, all values are non-negative integers. There is no representation for negative numbers—all memory configurations are interpreted as natural numbers. Therefore, when y>xy > x, the subtraction would result in a negative value, which is impossible. The notation xyx \ominus y indicates that if the result would be negative, we instead return 0. This “cutoff” at zero ensures the function remains total and always produces a valid memory configuration.

Note: Built-in “cutoff” of 0 for outputs. //PRIM#9
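Cutoff subtraction is PRED iterated y times (a sketch mirroring PRIM#9); the floor at 0 comes entirely from PRED never going below 0:

```c
int PRED(int y) { return y > 0 ? y - 1 : 0; }  /* PRIM#8, cutoff at 0 */

/* SUB(x, 0) = x;  SUB(x, y+1) = PRED(SUB(x, y)).  Written x ⊖ y. */
int SUB(int x, int y) {
    int result = x;
    for (int level = 0; level < y; level++)
        result = PRED(result);        /* peel off one per level, stopping at 0 */
    return result;
}
```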

|x - y|

ADIFF(x, y) = SUB(x, y) + SUB(y, x)
  • //PRIM#10
  • Composition alone
  • SUB(y, x) is technically SUB(U_{2,2}(x, y), U_{1,2}(x, y)) using projection functions
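Absolute difference then needs composition only (a sketch mirroring PRIM#10): whichever of the two cutoff subtractions would have gone negative contributes 0:

```c
int PRED(int y) { return y > 0 ? y - 1 : 0; }   /* PRIM#8 */
int SUB(int x, int y) {                         /* PRIM#9: cutoff subtraction */
    int r = x;
    for (int level = 0; level < y; level++) r = PRED(r);
    return r;
}

/* ADIFF(x, y) = SUB(x, y) + SUB(y, x) = |x - y|. */
int ADIFF(int x, int y) {
    return SUB(x, y) + SUB(y, x);
}
```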

The next set of nine primitive recursive functions are all Boolean Predicates.

Boolean Predicates P(x) or P(x_1, …, x_n) have some requirements:

  1. Output is 0/1 where 0 represents FALSE and 1 represents TRUE
  2. Boolean Predicate is a Total mapping/function. (Hardware people debate this and allow a partial function.)
  3. The inputs are 0/1. Karp requires this but Davis does not.

α(x) //Boolean returning 0 or 1

α(x) = 1 ⊖ x //akin to NOT in C-based programming, where TRUE is >= 1 //Composition alone

Behavior: The α function outputs 1 if and only if x = 0, and outputs 0 for all x ≥ 1. Here’s why: when x = 0, we compute 1 ⊖ 0 = 1. When x = 1, we compute 1 ⊖ 1 = 0. When x > 1, the cutoff subtraction gives 1 ⊖ x = 0 (since we cannot go below zero).

//PRIM#11
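A sketch of α via cutoff subtraction (spelled alpha in C, since the Greek letter is not a valid identifier):

```c
int SUB(int x, int y) { return x > y ? x - y : 0; }  /* PRIM#9: cutoff subtraction */

/* alpha(x) = 1 ⊖ x: returns 1 iff x = 0, else 0 (a Boolean NOT). */
int alpha(int x) {
    return SUB(1, x);
}
```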

x = y

EQ(x, y) = α(|x - y|) //Boolean returning 0 or 1 //Composition alone

Behavior: The EQ function returns 1 if and only if x = y, and returns 0 otherwise. Here’s the reasoning: when x = y, the absolute difference |x - y| = 0, so α(0) = 1. When x ≠ y, the absolute difference |x - y| is some positive value, so α(positive value) = 0.

//PRIM#12
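EQ then composes the previous pieces (a sketch mirroring PRIM#12; alpha again stands for α):

```c
int SUB(int x, int y) { return x > y ? x - y : 0; }        /* cutoff subtraction */
int ADIFF(int x, int y) { return SUB(x, y) + SUB(y, x); }  /* |x - y|, PRIM#10 */
int alpha(int x) { return SUB(1, x); }                     /* 1 iff x = 0, PRIM#11 */

/* EQ(x, y) = alpha(|x - y|): 1 iff x = y. */
int EQ(int x, int y) {
    return alpha(ADIFF(x, y));
}
```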

The next lecture will continue with the rest of the boolean predicates.