Lecture 14: Primitive Recursive Functions and Recursion Theory - CSCI 381 Goldberg
Administrative Updates
Homework: There is homework involved with this lecture (particularly regarding Question 3 and Question 4 on recursive function complexity), but the instructor is not collecting it for grading. However, students should think through these problems carefully and are welcome to send explanations via email.
Reading Assignment: Students should read items 14, 15, and 16 (Boolean predicates: less than or equal, greater than or equal, greater than, less than, equal) on their own. These will be covered in detail on Wednesday.
Primitive Recursive Functions (Basic)
Overview
Today we will embark on the notion of “recursion” as it is used in computer theory, specifically to design primitive recursive functions. This approach at first glance will seem different from programming recursion, as seen in the following simple recursive code for factorial:
```c
int fact(int num) {
    if (num < 0) return -1; // flag to indicate an input value where fact is not defined
    if (num == 0) return 1; // base case
    // recursive step; the value at level num is defined in terms of level num-1
    return num * fact(num - 1);
}
```

The above code is a top-down approach, but recursion in computer theory is a bottom-up approach, as we will now explore. (Is there a difference? We will discuss this in a different lecture.)
Remember that primitive recursion was an early attempt to model all computable functions using a RISC chip (Reduced Instruction Set Computer). Since computers did not yet exist, early computer theorists were asking: what is the minimum set of built-in functions and operators needed to compute all possible Turing-equivalent computations? The original hope was that the computer they wanted to build would only require a few initial functions (the ones listed below) and two operators (composition and recursion) to achieve all possible computable functions. By keeping the instruction set minimal, they could design a simpler computer that would still be capable of universal computation.
Initial Functions
| Function | Description | PRIM |
|---|---|---|
| n(x) = 0 | always returns 0 | //PRIM#1 |
| s(x) = x + 1 | returns x + 1 | //PRIM#2 |
| u_i^n(x_1, …, x_n) = x_i | returns the i-th element from a list of n numbers (and does not change any values) | //PRIM#3 |
The projection function u_i^n deserves special attention. It solves a technical but important problem: how do you access values that are embedded in the parameter list of a function you’re calling? The function takes n parameters and extracts the i-th element, similar to indexing into a one-dimensional array. While it may seem simple, it is essential for granting access to the inside of parameter lists, which is necessary for building more complex primitive recursive functions.
Techniques Provided
Composition
Composition of functions means that the output of one function becomes the input of the next function. Think of it as a daisy chain or a pipelining of functions: each function takes an input and provides an output, which then becomes the input to the next function in the list. This process continues through the entire chain.
Recursion (as described next)
This type of recursion has three categories of information at its disposal (“disposal” meaning the information can be thrown away; the term is used here to emphasize that one is not required to use all the information provided):
| | | Single Variable | Multivariable |
|---|---|---|---|
| INFO#1 | Static Input | — | x |
| INFO#2 | Previous Level Number | t | t |
| INFO#3 | Previous Level’s Output | h(t) | h(x, t) |
The next level t + 1 computes the next value using a function g that has all three categories available: h(x, t + 1) = g(t, h(x, t), x). The key insight is that g has no obligation to use all three pieces of information—it can use whichever subset is necessary. This flexibility is crucial for building diverse recursive functions, and the emphasis on “disposal” reminds us that unused information can simply be discarded.
Limitations and Extensions
Davis (and others) shows that this approach is in fact NOT sufficient to mathematically model ALL possible computable functions. So perhaps, if we had one more Initial Function and/or one more Basic Technique (Operator) and added it to the definition of Primitive Recursive, the augmented system would then describe all computable functions. In a future class, we will present Davis, who shows that the Initial Functions are in fact sufficient but that we are missing another operator. In anticipation of that discussion, Davis introduces a Primitive Recursive Class (PRC): a class that includes all functions that can be formulated from the initial functions using composition and recursion, BUT that may also use extra initial functions and/or extra operators. We will be leading up to Davis’s realization that primitive recursion by itself is too constrained to capture all computable functions.
Examples
Addition
//ADD is recursive Successor (S) //PRIM#4
- Base case: ADD(x, 0) = x (which equals u_1^1(x))
- Recursive case: ADD(x, t + 1) = S(ADD(x, t)) (which equals ADD(x, t) + 1)
| Level | x // input | y // input | // previous level | // future level | // output |
|---|---|---|---|---|---|
| 0 | x | 0 | - | - | x |
| 1 | x | 1 | x | S(x) | x + 1 |
| 2 | x | 2 | x + 1 | S(x + 1) | x + 2 |
| 3 | x | 3 | x + 2 | S(x + 2) | x + 3 |
| 4 | x | 4 | x + 3 | S(x + 3) | x + 4 |
| 5 | x | 5 | x + 4 | S(x + 4) | x + 5 |
Multiplication
//MULT is recursive Addition (ADD) //PRIM#5
- Base case: MULT(x, 0) = 0 (which equals n(x))
- Recursive case: MULT(x, t + 1) = ADD(x, MULT(x, t)) (which equals MULT(x, t) + x)
| Level | x // input | y // input | // previous level | // future level | // output |
|---|---|---|---|---|---|
| 0 | x | 0 | - | - | 0 |
| 1 | x | 1 | 0 | ADD(x, 0) | x |
| 2 | x | 2 | x | ADD(x, x) | 2x |
| 3 | x | 3 | 2x | ADD(x, 2x) | 3x |
| 4 | x | 4 | 3x | ADD(x, 3x) | 4x |
| 5 | x | 5 | 4x | ADD(x, 4x) | 5x |
Exponentiation
//EXP is recursive Multiplication (MULT) //PRIM#6
- Base case: EXP(x, 0) = 1 (which equals S(n(x)))
- Recursive case: EXP(x, t + 1) = MULT(x, EXP(x, t)) (which equals EXP(x, t) · x)
| Level | x // input | y // input | // previous level | // future level | // output |
|---|---|---|---|---|---|
| 0 | x | 0 | - | - | 1 |
| 1 | x | 1 | 1 | MULT(x, 1) | x |
| 2 | x | 2 | x | MULT(x, x) | x^2 |
| 3 | x | 3 | x^2 | MULT(x, x^2) | x^3 |
| 4 | x | 4 | x^3 | MULT(x, x^3) | x^4 |
| 5 | x | 5 | x^4 | MULT(x, x^4) | x^5 |
Note: It is because of this base case that Davis defined 0^0 = 1, even though mathematics states that 0^0 is indeterminate and cannot be defined. Davis had to do this since ALL primitive recursive functions must be total, as they were an attempt to model all computable functions, which are total. Remember that Turing maps a memory configuration (called the “input”) to another memory configuration (called the “output”). As an input, the value 0 has a valid memory configuration (a string of 0s).
Factorial
//PRIM#7
- Base case: FACT(0) = 1 (which equals S(n(x)))
- Recursive case: FACT(t + 1) = MULT(S(t), FACT(t)) (which equals (t + 1) · FACT(t))
| Level | t // input | // previous level | // future level | // output |
|---|---|---|---|---|
| 0 | 0 | - | - | 1 |
| 1 | 1 | 1 | 1 · 1 | 1 |
| 2 | 2 | 1 | 2 · 1 | 2 |
| 3 | 3 | 2 | 3 · 2 | 6 |
| 4 | 4 | 6 | 4 · 6 | 24 |
| 5 | 5 | 24 | 5 · 24 | 120 |
Questions to Ponder
(Q#1) We start with zero (n(x) = 0) and we add one (s(x) = x + 1); all functions seem to be increasing. Where is Predecessor, which subtracts one and is the basis for subtraction?
Answer (Davis): The predecessor function is actually embedded within the recursion schema itself. When we compute the next level t + 1, we have access to the previous level number t, which is precisely the predecessor (minus one) of t + 1. This may seem like a sharp or abstract answer, but it reveals an important insight: a projection function extracts t from the available information, effectively giving us the predecessor of the current level. Therefore, the machinery to compute and achieve subtraction is already present in the basic recursive structure.
(Q#2) We have Multiplication. Where is the Division operator? (To whatever extent that integer division can provide.)
(Q#3) If one recalls how we program factorial in actual code, the main recursive call is fact(num - 1), which is a top-down computation, but ALL primitive recursive functions are bottom-up, starting at base case 0 and then computing the next level t + 1. Are these the same process?
Answer: This will be discussed in a different lecture.
(Q#4) We will show that each recursive function (similar to any of the above) can be implemented by a single for-loop. Specifically:
- Addition is Recursive Successor (built-in function applied repeatedly)
- Multiplication is Recursive Addition
- Exponentiation (Power) is Recursive Multiplication
This creates a triple-nested for-loop structure using only built-in commands such as increment and assignment. We know from algorithm analysis that built-in commands have constant time complexity (as opposed to library calls/APIs).
At first glance, the answer seems straightforward:
- Addition would appear to be O(n) (one for-loop calling the constant-time successor function)
- Multiplication would appear to be O(n^2) (nested for-loops: the outer loop runs n times, and the inner loop is addition, which is O(n))
- Power would appear to be O(n^3) (Power is recursive multiplication)
However, what is the true time complexity of this code? Is it actually O(n^3)? (Hint: This is a much harder problem than it seems to be.) The straightforward analysis may not capture the actual computational behavior due to the specific way these functions are composed recursively.
(Q#5) What happened to negative numbers?
Answer: Turing’s conceptualization of functions is fundamentally different from the algebraic perspective. In Turing’s view, functions are not about arithmetic values per se, but about mapping one memory configuration to another. A memory configuration is a finite string of symbols (in the binary case, a string of 0s and 1s). The value 0 is the smallest possible memory configuration (represented as all 0s), and there is no negative configuration—all memory configurations are naturally interpreted as non-negative integers. Therefore, negative numbers play no role in the theory since every valid input (a finite string of memory) can be represented by a natural number.
Primitive Recursive Functions (Advanced)
Overview
We now continue our development of Primitive Recursive functions. By the end of our discussion of primitive recursion, we will get an impression of just how expressive/powerful primitive recursive functions are.
Additional Primitive Recursive Functions
Predecessor
- Base case: PRED(0) = 0
- Recursive case: PRED(t + 1) = t, using the previous level number t supplied by the recursion schema
Note: PRED has a built-in “cutoff” of 0 for outputs: PRED(0) = 0 rather than a negative value. //PRIM#8
Key Insight: A sharp observation from Davis is that the predecessor function is actually embedded within recursion itself. Recall that when computing the next level t + 1 using the recursive schema, we have access to three categories of information: the static input, the previous level number t, and the previous level’s output. When computing at level t + 1, the previous level number is t, which is precisely the predecessor of t + 1. Therefore, the subtraction operator (minus one) is inherently available through the recursive structure, even though it is not explicitly listed as a basic technique.
Cutoff Subtraction
- Base case: x ∸ 0 = x
- Recursive case: x ∸ (t + 1) = PRED(x ∸ t)
We must use cutoff subtraction rather than ordinary subtraction because in the context of Turing machines and memory configurations, all values are non-negative integers. There is no representation for negative numbers—all memory configurations are interpreted as natural numbers. Therefore, when y > x, the subtraction x − y would result in a negative value, which is impossible. The notation x ∸ y indicates that if the result would be negative, we instead return 0. This “cutoff” at zero ensures the function remains total and always produces a valid memory configuration.
Note: Built-in “cutoff” of 0 for outputs. //PRIM#9
Absolute Difference
|x − y| = (x ∸ y) + (y ∸ x)
- //PRIM#10
- Built by composition alone
- Technically, the composition uses projection functions to route x and y into each cutoff subtraction
Boolean Predicates
The next set of nine primitive recursive functions are all Boolean Predicates.
Boolean Predicates have some requirements:
- Output is 0/1 where 0 represents FALSE and 1 represents TRUE
- Boolean Predicate is a Total mapping/function. (Hardware people debate this and allow a partial function.)
- The inputs are 0/1. Karp requires this but Davis does not.
Not Equal to Zero
Section titled “Not Equal to Zero”//Boolean returning 0 or 1
//akin to NOT in C-based programming where TRUE is >= 1 //Composition alone
Behavior: The function outputs 1 if and only if , and outputs 0 for all . Here’s why: when , we compute . When , we compute . When , the cutoff subtraction gives (since we cannot go below zero).
//PRIM#11
Equality
Section titled “Equality”
EQ(x, y) = 1 ∸ |x − y| //Boolean returning 0 or 1 //Composition alone
Behavior: The function returns 1 if and only if x = y, and returns 0 otherwise. Here’s the reasoning: when x = y, the absolute difference |x − y| = 0, so EQ(x, y) = 1 ∸ 0 = 1. When x ≠ y, the absolute difference is some positive value, so the cutoff subtraction gives EQ(x, y) = 0.
//PRIM#12
Next Steps
The next lecture will continue with the rest of the boolean predicates.