What we're embarking on today is a discussion which, from both a theoretical and a practical point of view, is interesting, and it deals with the general notion of recursion. I'm going to take the theory approach first, and then I'll go to the common programming approach — the operating-system approach, let's call it. Anyway, it's about recursion. Another name for computability — the ability to compute, or computer theory — is recursion theory. So let me emphasize that: another name for this course is, in fact, recursion theory. Now, when we talked about partial functions earlier, we said that the computable versions of partial functions are called recursively enumerable. The explanation goes like this: a first approach in the theory was to identify those basis, or initial, functions and corresponding mathematical techniques that could generate the ability to recursively enumerate. So let me start with a definition. We once discussed recursively enumerable, but I think we would benefit from a short description. In mathematics, a partial function refers to a mapping from one set to another where not every element of the first set has to connect to an element of the second set. That's a partial function in mathematics.
Now, computer theory has a different perspective — a different angle, maybe — on what is partial, and that is Turing's perspective. From Turing's perspective, as we've mentioned many times, a function does not map data values per se, but rather the underlying memory configurations. Again: in mathematics, a function maps a set of values to another set of values, and a partial function is one where not every element of the first set has to connect to an element of the second set. Our angle is different. From Turing's perspective, a function does not map data values per se, but rather the underlying memory configurations. To Turing, the issue isn't whether the user defined every aspect of the mapping, but whether the computation can halt or not. In other words, what would the computer do with a partial function? If the computer stops, then from Turing's perspective, that was a successful running of the program. The fact that the program didn't do what you wanted — that the mapping wasn't defined — is not an issue. If you get a response from the computer, even if that response is an error, to Turing that would be considered a successful computation of sorts: what he would call an effective computation. Because, by definition, the input isn't "an input" in the usual sense; it's the memory configuration — in his case, the tape.
That, by definition, is the input before you start. And whatever's on it. And whatever the program does. And what is an output? The output isn't what we consider an output. And it's not... Even though, obviously, we hope, and under normal circumstances, is the tape after the computer stops, after the program is finished running. So, you know, we do expect, we hope, that the answer to our problem, the solution to the problem is on the tape. From Turing's perspective, that's irrelevant. Whether it's on the tape or not. It's only whether it stops or not. And then whatever's on that tape is the output. You go ahead and siphon through it and try to find this, you know, look for something meaningful in it. But that's your problem, says Turing. That's the user's problem. How to interpret those zero ones. How to interpret the memory. To Turing, the issue isn't whether the user defined an aspect of the mapping. Right? Or not. Partial. But rather, whether does the computer halt or not on the, on the original, when, yeah, when, okay, hold or not, when given the original memory, again, for Turing's the tape, configuration. If it does, independent of whether you're happy with what, with what it does, independent of whether the user is happy, is sad, is happy. Let's say happy, why not? Is happy with what is, what is on the the tape after what is, or in the memory or on the tape. If it does, independent of whether the user is happy with what is on the memory or tape. So then the, then this, then the running of the program, the running, the computer running the program, is an effective computation. If it doesn't, if it doesn't halt, then an infinite loop results, right, doesn't halt, an infinite loop results. And, um, and then what? And the, uh, computation does not yield an output, does not yield a final tape configuration, per se. Okay, good. 
Now, to reflect these two situations — and here we get to the terms "partially computable" and then "recursively enumerable" — computer theory borrows the notions of partial and total. If a program can compile — in Turing's case it's not really compiling, but we'll call it that — i.e., run on the Turing machine, then the code (the function, or program, whatever you want to call it) is called partially computable, because we do not know yet whether the code will actually halt when given the initial tape configuration. And that's an outcome of the unsolvability of the halting problem, which states that no algorithm exists to decide, when given a code and an input — again, to emphasize: an initial tape or memory configuration — whether or not the code will enter an infinite loop on that input. So we technically can't know ahead of time, and hence the designation "partially computable": we don't know yet that it's going to halt. But obviously there are codes that always do. If, in fact, the program will always halt on any initial tape configuration — again, that's called the input —
— then the code, function, or program (any of those, whatever nuance you prefer) is, in fact, totally computable, not just partially computable. But since that is the hopeful default situation — we always hope that our code halts — computability theorists drop the word "total" and just call these codes, programs, or functions computable, with the implicit understanding that they are total. All right, so that's partial versus total in computer theory. Now, how did "recursively enumerable" come about? This is partly a review of things we said in the past. Every program, when it is compiled, will become a finite string of zeros and ones — or of digits in any base of arithmetic. Please remember: we do zeros and ones, but Turing was reasoning abstractly, about all machines, so to Turing it could be any base, and he didn't care. Turing even considered base-one arithmetic — well, our author does anyway, in Turing's name. Our author, Martin Davis, the modern founder of computer theory, attributes this point to Turing; I haven't seen it cited explicitly in the literature, but Davis assumes Turing considered it.
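To make "partially computable" concrete, here is a small sketch of my own (not from the lecture) in Python. The Collatz iteration has halted on every input anyone has ever tried, but no proof exists that it halts on all inputs — so, in the lecture's terms, the honest label for this code is partially computable, not (totally) computable.

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz rule until n reaches 1.

    This loop has halted on every input ever tested, but no proof
    exists that it halts for all n -- so the best we can honestly
    call this code is partially computable.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # halts: 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1, so 8 steps
```

If someone one day proved the loop halts for every n, the same code would be promoted to "computable" without changing a single character — the label describes our knowledge about halting, not the text of the program.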
But anyway — our author, Martin Davis, in the name of Turing, even considers base-one arithmetic. So let's read that sentence again: since every program, when it is compiled, will become a finite string of zeros and ones — or of digits in any base of arithmetic, according to Davis and Turing — and since a finite string of digits can be read as, interpreted as, a single natural (non-negative) number... Think of it this way: you take your code and you compile it, and now it's stored on your tape. The tape will have an input — though in this universal-machine setting the tape could actually hold the code as well; however we consider it, that detail won't be the issue. When you're done, it is all stored as characters of whatever base arithmetic you have, and it will all be a finite string — a code can't have an infinite number of characters. So it can be read as one really long natural number. So: since every compiled program becomes a finite string of digits, and since a finite string of digits can be read as a natural number — one last "since," and then we're done with the intro to what recursively enumerable means —
— and since no two codes will generate the same compiled bit string, then, by reverse engineering, counting all of the natural numbers will in fact generate, via an encoding, all possible program codes that can run on a Turing machine. Since the counting process is trivially recursive — in a sense we'll discuss today under the name "primitive" — these codes were called recursively enumerable. So there's the term. In summary — you really need all of this to remember what's going on: the first part of this discussion emphasized a point which is not strictly necessary for the definition of recursively enumerable, but it supports it and elaborates on it a little. In mathematics you have the notion of a partial or total function, and it maps values to values; the function is that mapping. In computer theory, you don't deal with functions per se. You deal with codes — codes that map memory configurations to memory configurations. The input, the output, and the code itself can all be stored on a tape, and each part is a finite string of — well, we call it zeros and ones, but it could be any base of arithmetic: whatever character set the tape can hold, because the tape is the memory, and the characters are just digits. We call them A, B, C, D, E, but that's just base-26 arithmetic — base 52 if you want capital letters and case sensitivity. Ordinary decimal is base 10; zeros and ones, base 2; and even base 1 — that's Davis in the name of Turing — whatever that means.
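The "reverse engineering" between code strings and natural numbers can be sketched concretely. This is my illustrative Python rendering of a Gödel-style numbering, not the lecture's formal construction; I prepend a sentinel byte so the base-256 reading is uniquely reversible even for texts that would otherwise begin with a zero byte.

```python
def code_to_number(src: str) -> int:
    """Read a program's text as one (large) natural number, base 256.

    The leading 0x01 sentinel keeps the mapping reversible:
    distinct code strings always yield distinct numbers.
    """
    return int.from_bytes(b"\x01" + src.encode("utf-8"), "big")

def number_to_code(n: int) -> str:
    """Reverse-engineer such a natural number back into the program text."""
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:].decode("utf-8")   # strip the sentinel byte

prog = "def f(x): return x + 1"
num = code_to_number(prog)
assert number_to_code(num) == prog   # the encoding round-trips exactly
```

The exact numbering scheme doesn't matter — any base, any character set — only that it is finite, unique, and mechanically reversible, which is precisely what the counting argument needs.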
We talked a little about base 1 before, and we'll have to mention it again for a very interesting quirk of base-1 arithmetic — but not today. The point is: what mathematicians call mappings, we call programming. Fine. Now, the partial function exists to mathematicians. Does the partial function exist to computer theorists? Is there such a thing? The point is that the computer has always got to do something. To the mathematician, you have an input and your function doesn't know what to do with it. To a running computer, says computer theory, it's not a question of knowing what to do. You have the input; maybe your program can handle it, maybe it can't. But "output" — which really isn't output per se, though we will colloquially call it output in computer theory — just means that the computer halted, and whatever remains on the tape is the output. So even for a partial function, if the computer halts, you have an output. The notion of being defined or not is not really relevant to computer theory. The question boils down to: will the computer halt or not? What is output? Output is whatever is on the tape after halting. Okay, so that is the notion of partially computable. You have code; you can compile it; you can write it on the tape, or sometimes hardwire it into the Turing machine — Turing had different versions, but they're basically the same thing. And the code isn't just code: the code is, in essence, digits. The computer will do what it does. But we cannot know ahead of time whether this code will halt on the given tape —
— on the given initial tape configuration, called, quote unquote, the input. We don't know: the unsolvability of the halting problem. So the best we can say is that this code is partially computable. The "computable" part means the computer is going to run it; "partially," because we don't know whether it's going to halt or not. If, in theory, you knew that the code always halts, then it's totally computable — except we don't use the word "totally"; we just call it computable. So that's partially computable versus computable. Now, since everything is on the tape, and everything you put on the tape has finite length — your input section is finite, and your code is finite — you have typed onto the tape a finite string of zeros and ones (in the compiled sense), or letters, if your code can handle them. But those letters are digits: in essence, every character is a digit in some larger base of arithmetic. ASCII is base 128, or base 256 in its extended form — whatever it is. So every code, being a finite string of characters, is, to Turing, a finite string of digits; and a finite string of digits is, in essence, a single natural number. And you can reverse-engineer it, because the mapping is unique — Davis goes through the details, but we won't. The point is: every natural number can be read backwards as a code, in which case, by counting the numbers, you are enumerating every possible code. And that is called recursively enumerable.
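The counting argument can be sketched directly. This is again a hedged Python illustration of my own, using the plain byte-string reading of numbers rather than any particular machine encoding: walking 0, 1, 2, ... and decoding each number visits every finite byte string, and hence every possible code text, eventually.

```python
def enumerate_codes(limit: int):
    """Count the naturals 0..limit-1 and read each one backwards
    as a byte string -- i.e., as a candidate program text.

    Counting is trivially (primitive) recursive, so the collection
    of all codes produced this way is recursively enumerable.
    """
    for n in range(limit):
        length = (n.bit_length() + 7) // 8   # just enough bytes to hold n
        yield n, n.to_bytes(length, "big")

for n, code in enumerate_codes(5):
    print(n, code)
```

Most of the strings produced are gibberish rather than valid programs, of course — but that's exactly the lecture's point: the enumeration covers every possible code without caring what any code does.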
That's all to explain why another name for computability is recursion theory. So the question — the exploration, really, but we'll call it a question — that computer theorists then concentrated on is: how many features, how many characteristics, must this recursion have? The first approach in the theory was to identify those basis functions — some people call them basis functions; Martin Davis calls them initial functions — and corresponding mathematical techniques that can generate the ability to recursively enumerate. The success of that approach gave hope that such a system of mathematics could in fact generate all computable functions. This more simplistic approach was called primitive recursive. But, unfortunately, Martin Davis and others before him showed that, in fact, the primitive recursive functions are a proper subset of the computable functions. They had hoped to crystallize the essence of the recursion necessary to generate the programs, and hence all the codes, because then those initial functions, with their techniques, could become a reduced instruction set for computation. That's what they were trying to do. Basically, they were trying to figure out what techniques a program needs. They had created the computer; they knew, in theory, that it would compute something. The question was: what mathematical techniques must that machine implement? So they started with this primitive recursive approach, hoping it would suffice. In fact, it turns out not to be true. And the question is: so what went wrong, and is it possible to fix? Meaning, why wasn't it a good approach?
What we will spend the next class or two investigating is the strength of primitive recursion — and it's quite strong; it can do a lot, and you'll get a sense of that in a minute once we do a few practical examples and go through a bit of the notation. But the question is: what went wrong? You have some basic functions that are built in — your built-in commands are your initial functions — and two techniques, the programming techniques the code can use. They had hoped that this would be sufficient to be the essence of the commands of any code. So, in essence, the computer theorists were trying to construct a reduced instruction set: a computer with the least number of built-in commands. In hardware, it's called a RISC chip — RISC standing for Reduced Instruction Set Computer. Well, here it isn't a chip, since it's a Turing machine, but let's call it a CPU. They had hoped that the initial functions — the built-in commands, which we'll define in a moment — together with the supplied techniques — let's state them now and define them shortly: composition and recursion — would be sufficient to generate the code of any computable function. In fact, this is not so, but they thought so. And then we'll try to figure out, as I said, what went wrong and whether it is possible to fix. So here we are: the initial functions.
So, the initial functions: very simple functions. N, S, and U. Now U, despite the fancy name and the fancier subscript, isn't something we really need — not for what we're going to do in the next two classes. I mention it because it's technically there to facilitate a detail in everything we're discussing, but because the notation of the U function would clutter everything we write, many theorists — not just me — tend not to write it when discussing primitive recursive functions. You do need to know it's there. It's sort of like — though not exactly like — concatenation in formal languages, or multiplication in arithmetic. X times Y is just written XY in mathematics. Where's the times sign? It's there; you just didn't write it, because it gets messy and we understand what was meant. In concatenation, you join a prefix and a suffix into one word; technically there's an operator — traditionally a dot, prefix-dot-suffix — but we don't write it that way; we just write one word. Where did it go? It's there, but it gets messy. In regular expressions, which we're not doing this semester, they dropped all the curly braces — originally it was set theory, and sets have curly braces — because used practically it gets messy and we know they're there. The point is, many times in mathematics, when the technical definition carries notation that gets really messy, then as long as you know what is being referred to, we simply don't write it, and we assume you know it's there.
So with the U function: we will know it's there, and I'll mention it once or twice at the beginning, but we have a lot to cover, so we won't keep writing it; after we get a running start, I'll come back to what U was trying to accomplish. All right. So, N(x) = 0. We'll take that as a trivial command — but the theory is starting from scratch and doesn't want to make any assumptions. Let me remind you, one more time, of their motivation. Their choice of initial functions and techniques were those necessary in order to generate the recursively enumerable numbers — which are just the natural numbers, given a fancy name — that represent all possible compiled codes. I should add that detail: the r.e. label — sometimes written with the dots, sometimes without — stands for the numbers that represent all possible compiled codes. Since these codes are now represented by numbers, "recursively enumerable" became another name for all partially computable functions — since, again, the program itself, when compiled, becomes... well, I've mentioned this two or three times already:
it becomes a number; all distinct numbers are distinct codes; and the set of them is recursively enumerable. So this notion of r.e. became another name for all partially computable functions, or codes. And the proper subset of these numbers — these programs — that always halt, which we call computable (really "totally computable," but we drop the word), were then called recursive. Now, their terminology is — I hate to say it — a little bit confusing, but that's what they did. One more time on that labeling. You have codes. Codes get compiled into strings of characters. But characters — ASCII, say — are only digits in the whole scheme of base arithmetic. If you want letters of the English alphabet, your base arithmetic is 26, or 52. You want to include punctuation marks, semicolons, parentheses, curly braces — let's make it ASCII, so now you're at base 128 or 256. So what? We don't care what base arithmetic you use. Your code, whether at the machine level of zeros and ones or at the high-level programming you're used to, is in essence just a string of letters, which is really a string of digits, which all becomes a single number. And since each number represents a different sequence of letters, it represents a different program. That's an interesting point I should emphasize. Again: Turing doesn't care what your program does — so we didn't mean that each code does something different. Davis and Turing understood that among your recursively enumerable numbers,
you can have two different numbers — two different codes — that actually produce the same outputs; they may be doing the very same thing. Give a coding assignment to students. You know nobody's going to copy from each other — of course not — so everybody's code is going to be different, and it's going to compile differently. Each student has a different code number, representing the compiled version of their code. And yet — obviously, all my students write correct code — they're all computing the same thing: the same input-output map, the mathematical notion of function. They're all doing the same thing, but they all have different codes, because every student's sequence of letters is different. That's a key point, which I should have mentioned above, so let me add it as a note to that rather long paragraph. Note: when we mentioned that this reverse-engineering process demonstrates that each compiled number represents a different code, we were not referring to what the code actually does, but rather to the programming language and the variable names that comprise the original code, which compiled into this number. We're not referring to what the code does; we're referring to what you wrote in your code. That is what differs, and that is what the recursively enumerable numbers are addressing.
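Here is a small, hedged Python illustration of that point (my example, not the lecture's): two textually different "student submissions" get different numbers under the base-256 reading sketched earlier, yet compute the same mathematical function.

```python
def code_to_number(src: str) -> int:
    # Read program text as a base-256 natural number; the 0x01 sentinel
    # keeps distinct texts mapped to distinct numbers.
    return int.from_bytes(b"\x01" + src.encode("utf-8"), "big")

# Two different submissions for doubling a number.
src_a = "def double(x): return x + x"
src_b = "def double(x): return 2 * x"

# Different codes, therefore different code numbers...
assert code_to_number(src_a) != code_to_number(src_b)

# ...and yet the same input-output map, i.e. the same mathematical function.
ns_a: dict = {}
exec(src_a, ns_a)
ns_b: dict = {}
exec(src_b, ns_b)
assert all(ns_a["double"](x) == ns_b["double"](x) for x in range(100))
```

The numbering distinguishes texts, not behaviors — exactly why the enumeration counts codes rather than mathematical functions.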
Davis and Turing understood that more than one compiled number will represent the same underlying mathematical function. But what the code does is not Turing's concern — only how one represents the code, and more specifically, how the code and the input-output values are stored in memory. So again, please remember that; it's a very important point, and I emphasize it many times, but you have to keep it in mind. It's not what the code does. That, in fact, is why — as we just said — Turing is not concerned with the mathematical notion of partial, but with what the computer will do in that situation. And in fact we'll consider the code for a mathematically partial function totally computable if the computer always halts when running it. It's not what the code does; it's whether you can store it, whether you can run it, and what the computer does. You know: garbage in, garbage out. GIGO — it's an expression; GIGO stands for "garbage in, garbage out" — applies here: if the computer does not act the way the user would like when given a partial function, do not blame the computer. You gave it a bad function, so you shouldn't complain about what the computer does. All right, back to our definition. We want to find a RISC set — that's R-I-S-C, not R-I-S-K: a Reduced Instruction Set Computer.
So we want to find as few initial functions — built-in commands — as possible, and as few essential techniques. We have the following three mathematical functions. N(x) = 0: the null function. It's a constant function whose sole purpose is to output zero; it does nothing else. S, the successor: the most simplistic addition, which only adds one to the original value — not even regular addition, just plus one. And then comes projection. What does projection do? It doesn't change values — the others do. N(x) takes x and returns zero: x can be five, now you get zero. Successor: x was six, now it's seven. But projection, given a user-supplied list of values, returns one of them. You had these numbers beforehand and you have one of them again. So what's its purpose? Let me say what its purpose is and be done with it — but technically, it's there. The purpose of projection is to access the parameter list of a function. This was a philosophical point, so let me put it as a note — it's a small discussion, not a major one. What was the purpose of the (U) projection function? The computer theorists had a bit of a philosophical problem. Given a mathematical function f(x1, ..., xn), what gives us the right — the privilege, the authority — to access the x-sub-i values?
In other words, they're passed to f, but how can we use what's passed to f? We are not f; we are the code after the declaration of the function — the header, something like int factorial(int number). What gives us the right or authority to access the underlying x sub i values? To grant this privilege, computer theorists included the projection function among the primitive initial functions. So it doesn't really do anything, but they thought they needed it, because we may want to use some of the x sub i in our further calculations. That was their issue. Maybe you don't think it's an issue — I accept your opinion — but we're doing what the theory does. It's an interesting one, but we're not going to use it that much. Anyway, that's the situation, so let's continue. The first two functions, on lines 42 and 43, are what we really need. Those are the basic functions; that's it. I don't think you could get more RISC — reduced instruction set — than that. You only have two built-in commands: one that returns zero and zero alone, no other constant value (you can't change n; n is null and only returns zero), and s(x), which is not even addition — all it does is plus one. Those are the initial functions. Now for the essential techniques. The first is composition, a mathematical notion: h(x) = f(g(x)).
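The three initial functions just described can be sketched in Python as follows (the spelled-out names are mine; Davis writes n, s, and u, with projection conventionally 1-indexed):

```python
def null(x):
    """n(x) = 0: the null function, the only built-in constant."""
    return 0

def successor(x):
    """s(x) = x + 1: not general addition, just plus one."""
    return x + 1

def projection(i, *xs):
    """u_i(x1, ..., xn) = x_i: return the i-th argument (1-indexed).

    It changes nothing; its whole job is to grant access to a
    function's parameter list.
    """
    return xs[i - 1]
```

For example, `projection(2, 10, 20, 30)` just hands back the second parameter, 20 — nothing is computed, which is exactly the point.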
So, for example, to generate a constant k — which is, again, not explicitly provided by the initial functions (recall my comment on line 42: only zero, no other constant value) — you would have to use composition with k successor functions: s(s(...s(n(x))...)). In shorter notation, s^(k)(n(x)): s to the power k, applied to n(x). Careful with that superscript: it is not like x squared. f²(x) means composition of functions — f of f — as opposed to (f(x))², which is just algebra. (Technically x² is also built from functions, but for our purposes it's simply algebra, while f²(x) is composition.) So now we have the ability to generate any constant k. We weren't able to before, but now we are. That's composition: it simply creates a pipeline, where the output of one function becomes the input of the next. All right, fine. Now, what's recursion? Recursion is what we think it is, but first let's get used to the notation; this notation is quite important, and it carries through the next few tabs. I'll send you this file, and we'll use the same file over the next few classes.
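Here is a small sketch of that pipeline idea in Python (again, the names are mine): building the constant function for any k by composing k successor applications with the null function, s^(k)(n(x)).

```python
def null(x):
    return 0          # n(x) = 0: the only built-in constant

def successor(x):
    return x + 1      # s(x) = x + 1

def constant(k):
    """Build x -> k by composition: s^(k)(n(x)).

    The output of each stage of the pipeline becomes the input
    of the next: start at 0, then apply the successor k times.
    """
    def f(x):
        value = null(x)            # n(x) = 0, regardless of x
        for _ in range(k):         # k composed successor applications
            value = successor(value)
        return value
    return f
```

So `constant(5)` is the function that returns 5 no matter what input you hand it — a constant we could not express with the initial functions alone.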
And I will tell you in the email how far we covered, so that your MCQ and lecture keywords can reflect that; and if there are any changes at the end, I'll send you an updated version, like I did with the CART paper. All right, here we go: recursion. Let's take the basic form. Say we want to create a recursive function — call it REC. What it does is not important. Your base case is level zero, and there may be an input x. One comment first: variable y keeps track of the recursion level — which level you are at in the recursion — and variable x is simply a user-supplied value. I would have changed these to more meaningful names, but I want to keep the mathematical notation exactly as Martin Davis has it, so that if you read the book you'll be able to follow along; otherwise it would look too different. So, REC(0, x): you're at level zero. Nothing has happened yet; this is before the recursion. You're not at any level — you're at the base case, the ground floor — and you get an input x. What does it do there? Whatever you want it to do, and to reflect that, we will just say h(x). Note: within our context, h is already known to be primitive recursive. The point is we're trying to define primitive recursive functions, and you can't, in your definition, use a non-primitive-recursive function. But the point I wanted to make here is that this is not a recursive step; it is a straight command. That's important.
Notice: your initial level zero — y equal to zero — is a straight command. The next line is the recursive step, defining the behavior of the function at level y + 1 in terms of that at level y. x is being passed along for the ride — I don't know if you use that expression. Hey, I'm going to take a trip to the Catskills. Oh, you want to join? Jump in. I didn't have to have someone with me, but my friend asks what I'm doing today, and sure, he joins. x is there for the ride: you can use it, you don't have to, but it's there. x does not change; y is what changes — the recursion level changes — and we're at y + 1. So, we're going to generalize recursion by asking: what pieces of information do you have at this point? Three. First, the level number y, which changes: initially zero, and then increasing one by one — that's what we mean by y + 1. Second, the previous level's output, which I wrote "typically" changes. The reason is that there's technically no obligation that it changes, only that you pass it. Remember, one more time: Turing doesn't care what your code does; Turing only cares that you can run it. So the previous level's output probably will change.
I don't think people would write function code and waste their time on something that never changes, but you could; it's not a requirement of recursion that it changes, but it practically always does. That's the previous level's output. And third, the user-supplied auxiliary variable, which is an input — I'm keeping it singular because today we're only doing the simpler form. So you have x, which never changes; the level number, which has to change — that's what models, or mimics, the recursion; and the previous level's output. Okay, we have to land. On line 62, just to read that one thing, and then we can stop here and continue with the rest next time. You have y, the level of the function; you have the previous level's output; and you have the user-supplied input. What do you do with them? Whatever you want. So here also, note: within our context, g — and it's not just g(x); g takes all three pieces — is also a function already known to be primitive recursive. You do what you want with it; again, what you do is not relevant to Turing. So, one last time, and we'll stop here. h(x) provides you the initial value. g takes all the pieces of information at your disposal during the recursion — namely, what level you're at, the original user-supplied variable, and whatever the previous level's output is — and that becomes the next level's output. And that's called recursion. We'll clean this up, but you should know it as well: we'll generalize it to where the user supplies not just one input but many, and the same structure applies.
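The whole scheme can be sketched in Python — a sketch in my own naming, following Davis's shape REC(0, x) = h(x) and REC(y + 1, x) = g(y, REC(y, x), x). The addition example at the end is my illustration, not from the lecture.

```python
def successor(x):
    return x + 1      # s(x) = x + 1

def primitive_recursion(h, g):
    """Build REC from h and g, following the scheme:

        REC(0, x)     = h(x)                 -- straight command, base case
        REC(y + 1, x) = g(y, REC(y, x), x)   -- recursive step

    g receives the three pieces of information available at each step:
    the level number y, the previous level's output, and the
    user-supplied x, which rides along unchanged.
    """
    def rec(y, x):
        if y == 0:
            return h(x)                       # level zero: no recursion yet
        return g(y - 1, rec(y - 1, x), x)     # level y+1 in terms of level y
    return rec

# Example: addition y + x by primitive recursion.
# h(x) = x (a projection), g(y, prev, x) = s(prev).
add = primitive_recursion(lambda x: x,
                          lambda y, prev, x: successor(prev))
```

Here `add(3, 4)` unwinds to s(s(s(4))) = 7: x stays 4 the whole way down, the level counts 3, 2, 1, 0, and each level's output is the successor of the previous one's.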
Okay, it's the end of class. Sorry. If you didn't have a chance, kindly text that you're present, and we will continue on Monday. See you next time.