Let me verify the schedule before going further. The syllabus states that the third quiz is on April 29th. So we have four more lectures of new material, Mondays and Wednesdays, running through Monday, April 27th. Since it's a rather small amount of material, we're not having a review per se; the quiz is close enough to the material that we can hold it straight out. That's quiz number three, on April 29th. At the end of the April 27th lecture you are done with new material for the MCQ and lecture keywords. That doesn't mean you have to finish them that day: the syllabus says keywords are due May 10th, so you have plenty of time. One note: at the very end of the listing of dates in the Excel form you use to submit your data, you'll see a row labeled "research." That refers to your paper. Treat it as one last lecture, as if you were giving your PowerPoint presentation, and do an MCQ and lecture keywords on it. Other than that, we're done with new material on April 27th, and April 29th is quiz number three.
Then we have May 4th and 6th, and May 11th and 13th. So: new material, then the Brightspace quiz on April 29th. Now comes another important point; let's work backwards. To make sure grades get handed in on time (and I find this extremely helpful), we're going to have our final exam broken up into two halves during the last week of our lectures, on May 11th and May 13th. These aren't tentative dates anymore; that is what we are going to call the final. And as I mentioned, 50% of the final is the keywords for the entire semester, and 50% is the cumulative material. Those two parts comprise what we will call the final. Brightspace does list a date for the final; ignore that.
CUNY First has a date set for what it calls the final exam: May 18th, 8:30 to 10:30 in the morning. We are not going to meet then. During finals week we'll have no formal meetings or exams. If there's enough interest, I may post an optional extra lecture; it won't be on any exam, it won't count toward grades, and it will be completely optional. I'll bring this up again in May, not now. In any case, the number of videos you have done satisfies, I believe, what would count as an extra class, and there's no requirement to attend anything at the school except exams. The important thing is that you be reachable during that week, in case I have to ask you questions or clarify an answer you gave on the final while I'm grading; if we held the final during finals week, I might not be able to get in touch with you in time. In previous semesters I found that very important.
The school requires us to hand in the grades within two days of the official final date, something like that, and at that point it would be almost impossible to get in touch with you. Anyway, that's the schedule; if you have any questions, email me. I'm glad we clarified it, but there's nothing we need to do about it now, so let's get on with the material. Today we're going to clean up and complete some of the knowledge we've already built about primitive recursion. The first part deals with induction and recursion. There are two types of induction, and the notion that induction is directly related to recursion has come up before; let me first summarize the key points of those discussions. Martin Davis, the author, introduces what is called course-of-values recursion. The idea is the following. The fundamental definition of the basic recursion used in defining the primitive recursive functions assumes that there is only one base case, and that the recursive step is based on only one previous level's output. Ackermann's function had two variables; our scheme recurses on one variable, using only that one variable's previous level's output, as opposed to Ackermann, which recurses on more than one variable. Now, there are two issues here.
One issue is recursing on one variable versus two; allowing only one is an interesting choice, which I'll come back to. The other issue is that we also only allow one previous level's output. So what about Fibonacci and other such recursions that require multiple base cases and multiple previous levels' outputs? Recall recurrence relations, which you studied in discrete mathematics. A recurrence relation is basically a fancy formula, but all it is is recursion: it recurses on one variable, yet it is allowed access to any previous level's output. You may not use them all, but you have access to all of them. That is a big difference from the official definition of the simple recursion used in primitive recursion. So consider Fibonacci, which requires access to two previous levels' outputs: fib(n + 1) = fib(n) + fib(n - 1). Here we produce level n + 1 based on the output of level n and the output of level n - 1. How can we do that? We made the assumption that basic recursion, which is based on basic induction, only has access to one previous level's output.
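The point about two base cases and two previous levels' outputs is visible in a direct sketch (my own code, assuming the common convention fib(0) = 0, fib(1) = 1):

```python
def fib(n):
    """Fibonacci by the recurrence fib(n + 1) = fib(n) + fib(n - 1).

    Note what the scheme demands: TWO base cases, and the outputs of
    TWO previous levels. That is more than the single base case and
    single previous level that basic primitive recursion provides."""
    if n == 0:
        return 0          # first base case
    if n == 1:
        return 1          # second base case
    return fib(n - 1) + fib(n - 2)   # needs levels n-1 AND n-2
```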
The answer that Martin Davis gives to this and all such questions rests on a workbook on encodings, which we'll cover in a coming lecture, probably next class. Davis argues, successfully, that Fibonacci and all such recurrences can be rewritten as primitive recursive functions (recursive, but also primitive recursive), built up from n, s, and u by the official protocol, which requires only one base case and only one previous level's output. How can that be, if Fibonacci and other such recursions require more? The answer is that we can encode a fixed number of values into a single number, based on any of the encodings presented in that workbook, and then rewrite the function so that it is primitive recursive on that one encoded value. The output becomes a composition, and composition is allowed in primitive recursion. Fibonacci only requires two values, so the Gödel pairing function, with Davis's amendment, will do; we'll describe it when we get there.
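A sketch of the encoding trick (my own code, not Davis's exact construction): encode the pair (fib(k), fib(k + 1)) as one number, using a pairing function in the style the lecture attributes to Davis, 2^x(2y + 1) - 1, so that each level needs only the single previous level's encoded output.

```python
def pair(x, y):
    # Pairing with the minus-one amendment mentioned in the lecture:
    # 2^x * (2y + 1) - 1, which makes the map onto the naturals (0 included).
    return 2 ** x * (2 * y + 1) - 1

def left(z):
    # Decode x: the exponent of 2 in z + 1.
    z += 1
    x = 0
    while z % 2 == 0:
        z //= 2
        x += 1
    return x

def right(z):
    # Decode y: strip the powers of 2 from z + 1, then invert 2y + 1.
    z += 1
    while z % 2 == 0:
        z //= 2
    return (z - 1) // 2

def fib(n):
    # Single-variable, single-previous-level recursion:
    # state at level k encodes the pair (fib(k), fib(k + 1)).
    state = pair(0, 1)            # one base case: (fib(0), fib(1))
    for _ in range(n):
        a, b = left(state), right(state)
        state = pair(b, a + b)    # next level from ONE previous output
    return left(state)
```

The recursion now has one base case and consumes exactly one previous level's output, at the price of a composition of encode/decode functions, which primitive recursion permits.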
Davis amends Gödel's pairing by subtracting one and adding it back later; why he does that we'll discuss another time. And if you need more than two values encoded at the same time, then Gödel's prime factorization encoding will do nicely, again with Davis's minus-one/plus-one amendment. Now here comes the important part. This is partly a rewording of things we already know, but it leads to a tougher question. Consider divide and conquer and dynamic programming. In discrete mathematics (discrete structures), the complexity of divide and conquer is analyzed by what's called the master method or master theorem, if you want to look it up. These techniques do not fall neatly under the scheme above. Why? Because there is no fixed number of previous levels' outputs needed beforehand, but rather a variable number, and that's a really big difference. With divide and conquer, for example, the level you need is a ratio away: n/b, that is, a 1/b ratio. Take n/2: if you're at level 16 you need level 8, but if you're at level 8 you need level 4. Now 8 is 8 away from 16, while 4 is only 4 away from 8. It's not a fixed number of previous levels you need; it's a variable number. So the question is, how can you do that? It doesn't follow the official protocol that we call primitive recursion. Davis deals with that as well.
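The ratio-distance point can be seen in a short sketch. This is my own illustration, not from Davis: a mergesort-style recurrence whose recursive call sits at level n // 2, a ratio away rather than a fixed offset.

```python
def T(n):
    # Divide-and-conquer recurrence T(n) = 2 T(n/2) + n (mergesort-style).
    # The previous level needed is n // 2: level 16 needs level 8,
    # level 8 needs level 4, and so on. The distance back is n/2,
    # a variable amount, not a fixed number of levels.
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n
```

For powers of two this evaluates to n * log2(n), which is exactly what the master theorem predicts for this recurrence.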
The key, Davis says, is that you have to have access to all previous levels, and that is what he calls course-of-values recursion. As I mentioned, the word "course" here means range, as in a golf course or a golf range; in older English those were the same term. So the official name is course-of-values recursion, and an acceptable name is range-of-values recursion. But is it primitive recursive? Davis shows that it is. Recall how simple recursion is based on simple induction: the two are extremely similar. Both require a base case, and both require the outcome of a single previous level in order to obtain the outcome of the next level. What we want to do now is add a different type of recursion, related to all this and still within the rubric of primitive recursion, and for that we borrow from what is known as strong induction. Strong induction, if you remember from discrete mathematics, also has a base case, but allows access to all previous levels' outcomes in order to define the outcome at the next level. It is precisely this model. In other words, the recursion we've been doing so far is based on what's called weak induction, which only allows you access to one previous level's output. If you want access to all previous levels' outputs, that is course-of-values recursion, and it is modeled on strong induction. Martin Davis shows that, using a fancy encoding, you can encode a variable number of previous levels' outputs into a single number and then rewrite your recursive formula so that it is a recursion based on one previous level's output together with a very involved encoding, which includes composition.
He uses the Gödel prime factorization encoding, which we'll discuss next class, and points out that you build it up by iteratively multiplying into a single number; in this manner, any number of previous levels' outputs can be multiplied in, one by one. We'll discuss the details when we do the encoding next time; I'm just giving you a quick summary so you understand. So, in summary: even course-of-values recursion can be encoded so that it, too, is simulated by a simple recursion with only one base case and only one previous level's output. And this begs the following question, to which I do not have a good answer. If course-of-values recursion, which is modeled by strong induction, can be encoded, or simulated, by simple primitive recursion, which is modeled by weak induction, then strong induction is just a modified but equivalent version of weak induction. So in what manner is one strong and the other weak? It turns out, according to Martin Davis, that the two are the same; there is no difference. All you need to do is encode, using, for example, the Gödel prime factorization. You don't have to use prime numbers; other factorizations work too, this one just happens to be easy to use for the purpose. You can encode a strong induction, which needs access to all previous levels' outcomes, into another, encoded version that uses only one previous level's outcome. So why, in mathematics, is simple recursion (really simple induction) called weak induction, while course-of-values, or range-of-values, induction is called strong? I conjecture that, historically, mathematicians weren't aware of the approach Martin Davis proves and truly thought that the two were that much different.
All right, that's what I wanted to point out: there are many variations on functions that seem not to fit. I should mention one more, actually, which I have elsewhere and may include next lecture. There is another interesting example of a recursion that seems to need to recurse on two variables. You probably learned it in discrete mathematics; Rosen definitely discusses it, and I know most professors in our department cover it in CS220. Yet the textbooks claim that simple induction can be applied. How can that be? Remember, if, as Martin Davis claims, simple induction is like simple recursion, then simple recursion cannot recurse on more than one variable; that is exactly what made Ackermann's function so troublesome. So how can they claim that simple induction applies to a two-variable recursive problem? The real answer, which the books don't give, is even simpler than the arguments above: although authors like to teach this example using two variables in the recursion, only one of those variables is actually required. So this example, which I'll describe in a second, is also consistent with the tenets of primitive recursion. The example is called the postage stamp problem. I'll try to go over it with you next time; there's so much I'd like to do and only four lectures of new material left, including today, so schedule-wise we don't have time for more.
This example is called the postage stamp problem: a person has stamps of face value x and stamps of face value y; show that a combination of multiples of each will generate all postage greater than or equal to some minimum value z. For example, with 8-cent and 13-cent stamps, let's say any postage of $1.26 or more can be generated from the 8s and 13s; I don't recall the exact threshold, but there is a rule for figuring out that minimum value, and the discrete math books discuss it. The problem seems variable in two ways: a variable number of x-value stamps and a variable number of y-value stamps. And yet the books apply recursion and induction and all these nice things. The truth of the matter is that what they do is valid; you could do it that way, but it's not required. The actual requirement is only one of the two. How can that be; don't you need two? The fundamental insight is that the two values are only required to set up the base cases, which are not considered recursive steps. You may jump at that: when you program, you go down to the base case, and that's part of the recursion. But here we go up, plus one at a time, from the perspective of computability theory, which only goes upward. The recursion starts at level one; level zero is the base case, and that happens beforehand. So you only need the two values to set up the base cases. After the base cases, only one type of face-value stamp is necessary. With an example I could show you very easily why that's true. The bottom line is that even this case is primitive recursive.
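Here is a small sketch of the threshold claim, under assumptions of my own (it is not the textbook's proof): for coprime denominations a and b, the classical Frobenius/"chicken nugget" result says every amount above a*b - a - b is representable; for 8 and 13 that bound is 83, so 84 cents and up always works, and the $1.26 mentioned above is comfortably past it.

```python
def representable(n, a=8, b=13):
    # Can n cents be paid with some number of a-cent stamps
    # plus some number of b-cent stamps?
    for i in range(n // a + 1):
        if (n - a * i) % b == 0:
            return True
    return False

# For coprime a and b, the largest non-representable amount is
# a*b - a - b (the Frobenius number); for 8 and 13 that is 83,
# so everything from 84 upward is representable.
```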
The point is an interesting little thought: the base case isn't a recursive step per se. It's a precursor step that enables the recursion to occur, according to computability theory. Level zero happens first, before the recursion, and there you can do whatever you want and use as many variables as you want. The actual recursive steps can only recurse on one variable. So that's a very interesting little tidbit about the relationship between primitive recursion and induction. The larger point is that primitive recursion is like a reduced instruction set chip: a computer with very few techniques. There is n(x), the null function, which returns zero; s(x), the successor, which adds one; the projection u, call it "retrieve a value from the list"; then composition, which all of mathematics has; and then recursion. And from that you can get so, so much. Yet, and this will lead us to the continuation of our discussion, it cannot get all possible computable functions; the difference is a variation on bounded minimization. Now, I do implement course-of-values recursion in the next tab of the workbook, employing the Gödel prime encoding, so let me just look at it quickly with you.
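As a sketch (my own rendering, not Davis's notation), the reduced instruction set can be written out directly; the names n, s, and u follow the lecture, and const2 is a hypothetical example of composition:

```python
def n(x):
    # Null function: always returns zero.
    return 0

def s(x):
    # Successor: add one.
    return x + 1

def u(i, *xs):
    # Projection: retrieve the i-th value from the list.
    return xs[i]

# Composition is just plugging functions into functions; for example,
# the constant function 2 is s composed with s composed with n:
def const2(x):
    return s(s(n(x)))
```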
A quick notational aside, since the symbols appear on this sheet: the upside-down A means "for all" and the rotated, backward E means "there exists." When mathematicians printed books years ago, they didn't have many letters, typefaces, and icons available, so they reused what they had: they turned the A upside down for the universal quantifier (it sort of looked like a U), and flipped the E vertically for the existential one. Now, the point is that there are some primality-related functions that Martin Davis, in his book, shows are primitive recursive. "Does x divide y?", a Boolean, is primitive recursive. Prime of x, testing whether the number is prime (or composite), is primitive recursive. Integer division is primitive recursive, and so is the remainder function. The reason I include division here is that when we first started our whole discussion, I put it among the questions I wanted to tidy up today: we had addition and multiplication, we asked about predecessor and answered that, and division remained. You can use bounded minimization to simulate integer division and the remainder; it is integer division, and the remainder is what's known as the mod function.
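A minimal sketch of these functions in ordinary code, assuming x >= 1 for the division helpers (the bounded search mirrors bounded minimization; this is my illustration, not Davis's definitions):

```python
def divides(x, y):
    # Boolean: does x divide y?
    return x != 0 and y % x == 0

def is_prime(x):
    # Prime test by checking all smaller candidate divisors.
    return x >= 2 and not any(divides(d, x) for d in range(2, x))

def int_div(y, x):
    # Integer division via bounded minimization: the least q <= y
    # with (q + 1) * x > y. The search is bounded, so this stays
    # within primitive recursive territory.
    for q in range(y + 1):
        if (q + 1) * x > y:
            return q
    return 0

def rem(y, x):
    # Remainder, i.e. the mod function: y - x * (y div x).
    return y - x * int_div(y, x)
```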
Then you can generate all the primes primitive recursively, and finally you get the Gödel prime encoding. It works as follows; let me write a little information here as a quick example. Davis always requires every value to be evaluated, so note that there is a base case, prime(0), whose value is not itself a prime. Then prime(1) = 2, and let's do a few more so I can show you the encoding: prime(2) = 3, prime(3) = 5 (the third prime), prime(4) = 7 (the fourth prime), dot dot dot; it keeps on going. So let's say we encode the numbers 2, 4, 6, 8, written in square brackets: [2, 4, 6, 8]. That is equal to the first prime raised to the power of the first value (retrieving a value is the u function), times the second prime raised to the second value, times the third prime raised to the third value, times the fourth prime raised to the fourth, last value. This uses primality via what is basically the fundamental theorem of arithmetic, which is why Gödel uses prime numbers; it gives you encoding and decoding. Note that primality is a good source for a theoretical encoding. Such encodings are not practical, because, as we're going to see, the value jumps up quickly, but primality is a good theoretical source.
Actually, primes are a good source of encodings in general, not only in theory: consider the RSA algorithm, if you ever take Kent Bokelund's class. One of the key properties is due to the fundamental theorem of arithmetic, which states that any natural number can be decomposed, broken up, uniquely into a product of primes. In other words, it can only be done one way. That is what gives the encoding and decoding their strength, the decoding even more than the encoding. Anyway, back to our example; let's put it together. The encoding of [2, 4, 6, 8] would be 2 to the power of 2, which is 4, times the second prime, 3, to the power of 4, which is 81, times 5 to the power of 6, which is 15,625, times 7 to the power of 8. Multiplying them all together gives an integer with too many digits for the spreadsheet to display, so let me change that last value to a 3. Good, now we actually get a number on screen: multiplying 2^2 times 3^4 times 5^6 times 7^3 together gives the encoding of those four numbers as a single number. The Gödel prime encoding is useful theoretically, but not practically, because look how big it grows.
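Here is that worked example as a sketch (my own helper code; the adjusted list [2, 4, 6, 3] matches the on-screen change of the last value from 8 to 3):

```python
def nth_prime(k):
    # The k-th prime, 1-indexed: nth_prime(1) == 2.
    count, n = 0, 1
    while count < k:
        n += 1
        if all(n % d for d in range(2, n)):
            count += 1
    return n

def encode(values):
    # Gödel prime encoding: prime(1)^v1 * prime(2)^v2 * ...
    code = 1
    for i, v in enumerate(values, start=1):
        code *= nth_prime(i) ** v
    return code

def decode(code, length):
    # Decoding works by the fundamental theorem of arithmetic:
    # peel off each prime's exponent in turn.
    out = []
    for i in range(1, length + 1):
        p, e = nth_prime(i), 0
        while code % p == 0:
            code //= p
            e += 1
        out.append(e)
    return out

# The lecture's example: 2^2 * 3^4 * 5^6 * 7^3
# = 4 * 81 * 15625 * 343 = 1736437500
```

Even with four small values the code is already ten digits long, which is exactly the impracticality noted above.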
And this is just four small numbers; still, it's quite powerful. Now, one more very important thing. Note that Davis subtracts one from this number; please remember that. In the Gödel prime encoding, and in Gödel's pairing function, which we'll use for Fibonacci if we have time next class, Davis subtracts one. The reason is that the Gödel prime encoding and Gödel's pairing function can only generate positive numbers, while Davis's theory requires totality: you have to be able to involve the number zero, whether you like it or not. That's why he does it; he subtracts one from this encoding and from the pairing function, which maybe we'll do a different time. So, using all of this, you can encode any number of numbers, and the techniques are all primitive recursive; that's the point. Finally, Davis shows that using this encoding, course-of-values recursion is in fact primitive recursive. The technique is this. Every time you are at a new level, you carry along the Gödel prime encoding of all the previous levels, which is only a single number, and you tack on the next level by multiplying in the next prime raised to whatever that level's output is. So you are always passing along one number, extending it by one more factor. In essence, the Gödel prime encoding encodes all previous levels' outputs: you raise the prime generated for the current level to the power of that level's output, and stick it onto the Gödel prime encoding passed along so far by simply multiplying this powered value into the previous encoding obtained.
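A sketch of course-of-values Fibonacci in exactly this style (my own code): the entire history of outputs travels as one Gödel-prime-encoded number, and each level multiplies in the next prime raised to the new output. For simplicity I omit Davis's minus-one adjustment, so "output zero" and "slot absent" coincide, which is harmless for this particular example.

```python
def nth_prime(k):
    # The k-th prime, 1-indexed: nth_prime(1) == 2.
    count, n = 0, 1
    while count < k:
        n += 1
        if all(n % d for d in range(2, n)):
            count += 1
    return n

def exponent_of(code, p):
    # Exponent of prime p in code: decodes one slot of the history.
    e = 0
    while code % p == 0:
        code //= p
        e += 1
    return e

def cov_fib(n):
    # Course-of-values recursion: the slot for fib(k) is prime(k + 1).
    # Each step reads any previous outputs it needs out of ONE number,
    # then tacks the new output on by multiplying in the next prime power.
    history = 1                      # empty history
    for level in range(n + 1):
        if level < 2:
            out = level              # base cases fib(0)=0, fib(1)=1
        else:
            out = (exponent_of(history, nth_prime(level)) +     # fib(level-1)
                   exponent_of(history, nth_prime(level - 1)))  # fib(level-2)
        history *= nth_prime(level + 1) ** out
    return exponent_of(history, nth_prime(n + 1))
```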
So you can always tack on one more number: get the next prime, which is a composition of primitive recursive functions, and raise it to the power of the current level's output (exponentiation is primitive recursive). You then have all previous levels' outputs, and you can access any of those levels; that's how it works. So strong induction, that is, course-of-values recursion, is the same: by using encodings, you can replace it with a simple induction, a simple recursion, which uses only one variable and only one previous level's output. All right, good. That was the discussion on the importance of prime numbers; the sheet's layout came out odd, and I'll try to fix it up later. Now, another point I want to discuss. We mentioned last time that variations on minimization are what is missing, mathematically, between the primitive recursive, computable, and partially computable functions. Namely: primitive recursion uses bounded minimization; the computable functions use unbounded minimization when you know that it will halt; and the partially computable functions use the primitive recursive techniques plus this extra technique, unbounded minimization, even if you don't know whether or not it will halt. That's in terms of mathematical techniques. What about in terms of programming? This is all mathematical, but at some point Davis realized you're going to need a programming language. We talked about the minimum chip, the minimum set of arithmetic functions the hardware has to support. So the question is: what are the minimum types of commands that the programming language needs to support?
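The bounded/unbounded distinction is easy to see in code (my own sketch; the integer square root is a hypothetical example predicate, not from the text):

```python
def bounded_min(pred, bound):
    # Bounded minimization: least t <= bound with pred(t), else 0.
    # The search always terminates, which is why it stays
    # primitive recursive.
    for t in range(bound + 1):
        if pred(t):
            return t
    return 0

def unbounded_min(pred):
    # Unbounded minimization: least t with pred(t). If no such t
    # exists, this loops forever; that possibility is exactly what
    # takes us from primitive recursive to partially computable.
    t = 0
    while not pred(t):
        t += 1
    return t

# Example: integer square root as a bounded search, the least r
# with (r + 1)^2 > n.
def isqrt(n):
    return bounded_min(lambda r: (r + 1) ** 2 > n, n)
```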
So basically, the base cases are very simple: assignment and plus one. If you have assignment and plus one, then composition is assignment, as I mentioned, and recursion can be replaced with a for loop. If you remember, we've defined this before; this is our general recursion, here on lines 17 and 18 of the notes. You only recurse on y; x1 through xn are along for the ride. They're values you pass along, but you cannot allow them to change. Only one variable recurses, only one value changes, and, notably, that value is the level number, and it only changes by one. That's primitive recursion, and each level is based on some other function using the information you have. On line 18 you have the three categories of information, in order: the level number, the previous level's output, and the auxiliary passed-along variables, which are not allowed to change. And h and g can do whatever they want. All of this can be replaced by the following extremely simple for loop. First, rec = h(x1, ..., xn): that's an independent call because, as we said fifteen minutes ago, the base case is not yet part of the recursion. Not for us, anyway; in programming you fold the base case into the loop, but in primitive recursion you don't. The base case is a prerequisite, a precursor; it doesn't yet enter the loop called recursion, which starts only from level one onward. So the base case can do whatever it wants, and h does it for you. Then, at each level, the recursive function uses g and combines only the pieces of information available to it.
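The translation of primitive recursion into a for loop can be written out directly. This is a sketch under my own naming, not Davis's exact notation:

```python
def primitive_recursion(h, g):
    """Build f with f(0, *xs) = h(*xs) and f(y+1, *xs) = g(y, f(y, *xs), *xs),
    using nothing but a for loop, assignment, and calls to h and g."""
    def f(y, *xs):
        rec = h(*xs)                  # base case: a precursor, outside the loop
        for level in range(y):        # the loop IS the recursion, level 1 onward
            rec = g(level, rec, *xs)  # level number, previous output, fixed xs
        return rec
    return f

# Example: addition as recursive successor
add = primitive_recursion(lambda x: x, lambda level, rec, x: rec + 1)
```

The three pieces of information handed to g are exactly the three categories above: the level counter, the single previous output, and the unchanging auxiliaries.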
Again, the three pieces of information: the level number, the previous level's output, and the auxiliary passed-along variables, which are non-changing. You can read it on your own, but it's a very simple for loop. Putting this together, Davis proves that a for loop, assignment, and increment by one are sufficient. That's it: for loop, assignment, and increment by one. That's your reduced instruction set from a programming-language perspective, and with it you can code any primitive recursive function. The question is: what is missing for accomplishing any partially computable function, from a programming perspective? In the mathematical definition, what was missing is unbounded minimization as a search process. In order to understand Davis's answer, we need to understand the essence of a loop in programming languages. This is interesting, and it gets into something I would mention if I were teaching programming languages. There are many different types of loops, and Davis specifically chooses the for loop. But you have to understand something: historically, the for loop is very similar to, but is not, the for loop you use today. Davis means exactly, precisely the original for loop, as opposed to the while loop or anything else; the problem is that what you call the for loop is not what Davis calls the for loop. So let me quickly review what a loop is; then you'll understand the distinction, and then we can move on to Davis's answer to what's missing. There are four components to any code loop.
First, there must be a control variable that is initialized; this keeps track of which iteration of the loop you are up to. Second, you must test this control variable; this component decides whether the iteration is eligible, that is, whether you repeat or exit the loop (what we call break). Third, you have the body of the loop: the action to be repeated, found in the center of the loop. And finally, the control variable must change, because if it doesn't, you have an infinite loop: if the test was true the first time, it will be true the second time. A change to the control variable must occur in order to avoid an infinite loop. So these are the four components of any loop, and different types of loops use them in slightly different orders. Consider Java. Java has a for loop: you do initialization, then the condition, then the body, and the update doesn't happen until after the body. Then you have a while loop: you initialize, and then while the condition holds, you execute the code and change the control variable. And then you have do-while in Java: you initialize, you execute the code and change the control variable, and the test condition involving the control variable sits at the end of the loop, where you either exit or repeat. In its current form, the for loop accomplishes exactly the same as the while loop above. Your for loop does the initialization as a precursor, tests the condition, runs the body and the update, and goes back to the test. That's exactly the while loop, just a neater package. But originally this was not the case. The original for loop was not the while loop; it was simply for iterating through a sequence of numbers.
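The four components, and the modern for/while equivalence, can be made explicit. Here a C/Java-style `for (i = 0; i < n; i++)` loop is written as the while loop it desugars to (a sketch; the summing body is just a placeholder of my own):

```python
def sum_below(n):
    """for (i = 0; i < n; i++) { total += i; } spelled out as a while loop."""
    total = 0
    i = 0                # 1. initialize the control variable
    while i < n:         # 2. test the control variable: repeat or exit (break)
        total += i       # 3. body: the action to be repeated
        i += 1           # 4. change the control variable, after the body
    return total
```

All four components are present; the for loop merely packages them on one line.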
For i = 1 to n, do: print i. Simply iterate through a sequence of numbers. The initialization is i = 1; the control test, i less than or equal to n, is implied by the "to n"; and the body, in this case, is print i. You do still have something like this in your for loop over a collection. Now, philosophically, the current for loop, the C- and C++-style for loop, is similar to a while loop in that the condition, the test on the control variable, is in essence searching for the instance where the condition is met, so that the loop will stop and an infinite loop is prevented. "Searching" is the fascinating word here, and we mentioned it before: unbounded minimization, in essence, is a search process. That is what the while loop accomplishes. Our modern for loop does too, but the original for loop does not: it was only for repetition, not a search procedure. You think of your for loop differently only because you're more interested in the body, but from a theoretical point of view, it's the control variable that is what's really going on; the body serves the control variable. You can view it as if the control variable serves the body, and in the old for loop, it did. But in a modern while loop, the body provides a change to the control variable, to test and search for when the condition is met. Anyway, the bottom line is that the original for loop, line 79, was only for repetition and not a search procedure, and it is precisely this for loop that Davis uses in the above implementation of primitive recursive functions. What is missing in primitive recursion, to accomplish all partially computable functions, is the while loop, which is searching.
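The correspondence between unbounded minimization and the while loop can be written down directly. A minimal sketch, with names of my own choosing:

```python
def unbounded_min(pred):
    """The mu-operator: search for the LEAST y with pred(y) true.
    This is exactly a while loop. If no such y exists, it never halts,
    which is why the resulting function is only partially computable."""
    y = 0                  # initialize the control variable
    while not pred(y):     # search: we do not know when, or if, this ends
        y += 1             # the only change allowed: plus one
    return y

# Example: the least y with y*y >= n, i.e. the ceiling of the square root
sqrt_ceil = lambda n: unbounded_min(lambda y: y * y >= n)
```

The original for loop cannot express this, because its number of repetitions is fixed before the body ever runs; the while loop's repetitions are discovered during the search.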
Just as with unbounded minimization, we do not know when, or if, the condition on the control variable will be met. That is fascinating. So, to code all partially computable functions, one only requires: the while loop, which performs searching; the for loop, which is iteration or repetition of the body; assignment, which you always need for storing to memory; and increment by one. That is sufficient for any partially computable function. Fascinating. Anyway, as I said, Davis was a genius. He still is; he's still alive, I think in Berkeley, and he was an excellent professor. I had him for a year. So that's the point: it's more than just philosophical. From a programming-language perspective, that's all you need to be a minimal programming language. All right. Now comes a fascinating discussion, which I'll summarize in a snapshot; we'll go over some of it in a second. We had a question way back, which I've been avoiding answering, but it's really significant, and here it is. It has to do with time complexity, from an analysis-of-algorithms point of view. The base commands, for primitive recursive or now even for partially computable functions, are still only n, s, and u. Setting and returning the value zero: constant time. Plus one: constant time. Retrieving a value from an array or list: constant time (random access memory: go to the address, constant time). Good.
When we started off, we did primitive recursion, and from successor, which means plus one, we built addition: addition is recursive successor. That just requires, as we said above, a simple for loop, a simple repetition; that's order n. Now, multiplication is repeated addition, a certain number of times, so that's one more for loop, and presumably that's order n squared. Then comes exponentiation, the power function, which is recursive multiplication: one more for loop in our nested for loops. Power is a for loop on multiplication, multiplication is a for loop on addition, addition is a for loop on successor. So if we go back to the basics, just n, s, and u, and we use the for loop to implement recursion and assignment to implement composition, as we mentioned in the previous tab, then it should come out that the complexity of the power function is order n times order n times order n, because it's a for loop of a for loop of a for loop of adding one, plus an assignment and perhaps somewhere assigning zero. The problem is that we then conclude it's n cubed, and there's a simple argument that that's impossible. Why can't it be correct? Because if all you're doing is unraveling the recursive power into multiplication, the recursive multiplication into addition, and the recursive addition into plus one, going RISC, reduced instruction set, going real basic, then all you have is a triple for loop of plus one. If all you have is a triple for loop of plus one, and you claim the time complexity is n cubed, then the output should be a value no greater than n cubed, because all you did was add one n cubed times. But you're claiming that the output is exponential.
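The tower of for loops just described can be written out literally, using nothing but repetition, assignment, and plus one (a sketch; the function names are mine):

```python
def add(a, b):
    """Addition: a for loop of successor (+1)."""
    r = a
    for _ in range(b):
        r += 1
    return r

def mul(a, b):
    """Multiplication: a for loop of addition."""
    r = 0
    for _ in range(b):
        r = add(r, a)
    return r

def power(a, b):
    """Exponentiation: a for loop of multiplication."""
    r = 1
    for _ in range(b):
        r = mul(r, a)
    return r
```

Unraveled, `power` really is a triple for loop whose innermost operation is a single plus one, which is exactly what makes the n-cubed claim suspicious.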
In which case you must have performed plus one an exponential number of times. What went wrong? You have a triple for loop, which you claim is n cubed, but all we're doing is adding one; assignment doesn't change the value, and assigning zero only makes it less, not more. So how did you ever get an exponential output? You had to iterate plus one an exponential number of times; the complexity had to be exponential. Yet our basic understanding of for loops told us the complexity is n cubed. The answer has to do with the fundamental notion of induction and recursion we have been stressing: you must always start with the base case, and the base case lays down important groundwork, namely, which is the only variable allowed to change. What happens with these multiple for loops is that you are no longer iterating based on the initial variable; you're iterating based on values picked up along the way. That is not valid analysis of algorithms. If you unravel the values you picked up along the way, you realize that, in fact, you are doing an exponential amount of adding one relative to the original variable. That's quite important. Now, one last question and we'll conclude. Why doesn't analysis of algorithms discuss base-one arithmetic if, in fact, base-one hardware can implement a Turing machine? And Davis spends two chapters showing that it can. The summary of what I wrote here is that analysis of algorithms differentiates between the length of the input and its value. Remember, k bits of input can represent values up to two to the power of k. In some sense, that was part of the problem in the previous question, but we don't need to go that deep. The bottom line is that length and value are exponentially apart, but in base-one arithmetic they are equal. Now, that was not a problem for Turing.
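The "values picked up along the way" point can be demonstrated by instrumenting the triple loop to count every plus one it performs. A sketch of my own:

```python
def power_counting(a, b):
    """a**b via a triple loop of nothing but +1, counting every increment.
    The innermost bound r is a value picked up along the way (the running
    output), so the total work is exponential in b, not b cubed."""
    increments = 0
    r = 1
    for _ in range(b):            # power: b repetitions of multiplication
        s = 0
        for _ in range(a):        # multiplication: a repetitions of addition
            for _ in range(r):    # addition: r repetitions of +1, but r GROWS
                s += 1
                increments += 1
        r = s
    return r, increments
```

For a = 2, b = 3 this returns the value 8 after 14 increments (2 + 4 + 8): the increment count tracks the output's magnitude, not the cube of the input.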
Turing couldn't care less about efficiency. Not really; he cared in the long run, but he didn't feel that was his job, and he left it for the next generation. Turing was just showing what's possible, not what's efficient. Effective doesn't mean efficient; effective just means: will it ever halt, will it ever get done? But for analysis of algorithms to work, you can only start with base two. You can't start with base one, because in base one, length and value are the same. And in question number two here, I give you another fascinating example, which you can read on your own, related to why for loops aren't always as easy as you think in terms of complexity analysis: a trivial example of printing out values which in one case is order n and in the other case is exponential. As I mentioned in answering the previous question, nested for loops don't always mean polynomial time. All right, that's the end of class. Again, I will edit these notes and email them to you later. If you didn't have a chance, please text that you were present, and we will continue next Monday. Also, in case you walked in late: the first tab is an admin page with critical information regarding all exams and the final exam; it's called "Remainder of Exam Schedule." Make sure you read that first tab. And if you have any questions, rather than emailing me a Q-mail, wait until the next class; you have a lot to catch up on, and I assume things will work out. Thank you all, please text that you were present, and we will continue on Monday. Okay, bye all.