April 20. Let me start with a quick administrative summary; the previous set of notes has the full description of what's going on for the next several weeks. Today is the 20th; this Wednesday the 22nd and next Monday the 27th we still have new material, so April 27, a week from today, is the last lecture of new material. The one extra entry in the MCQ and lecture-keywords file is the research paper: if you were assigned the paper, enter it as if it were a lecture and do keywords and MCQs on it. Beyond that, after this Monday a week from today, there won't be any more lecture keywords or MCQs. A week from this Wednesday, April 29, quiz number three will be on Brightspace under Quizzes, and of course there is no virtual classroom that day. The week after that, the first week in May, namely May 4 and May 6, is a week of review for the final exam, done in two parts. On Monday, May 4, we're going to have an open session to answer and solve problems for you from any of the quizzes; they'll be typed into notes, and those notes will be sent to you. So the focus on May 4 is going over quizzes. That Wednesday, May 6, is open to anything from the semester: cumulative material, keywords if you're interested in going over them, or more questions from the quizzes, whatever you'd like answered. Any notes we type up that day, I hope to email you the same day.
Then the next week, the second week of May, May 11 and May 13, we are breaking the final exam into two parts, each worth 50% of the final. Monday, May 11, is keywords; May 13 is cumulative. Both are on Brightspace under Quizzes, during the same time slots and in the same manner as all the other exams you've taken this semester. CUNY 1st has another date for us, May 18, which we are not going to hold. If there's enough interest, I could prepare an optional lecture and post it asynchronously for you to listen to during finals week, but it would not count toward any grade, homework, or assignment. Anyway, that is the upcoming administrative portion. The topic for today is encodings. The question is the following. Primitive recursion, although simple, carries a seemingly restrictive assumption: it allows only one base case, just like simple induction, and it allows access to only one previous level's output, again like simple induction. Yet there are a number of very straightforward, not too complicated arithmetic functions, such as Fibonacci (here on lines four through six), which need access to more than one previous level. The way this is handled, which Davis does, goes back to Gödel, though it's hard to say that Gödel, in the early 1900s, was concerned about this particular angle.
But Gödel was concerned with the concept of taking many numbers and encoding them as one number, and we're going to go through several encodings suggested over the years to support the theory. Davis, our author and a founder of modern computability theory, certainly was concerned with the recursion issue. In this example, Fibonacci requires access to the two previous levels' values. The idea is to create an auxiliary function which passes only one number at a time; that one number encodes the two Fibonacci values. Please understand: this new encoded function, which passes one number per level, does not output Fibonacci values. You encode the base cases into one number, let the auxiliary function do its trick, and then decode at the end to recover the true Fibonacci value. The encoded function's outputs will not be meaningful to you directly, but it is proven that they encode all the numbers you need. That is the case of two numbers. Then there is what's called course-of-values recursion, based on strong induction, where you're allowed access to any previous level's output. How do you do that? It certainly also violates the primitive recursive restriction to one previous level. The answer uses prime number encodings; in the last class, the notes gave you the primitive recursive proofs that working with prime numbers still stays within the realm of primitive recursion. We're going to show Gödel's prime number encoding (people pronounce the name differently), which will allow you to encode almost any list of numbers.
We'll see the asterisk, the exception, in a moment. But it will allow you to encode all these numbers into one number, and it's the same idea: you rewrite a simulated function which passes the encoded message from level to level. That message is one number, one finite string of zeros and ones; internally the function knows how to use it and modify it so that it encodes one extra level, then passes it along. When you're done, you decode the message and get the result you need. All right, so let's go through these encodings. The one Davis uses for Fibonacci is called the pairing function. Before that, one issue that really applies to everything here. When you're dealing with cryptography or compression algorithms (and this is not compression, since you're sometimes going to get a bigger number, requiring more memory), the question is: when you replace values by other values meant to represent them, can you recover the original message exactly, bit by bit? That concern came up in imaging, where scientists realized that the human brain is pretty forgiving. Wherever you are right now, take a look at the walls. There may be pictures and things on the wall, or maybe fancy wallpaper, but generally the walls and the ceiling will have one straight color; you've certainly been in classrooms and many rooms where each wall looks like one solid color. Now, that's not actually true.
Whoever sprayed or painted the wall didn't put the same intensity of paint at every dot, so with a high-powered lens you'd see that the colors really differ. If you had very sensitive equipment and tried to store the picture of the wall digitally, you wouldn't get identical values; they'd be very similar, because the colors are very similar, but not the same. The brain, though, is forgiving: it simplifies, and says, let's consider it one color. Compression algorithms use that fact, and the result is called lossy, L-O-S-S-Y. There's something you lose, but psychologically you gain: by simplifying, you can compress the data further than you'd imagine. If you had to store every dot, and every dot had a slightly different color intensity because the paint was unevenly sprayed (very close, but not exactly the same), that's a lot of data, because each point would carry its own intensity value. The first kind of compression is called lossless: there is no loss. Lossy compression means you're willing to accept that, because you care more about saving memory, the image you end up with will show the wall as one color in my example. That's not the true data, but you don't mind, because your brain was going to do that anyway. That's the idea behind lossy versus lossless.
The first and most famous encoding that popularized this was JPEG imaging, and I think its cousin MPEG eventually incorporated the idea too: save space, because imaging takes up lots of memory, and the brain isn't really interested, at the microscopic level, in exactly what color each dot is; if it blends into the surroundings, the brain is willing to treat it as one color. So that's a very important issue whenever you do encodings and decodings similar to compression. The issue here is that we do want exactness: ours are going to be lossless. All right, here comes the pairing function. Gödel, as I said, wasn't interested in primitive recursive issues; that concern didn't exist yet. He wanted to know how to count. Cantor, and Gödel continuing his work, wanted to know how to count items in multiple dimensions and prove them countable. That's important for the Church-Turing thesis, because if a structure, say a three-dimensional one, is countable, then a computer can compute over it. So you need to encode multiple numbers into one number to show that you can count them. The pairing function takes two numbers, and is traditionally written with triangular brackets: you take the two numbers x and y, and they encode into one number z. The function uses a spin-off of odd and even. The second number, y, governs the odd factor: 2y + 1 on line 24, which in parentheses is always odd. And the even factor is not 2x but 2 to the power of x.
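As a sketch (function names are my own, not Davis's notation), here is the pairing function just defined, the minus-one adjustment Davis makes (explained below), its decoding, and the Fibonacci simulation described earlier, where one encoded number carries the two previous values from level to level:

```python
def pair_g(x, y):
    # Gödel's pairing: <x, y> = 2^x * (2y + 1); note it can never produce 0
    return 2**x * (2*y + 1)

def pair_d(x, y):
    # Davis's adjustment: subtract 1 so that <0, 0> maps to 0 (total on N)
    return pair_g(x, y) - 1

def unpair_d(z):
    # Decode Davis's value: factor z + 1 into (power of 2) * (odd part)
    z += 1
    x = 0
    while z % 2 == 0:
        z //= 2
        x += 1
    return x, (z - 1) // 2

def fib(n):
    # Auxiliary function: one number encodes the pair (F(k), F(k+1));
    # its intermediate values are NOT Fibonacci numbers themselves
    z = pair_d(0, 1)            # encode the base cases F(0) = 0, F(1) = 1
    for _ in range(n):
        a, b = unpair_d(z)
        z = pair_d(b, a + b)    # advance one level, re-encode
    return unpair_d(z)[0]       # decode at the end for the true value
```

The worked examples below come out as expected: `pair_g(0, 0)` is 1, `pair_g(3, 0)` is 8, and Davis's `pair_d(3, 0)` is 7.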
So z has an even component, 2^x, and an odd component, 2y + 1, and you multiply the two. That's what Gödel suggested; Davis had to modify it, and I want to explain why. Remember, we're dealing with the natural numbers, because we're encoding memory configurations, and the simplest configuration is all zeros, which is the value zero: the bits all contain zero. So, in other words, what would z(0, 0) equal? It would be 2^0 * (2*0 + 1). To be explicit, 2^0 is 1, so that's 1 * (0 + 1) = 1 * 1 = 1. Long-winded, but I wanted to go through the example explicitly. Now, Davis was concerned by that. Remember, that's Gödel's version; Gödel is happy, Gödel is done. Why was Davis concerned? Because, as we mentioned, Davis needs every memory configuration to be mappable: if you're mapping pairs of numbers, the mapping has to be a total function, and you need to encode and decode. But Gödel's function can only decode the values one or more, because Gödel's function cannot generate zero.
We just computed the minimum: taking any x or y greater than zero only makes the output bigger. But Davis requires a total function, so the decoding process has to be able to decode zero. What Davis did is subtract one. Call Gödel's version z_G, and Davis's version z_D = z_G - 1. Here's an example: for x = 3, y = 0, we get z_G(3, 0) = 2^3 * (2*0 + 1) = 8. But Davis subtracts one, so z_D(3, 0) = 7. That's the key adjustment, so let me emphasize it. So that's the pairing function. It's pretty straightforward: 2 to the power of x, the y component becomes an odd number, you multiply the two, and that's the encoding, except that you then subtract one. That way the value zero has a purpose: zero is the image of x = y = 0. So to Davis, z_D(0, 0) = 0, whereas to Gödel it's 1.
In Gödel's version, z_G(0, 0) = 1. The books don't write z_G and z_D; I'm doing that because I think it makes it a little easier right now to see what's going on. So that's another example, and that's the pairing function. Good. That handles two numbers. Gödel then wanted to encode any quantity of numbers, which is what I showed last time Davis uses for course-of-values recursion, where you have access to any previous level's output. The idea is easier to show by example than to write in words. Say we have four numbers: 14, 1, 17, and 2. Now, the coordinates, the dimensions, are not 1, 10, 100, and 1,000; rather, the bases are the primes: prime number one, prime number two, prime number three, prime number four. The first prime is 2, the second is 3, the third is 5, the fourth is 7. Each value becomes the exponent of the corresponding prime, and the result is going to be pretty big. Let's see what this actually is before we write it. 2 to the power of 14 is 16,384; 3 to the power of 1 is easy enough. And to emphasize which numbers are the primes, I'm going to put them in parentheses.
The notation is as follows: the values I'm putting in square brackets, just as before, and the primes in parentheses. Next we'd have 5 to the power of 17, but that's too big to be useful here, so let's change the example to 5 to the power of 3, so you get a number you can actually grab onto. Finally, 7 to the power of 2, which is 49. These individual numbers aren't the issue; the issue is that we then have to multiply them all together, as on line 42, and it comes out to a natural number, as it must. So [14, 1, 3, 2] encodes as 2^14 * 3^1 * 5^3 * 7^2 = 16,384 * 3 * 125 * 49 = 301,056,000. That is the Gödel prime encoding of 14, 1, 3, and 2. As you can imagine, these encoded values grow very big very fast. So that's P_G, Gödel's prime encoding; but then Davis subtracts one, for the same reason as before. Why? Consider the input 0, 0, 0, 0. This is going to raise a more significant issue in a moment.
Davis subtracts one because computability theory wants a bijective mapping from N to N, and N starts with zero, so the formula has to be adjusted by minus one. With Gödel's version, all zeros would give you one, and no number would decode to zero; zero could not decode to anything. But now we have a problem that the pairing function didn't have. The pairing function is fixed at two numbers; the Gödel prime encoding handles any quantity of numbers, not just four. Note: this example uses four input values, but the encoding itself accepts any number of values. Please remember that. And here's the issue: the encoding as stated is ambiguous, because any run of trailing zeros yields the same value. For example, any sequence of all zeros encodes to one for Gödel, or zero for Davis, no matter how many zeros there are. Either way it's a problem: how do you know how many zeros were encoded? So they had to limit the trailing zeros as follows, because otherwise you'd never know how many zeros were inputted. The only all-zero input allowed is one single zero.
So the encoding zero (Davis) or one (Gödel) denotes exactly one input value, a single zero. And if the input has non-zero values, the sequence cannot end with zeros, for the same reason: raising a prime to the zero just multiplies by one. For example, [14, 1, 3, 2, 0] equals [14, 1, 3, 2] in value, because the further primes, the fifth prime, the hundredth prime, raised to zero contribute factors of one and don't affect the value. An interesting point: a zero at the end leaves the value the same, but a zero earlier in the sequence makes the encoding bigger, since it shifts later values to bigger primes. Anyway, keep that slight restriction in mind: you can't encode a run of zeros, and if you want to encode a zero by itself, it can only be one single zero; otherwise you'd never know how many there were. Gödel himself introduced a third encoding, called the sequence number. So far we have the Gödel pairing function, which Davis uses for Fibonacci (I believe I work it out for you in the next tab), and Gödel's prime factorization encoding, which Davis uses to encode course-of-values recursion; any recurrence relation from discrete math could be encoded that way. Then Gödel, maybe even earlier, came up with a very simplistic scheme, because again, they were not concerned with efficiency.
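A sketch of the prime encoding just described (helper names are mine); it reproduces the worked example [14, 1, 3, 2] → 301,056,000 and decodes by dividing out each prime in turn:

```python
def primes(n):
    # First n primes by trial division; fine for a sketch, not for speed
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def prime_encode_g(values):
    # Gödel: multiply the i-th prime raised to the i-th value
    z = 1
    for p, v in zip(primes(len(values)), values):
        z *= p**v
    return z

def prime_encode_d(values):
    # Davis's adjustment: subtract 1 so the mapping can reach 0
    return prime_encode_g(values) - 1

def prime_decode_g(z):
    # Recover exponents prime by prime; trailing zeros are unrecoverable,
    # which is exactly the ambiguity noted above
    values, p = [], 2
    while z > 1:
        e = 0
        while z % p == 0:
            z //= p
            e += 1
        values.append(e)
        p += 1
        while any(p % q == 0 for q in range(2, p)):
            p += 1                      # advance to the next prime
    return values
```

Note that `prime_encode_g([14, 1, 3, 2, 0])` gives the same 301,056,000, which is why the trailing-zero restriction is needed.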
It's called the Gödel sequence number encoding, also known as GLE, Gödel length encoding; the L is for length. That inspired an actual practical encoding, RLE, run length encoding, which is used a bit in imaging even today; I'll tell you about RLE below. To demonstrate why an encoding does not have to be a compression, consider the Gödel sequence number encoding, a suggestion for encoding any list or sequence of numbers; he had this before the prime encoding. Suppose we have 2, 3, and 5, and we'd like to encode them into one number. The convention is to use the digit 1 as a delimiter indicating where each data field starts and ends, and runs of 0s to carry the data values themselves, in a very specific manner: the value is the length of the run. (Note: spaces are for legibility only and are not part of the encoding.) So to encode the three values 2, 3, and 5, there has to be a 1 before and after each value: a 1, then 00 for two, then a 1, then 000 for three, and so on. The value is the length. We discussed that last week; it finally answers a question we raised. It is technically true that you could build a computer on any positive number base of arithmetic: base two is our computers, base three was used in Eastern Europe, base ten is the abacus, and, as we've mentioned, even base one. Davis spends two chapters proving that even base one could technically work. But it's incredibly inefficient, and analysis of algorithms would not be applicable; you could not properly do analysis of algorithms on a base-one machine. You could build it; it can exist.
I don't know why you would, but it could exist; you just wouldn't be able to apply analysis of algorithms. Why? Because in base one, the value and the length are the same, whereas in base two and higher, the gap between a value and the number of bits needed to encode it is exponential: you need about log n bits to encode the value n, and in the opposite direction, k bits can reach values up to 2^k. That's why the word length matters here. So the scheme isn't practical, but Gödel suggested it. The true encoded string for 2, 3, 5 is then 1 00 1 000 1 00000 1. That is the Gödel sequence number encoding. But a similar idea was used, believe it or not, in imaging: run length encoding. For example, say we have 13 A's in a row. Imagine these are pixel color values, the wall example from before: you took a color image, compressed it lossy, and since the painter did intend one color, all the stored values come out the same, so you have 13 pixels in a row with color value A. You can compress that as "13 A". This was suggested for images because, perhaps after something like JPEG, colors close to each other get stored as the same number, so in some imaging situations you get better compression when runs of equal values appear. It's not a great image compression algorithm on its own.
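A sketch of both ideas (function names are my own): the Gödel sequence-number string for 2, 3, 5, and a toy run-length encoder for the 13-A example:

```python
def gle_encode(values):
    # Gödel sequence number: '1' delimiters around runs of '0's,
    # where each run's length IS the value (unary, hence "length encoding")
    return "1" + "".join("0" * v + "1" for v in values)

def gle_decode(s):
    # Split on the '1' delimiters; each field's length is the value
    return [len(field) for field in s.split("1")[1:-1]]

def rle_encode(pixels):
    # Run-length encoding: collapse each run into [count, value]
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1
        else:
            runs.append([1, p])
    return runs
```

Here `gle_encode([2, 3, 5])` produces the string 1 00 1 000 1 00000 1 from the transcript (without the legibility spaces), and `rle_encode("AAAAAAAAAAAAA")` collapses 13 A's to a single run.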
RLE is not the most popular compression, but people do use it in the following situation: after some other compression algorithm. Some compression algorithms are known to have the tendency that the compressed data contains similar or identical numbers near each other; some algorithms induce that. So some people use RLE as a second round: do your first compression, which is lossless, and since it may tend to leave runs of equal values in the compressed data, apply RLE to compress even further. Again, this sequence encoding would also let you encode course-of-values recursion, because extending the message is trivial: tack on that many zeros, follow with a one, and you've appended another previous level's output; just keep doing it. It's always a finite string, even though it's a whopper: eventually a really long string, a lot of memory, and a very big natural number. Gödel recommended it, but as we said, he eventually settled on prime number factorization. Now, the next two encodings are more of theoretical interest, but it's important to mention them as well-known attempts dealing with Gödel's suggestions above. Why did Gödel settle on exponentiation for the pairing function and the prime factorization encoding, rather than a polynomial or multinomial encoding? In both cases Gödel used exponentiation: the primes are raised to powers, two is raised to a power. The question these more modern researchers had is why Gödel forced himself to use powers, which makes the encoded value so unwieldy and big. Why not some simpler formula, such as a polynomial?
Can you or can't you? They were curious. Cantor did suggest the following: a polynomial pairing function, shown here in the notes. It does map pairs of natural numbers to single natural numbers, and since p(0, 0) = 0, you don't have to subtract one here. So why did Cantor stop at pairs? He wanted a family of polynomial encodings, but could not figure out any other polynomial encoding: they only knew of a polynomial that works properly for two values, x and y. Why not x, y, z, or ten values? They couldn't think of a multinomial; they couldn't extend this to a polynomial that can encode and decode every value in dimensions beyond the pair. Then Fueter and Pólya in 1923 proved that the above polynomial (up to swapping x and y) is the only possible degree-2 polynomial that can accomplish this encoding, and they conjectured that it is in fact the only polynomial of any degree that can. Again, by "such an encoding" we mean what Davis was concerned with: every possible number, or tuple of numbers in the multidimensional case, can be encoded and decoded. The decoding is the key part: given any natural number, you decode it into a sequence of numbers based on your polynomial; you don't want partial functions in the opposite direction. Davis requires that, and apparently it bothered Cantor as well.
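The on-screen formula isn't captured in the transcript; the standard Cantor pairing polynomial is p(x, y) = (x + y)(x + y + 1)/2 + y, a degree-2 polynomial with p(0, 0) = 0, so no minus-one adjustment is needed. A sketch with its inverse via triangular numbers:

```python
from math import isqrt

def cantor_pair(x, y):
    # Degree-2 polynomial bijection N x N -> N; note cantor_pair(0, 0) == 0
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    # Invert by finding the diagonal w = x + y containing z:
    # the diagonal starts at the triangular number t = w(w+1)/2
    w = (isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y
```

The polynomial walks the pairs diagonal by diagonal, which is why every natural number decodes to exactly one pair: no partial function in the opposite direction.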
Then, as I said, Chawla showed the generalization, using an equivalent combinatorial formulation, that extends this to encode any number of dimensions into a single number. It's more of a curiosity; we're not going to go through it. And just as Chawla was able to generalize Cantor's polynomial to encode n numbers, one can also generalize the exponentiation-based pairing function to encode n numbers, but it can be quite involved. The issue, in other words, is staying in the polynomial realm; Chawla did it with combinatorics, technically recursive combinatorics. But the point stands that Gödel employed exponentiation, which means the resulting numbers can be incredibly big even if your initial numbers are small, so it's a bit impractical. You could, using exponentiation, extend the pairing function to any number of numbers, but it's very messy; I once did it with a grad student. If you have any interest and want to challenge yourself, see if you can do it and send me the solution and I'll look it over. As we've mentioned many times before, for these encodings, remember that efficiency was not a concern: nobody was ever going to do concrete calculations with these functions; it was a theoretical result, to know that the mappings are doable. Now, beta encoding, a really different but fascinating one. Gödel came up with a fourth scheme called the beta function. It's easier to describe than any efficient scheme, requires only one arithmetic operation, the mod (remainder) function, and has a very short definition: beta(x, y, z) = x mod (1 + (z + 1) * y). That's the function; it's not yet the encoding.
Now comes the encoding, on line 137: the beta function lemma. Unbelievable — it's fascinating. For any sequence of natural numbers k0 through kn, there are natural numbers b and c such that, for every index i from 0 through n, β(b, c, i) = ki. In summary, the beta function encodes any finite sequence of natural numbers with two natural numbers, b and c, where the i-th number in the sequence is recovered by applying the beta function with those specific b and c. You can encode any number of numbers with just b and c — assuming you know how many numbers you started with, because otherwise the index i keeps going and you won't know whether a value was part of the message or not. So maybe you want to call it three numbers — b, c, and n — because you need to know what the last index is. That's unbelievable, and it's a straight arithmetic function. Super brilliant. As an example, the sequence 2, 0, 2, 1 can be encoded by b = 3,412,752 and c = 24 — no one ever said b and c are nice numbers. And to prove it: plugging these values in for the first four indices, you get back 2, 0, 2, 1. To be clear about the roles of the arguments in the definition: x = b, y = c, and z = i, the index. So the z is your index, and the 24 is the c.
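The lemma is easy to check by machine. Here is β in Python, verifying the lecture's example, plus a brute-force search for witnesses b and c — the search is a demo only; Gödel's proof of the lemma constructs b and c via the Chinese Remainder Theorem rather than searching:

```python
def beta(b: int, c: int, i: int) -> int:
    """Goedel's beta function: beta(b, c, i) = b mod (1 + c*(i + 1))."""
    return b % (1 + c * (i + 1))

# The lecture's example: b = 3,412,752 and c = 24 encode the sequence 2, 0, 2, 1.
assert [beta(3_412_752, 24, i) for i in range(4)] == [2, 0, 2, 1]

def search_bc(seq, b_max=100_000, c_max=50):
    """Brute-force witnesses (b, c) for a short sequence, or None
    if none lie within the search bounds."""
    for c in range(1, c_max):
        for b in range(b_max):
            if all(beta(b, c, i) == k for i, k in enumerate(seq)):
                return b, c
    return None
```

Running `search_bc([2, 0, 2, 1])` also succeeds, and it happens to find a much smaller witness pair than the one quoted in the lecture — the lemma guarantees existence, not uniqueness.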
Okay. Good. Fascinating. Now, Gödel was interested in this encoding in order to present his Incompleteness Theorem, which, for our course, is a very important theorem you should know. The Incompleteness Theorem says the following: any system of logical proofs — theorems, truths — that includes all theorems regarding arithmetic (call it Peano arithmetic; it's basic arithmetic) is either incomplete or inconsistent. You think you've proven it all? You have not. Again, any system of proofs — you think an AI can prove it all? Then either the AI made a mistake, or it's incomplete: it cannot prove all truths. That's Gödel's Incompleteness Theorem: any system of mathematical proofs that can incorporate arithmetic is either incomplete or inconsistent. As we discussed, anything that can be represented in a computer is represented by a finite string of zeros and ones, which is a natural number. Thus, any listing of such theorems or truths can be encoded by the beta function. This made Gödel's counting arguments about the proposed system easier — I'm just explaining why he liked the beta function. Formally, the incompleteness theorems say the following. One: no consistent system of axioms whose theorems can be listed by an effective algorithmic procedure is capable of proving all truths about the arithmetic of natural numbers. Two: such a system cannot demonstrate its own consistency. The system cannot prove that the system itself is correct — so it's incomplete. Now, there may be other things it can't prove. Gödel was only trying to exhibit at least one, and then he could say it's incomplete; he wasn't interested in figuring out all the things it can't prove. He found one.
Your mathematical system can't prove that your mathematical system is sound. Finally, as we mentioned last class, Ackermann's function cannot be helped by these encodings — it simply grows faster than any primitive recursive function. As such, it is said to dominate, or majorize, every primitive recursive function. All right, a quick example of the pairing function. I give it to you formally — this is Davis's formal presentation of encoding and decoding values using the pairing function, and specifically of how to rewrite Fibonacci so that it passes along only one value. So here's Fibonacci. It doesn't look like Fibonacci, because it's using the encodings and decodings. The notes go through the details if you want to read them. Now, I just mentioned Ackermann's function. Ackermann's function was not primitive recursive because it required recursing on two variables. Now, there exists a genre of problems in discrete mathematics that most discrete math books present as if induction over two variables is required. This group of problems is called the postage stamp problems. I mentioned this briefly last class, but I want to go through an example to explain why it's not actually an issue — even though a lot of books claim that it requires more, it doesn't really.
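The idea being described — carrying the two previous Fibonacci values inside one encoded number — can be sketched like this. This mirrors the idea only, not Davis's exact construction; the Cantor pairing function stands in for whichever encoding he uses, and the names are mine:

```python
import math

def pair(x: int, y: int) -> int:
    """Cantor pairing: one number encodes the pair (x, y)."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Inverse of pair."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

def fib_step(z: int) -> int:
    """One recursion step; the single carried value z encodes (F(n), F(n+1))."""
    a, b = unpair(z)
    return pair(b, a + b)

def fib(n: int) -> int:
    z = pair(0, 1)           # one base case, encoding (F(0), F(1)) = (0, 1)
    for _ in range(n):       # bottom-up: one previous level's output per step
        z = fib_step(z)
    return unpair(z)[0]
```

Each level consumes exactly one previous output (the number z), so Fibonacci fits the primitive recursive rubric after all.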
Because otherwise it would be like Ackermann, and it would be a mess, right? We claim that primitive recursion is based on simple induction, and these books claim there is a whole genre of problems in discrete math that require two variables when recursing or inducing. That can't be true — these are not messy problems like Ackermann. So something gives, and I want to explain, with a concrete example, why that is so. Here's the genre, on line six: given two denominations — that's what they're called in postage — i.e., face values of stamps, prove that any postage greater than or equal to a specific fixed value can be comprised of some independent multiple of each denomination of stamp. So you have $1.24, and you need so many 4-cent stamps and so many 7-cent stamps. In fact, there is a somewhat trivial way to rewrite the same problem so that the recursive/inductive step only requires multiples of the smaller of the two face values. So you do need the two different stamp values — but it's not true that the recursion requires both of them. It requires only one, and we'll explain why. The requirement for two denominations is only to set up the multiple base cases, of which there will be a number equal to the face value of the smaller denomination. Then, based on modular arithmetic, only multiples of the smaller value are required to obtain any further amount of postage. So it's not true: you need the two stamps just to set up the base cases. And as mentioned in previous classes, computer theory computes recursion in a bottom-up manner: we start from the base cases first, so that the base cases actually occur prior to any recursive level. And at your base cases, you can do whatever you want — you could use a hundred different values.
That's not going to violate our definition of primitive recursion, which allows, if you want to say it that way, one base case. All right, let's do an example, and you'll understand what I'm saying. For example: any postage greater than or equal to 18 cents can be made from multiples of 4-cent and 7-cent stamps. We now show our approach to this problem, which demonstrates that the recursive step only requires multiples of 4-cent stamps. As mentioned above, the number of base cases equals the smaller of the two values — here, 4, since we have 4-cent and 7-cent stamps. So we have 4 base cases, starting at the minimum, 18: 18 = 1·4 + 2·7 (4 plus 14 is 18); 19 = 3·4 + 1·7 (12 plus 7 is 19); 20 = 5·4 + 0·7; and 21 = 0·4 + 3·7. So in all of these base cases, 18, 19, 20, and 21 are made from multiples of zero or more 4-cent and 7-cent stamps. Now comes the recursive part. Any number greater than 21 can be made by adding multiples of 4-cent stamps to one of these four cases, based on modular arithmetic. The point is the following: 18, 19, 20, and 21 mod 4 are 2, 3, 0, and 1 — but so are 18 + i·4, 19 + j·4, 20 + k·4, and 21 + m·4. (I use i, j, k, and m to keep them independent; that's not important.) Think of it simply this way: pick any number and keep subtracting 4.
At some point, you're going to hit 18, 19, 20, or 21. But each time you subtract 4, you could use one 4-cent stamp. Then you hit the base case, which may use 7-cent stamps. Thus, there actually is only one variable recursing — namely, the number of 4-cent stamps. The appearance of two variables is only to set up the base cases, which occur prior to the first recursive step. Note that in reality we could have done the above with 7 in the same manner as with 4, but we chose 4, the minimum of 4 and 7, because that means fewer base cases. So the postage stamp genre does not change our perception of what we just said. It's a cute, interesting case, but it doesn't change anything. Again, the whole point of these encodings, for Davis's purposes, was to show that you really only have one base case and one previous level's output. It's all based on simple induction, simple recursion, and the fact that you have multiple base cases and/or multiple previous levels' outputs does not leave the rubric of primitive recursive. Okay. Now, the next tab is a little bit more definitional. Let me include a paragraph from last class, which I think we should emphasize at this point: using the Gödel prime encoding, you can encode any number of previous levels' outputs. That's the connection. All right — other recursions. This is more definitional than practical.
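The whole argument fits in a few lines of Python — only one quantity recurses, and the 7-cent stamps appear only in the base cases (the names here are my own):

```python
# Base cases: one per residue class mod 4 (the smaller denomination).
BASE = {18: (1, 2), 19: (3, 1), 20: (5, 0), 21: (0, 3)}   # n: (#fours, #sevens)

def stamps(n: int) -> tuple[int, int]:
    """Return (fours, sevens) with 4*fours + 7*sevens == n, for any n >= 18.
    Only ONE variable recurses: keep subtracting a 4-cent stamp until a
    base case is hit; the 7-cent stamps come only from the base cases."""
    if n in BASE:
        return BASE[n]
    fours, sevens = stamps(n - 4)    # single-variable recursion
    return fours + 1, sevens
```

The base-case table is the only place both denominations appear, which is exactly the claim: two denominations set up the base cases, then simple recursion on one variable does the rest.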
But I think it should be discussed. The first version we'll discuss is called mutual recursion. Assume prior known primitive recursive functions G and H. Now define two mutually primitive recursive functions, E and F: E is defined recursively in terms of F, and similarly, F is defined in terms of E. So you have two functions that recurse based on each other. E and F each get base cases, but the key is at lines 10 through 12: E(n+1) = G(n, F(n)), and F(n+1) = H(n, E(n)), where H is some other function and n is the level. So they use not their own previous level's output but the other one's previous level's output. I give the proof here — it's Davis's proof — but let's not get into it; read it if you want. The point is that you can prove that mutual recursion of two primitive recursive functions is still primitive recursive. Good. Let's do another one: simultaneous recursion. Again, you have primitive recursive functions G and H, and now we define two simultaneously defined primitive recursive functions, E and F. The difference, on line 43, is that E is defined recursively in terms of both E and F, and F is similarly defined recursively in terms of both E and F. Each is defined in terms of its own previous level's output as well as the previous level's output of its friend. So, one more time: G and H are your base functions, and then E is recursively based on E's previous level's output but also F's previous level's output, and F likewise is based on its own previous level's output but also the previous level of E. Their base cases are on lines 48 and following.
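Both patterns can be sketched bottom-up, base cases first, just as computer theory evaluates recursion. The particular G, H, and base cases here are my illustrative choices, not Davis's; for the simultaneous case I've picked G and H so that the pair (E, F) tracks consecutive Fibonacci numbers, tying back to the opening example:

```python
def mutual(n: int) -> tuple[int, int]:
    """Mutual recursion: E(t+1) = G(t, F(t)), F(t+1) = H(t, E(t)).
    Each function uses the OTHER's previous-level output."""
    G = lambda t, f: t + f          # any prior primitive recursive G
    H = lambda t, e: 2 * e + 1      # any prior primitive recursive H
    e, f = 1, 0                     # base cases E(0) = 1, F(0) = 0
    for t in range(n):
        e, f = G(t, f), H(t, e)     # simultaneous assignment: both read level t
    return e, f

def simultaneous(n: int) -> tuple[int, int]:
    """Simultaneous recursion: E(t+1) = G(t, E(t), F(t)) and
    F(t+1) = H(t, E(t), F(t)). Each uses its own AND its friend's
    previous-level output."""
    G = lambda t, e, f: f
    H = lambda t, e, f: e + f
    e, f = 0, 1                     # base cases E(0) = 0, F(0) = 1
    for t in range(n):
        e, f = G(t, e, f), H(t, e, f)
    return e, f
```

The simultaneous assignment `e, f = ...` is what guarantees each new level sees only the previous level's outputs, never a half-updated mixture.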
All right, so on lines 48 through 50: G does whatever it does, primitive recursively, on the level number n and the two previous levels' outputs, E and F; x is just an auxiliary input variable, along for the ride — it doesn't do anything. And likewise for F. We could have put an x above as well: if we wanted to, we could have had G(n, F(n), x). I took this directly from Davis, and in his example he didn't need the x, but the point is you could have auxiliary variables wherever you want; you don't need to have them. Anyway, that's called simultaneous recursion. Okay, now, two more I have to mention. We're not going to spend too much time on them, and it's a shame I'm only giving them the last minute of class, but they're important — you're going to learn them in analysis of algorithms. One of them is called divide and conquer, and it's the basis of what's called the master theorem — also known as the master method; some people call it the theorem, some the method, whatever you want. The idea: take, for example, merge sort. We won't go through it; you can look it up in your discrete math books. Divide and conquer recursion differs from other instances of primitive recursion in that the decision of which previous level's output you need to define the next level is based on a ratio: level n is based on level n divided by b. That's very different, because until now we used the previous level — minus 1, minus 2.
This is n divided by b, and that's called divide and conquer. Then you have something called dynamic programming, which is a misunderstood recursion. Dynamic programming seems to be a standard recursion, and it's usually solved with a double for-loop, which you may think is n squared, but in reality these problems are treated as exponential. And the reason I'm mentioning this is that it has to be so: these problems are what are called NP-complete problems, which we've studied, and those are assumed not to be polynomial, hence exponential. Yet the dynamic programming chapters of analysis-of-algorithms books present a double for-loop, which sounds like n squared — so obviously something else is going on. It goes beyond us to go through this at any length, but I want to make you aware of it. Look up the definition of dynamic programming: it's a type of recursion that recurses at different levels, but not in sync — it's a mixture of levels, where part of the program works on one set of levels and another part works on a different set. Maybe I'll throw in the definition later. All right, it's the end of class, so kindly, if you didn't have a chance, please text the word PRESENT. Again, make sure that you log in to your new email: it's your old email address, but not at Qmail — you have to log in at login.cuny.edu, and then you will have access to the notes. I have to send them through the new system; the old system gives me an error every time I send to Qmail, as students have asked me about over the last week.
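For the divide-and-conquer shape mentioned above — each level defined from level n // b rather than level n - 1 — merge sort is the standard illustration. A minimal sketch with b = 2:

```python
def merge_sort(a: list[int]) -> list[int]:
    """Divide and conquer: a problem of size n recurses on size n // 2
    (here b = 2), not on size n - 1 as in ordinary primitive recursion."""
    if len(a) <= 1:
        return a                                  # base case
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

The recurrence here is T(n) = 2·T(n/2) + n, which is exactly the a·T(n/b) + f(n) form the master theorem analyzes.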
If you didn't have a chance, please type the word PRESENT, and we will continue on Wednesday. Wednesday, we will continue with Cook's theorem — I think that's what we'll do; I'll double-check. All right, bye now.