Good morning. There was a bit of a snafu with signing in, so please text your presence; appreciate it. Let me set up the board and open the notes, and we'll get started. Today is April 22nd. First, an updated schedule: it's the same as before, just one class fewer. Monday will be the last lecture of new material. Keywords and MCQs finish on the 27th, except the last one; after that, research for your paper and your presentation. April 29th is the quiz, during the regular time slot. The first week of May is review, and the week after that we split up for the final. What we'd like to do today is build up toward Cook's theorem. I'm going to start slowly, just making sure everyone has the same prerequisite knowledge, which I'm sure you do, but just in case: automata in general, then Turing machines, then Cook's proof. Cook showed in 1971 that every problem in NP, and in particular every NP-complete problem, reduces to satisfiability. Satisfiability means you have a Boolean predicate whose inputs are zeros and ones. (In discrete math the inputs are not actually required to be zeros and ones, but whether they are or not doesn't affect the proof.) The output can only be zero or one: false is zero, true is one. The question is: your Boolean predicate, being a function, has many input variables; can you find an assignment of values to those inputs, a listing of those inputs, such that the Boolean predicate comes out true?
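Taken literally, that question can be answered by brute force: try every 0/1 assignment. A minimal sketch (the predicate below is a hypothetical example of mine, not one from the lecture; real SAT instances are usually CNF formulas, and brute force is exponential in the number of variables):

```python
from itertools import product

def satisfiable(predicate, n_vars):
    """Brute-force satisfiability check: try every 0/1 assignment.
    Returns the first satisfying assignment found, or None if none exists."""
    for assignment in product((0, 1), repeat=n_vars):
        if predicate(*assignment):
            return assignment
    return None

# A hypothetical three-input Boolean predicate: (x OR y) AND (NOT x OR z)
pred = lambda x, y, z: (x or y) and ((not x) or z)
print(satisfiable(pred, 3))   # prints the first satisfying assignment found
```

Note that the search stops at the first satisfying assignment: the decision question only asks whether one exists.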
If such an assignment exists, then you have satisfied the predicate; that's the terminology. There may well be several different assignments that make the predicate true, but we only need to know that one exists. Satisfiability is just a true/false decision problem: can you or can't you? It doesn't necessarily give you the actual satisfying assignment; it could, but it doesn't have to. Now, the original proof was based on a Turing machine, so we need to build that up. A Turing machine is, after all, just an automaton, a machine, plus extras. So let's start with the lowest-level simple automaton. A simple automaton is essentially a weighted graph. The machine M is over an alphabet, which traditionally is written Σ, the Greek capital sigma. As a graph it has vertices, but we don't call them vertices; we call them states. The edges are called transitions. For the states one generally doesn't use V; for whatever reason Q is used, and that has stuck as the accepted notation. Then you have a weighting function, and this weighting function maps edges (not vertices) not to numeric weights but to letters of the alphabet. In addition, you must have a start state, a specific state, a specific vertex, where your processing begins; traditionally that is q0. Finally, there are the final states, traditionally written F (more often F than QF): the final states are a subset F of Q, a bunch of states. Now note: a subset can be empty. So it is possible that a given automaton has no final states at all. I know that's a little ironic, because we said you must have them.
But by saying "a subset," you're allowing the empty set, because the empty set is a subset of every set. In that case the machine is trivial, in that everything is rejected. So that's the definition. The language of the machine, L(M), is the set of strings over the alphabet Σ such that, starting in the start state q0 and processing each character of the string according to the rules embedded in the weights on the transition edges, the machine ends up in a final state. We'll do an example in the next tab. Now, a little comment about weighted graphs in general: a weighted graph has vertices, edges, and a weighting function W. By convention, not by requirement, only the edges are weighted, and by convention the weights are positive numbers. But in the case of automata, the weights are tokens: alphabet letters, alphabet symbols. So they're not numbers, they're letters. So, to summarize: instead of V, E, W, a machine automaton uses the conventional notation. A machine is given by Q, Σ, and δ. Σ is the alphabet of tokens or symbols; these are, as I mentioned, the weights on the transitions. Q is the set of states, which in ordinary graph theory would be the vertices. δ, lowercase delta, is the transition function, which specifies the edges, called transitions. And how does it map? δ takes a state q crossed with a letter: you write these rules ahead of time, so the rule says, you're in state q, you've processed a letter a, and now what state do you go to? That's it: δ: Q × Σ → Q.
F, as we said, is the subset of final states, and q0 is the start state. Now for some more details, and an example in the next tab. Admittedly the picture isn't the greatest, because it's a little hard to draw in Excel, but here it is. We have four states: Q0, Q1, Q2, Q3. There are some conventions; ignore the edges and the tokens for a second. Notice the little arrowhead: instead of drawing a full arrow, one just draws the front end of an arrow, and that marks where to start. (It also looks like a greater-than sign, but that's irrelevant.) It's meant to be a pointer saying "start here," and conventionally that's Q0. You then go around, processing one inputted letter at a time, and when you're done processing the input, you want to end up in a final state. How do you know which is the final state? The final state is emphasized by a double circle. Typical drawings put the states in circles, but being Excel, we've used boxes; it's just easier to draw. Here, Q2 is the only state with a double ring, and that is meant to be the final state. The start state could also be a final state; any of the states could be final, since any subset is possible. In this example, only Q2 is the final state.
Now, how does it work? Since this is a graph, we can write down an adjacency matrix, over in the columns on the upper right, where the rows are the "from" states and the columns are the "to" states. So there is an edge from Q0 to Q1 labeled a; an edge from Q1 back to Q0 labeled b; an edge from Q0 down to Q3 labeled b; an edge from Q3 across to Q2 labeled c; and, interestingly, edges leaving the final state: Q2 to Q1 on a c, and Q2 back to Q0 on an a. That last part matters: the same way you enter the final state, you can leave it, and that will show up in one of the examples on the right. Now for some example strings. We'll trace a, b, a, b, b, c, c; then a, b, a, b, b, c; and then an interesting one, b, c, b, which we'll explain in a second. One warning: I changed the graph at the last minute so it would be easier to read, but in doing so I forgot to update the input strings I had prepared.
Let me straighten that out and trace the corrected examples. First, a, b, a, b, b, c, c. We start in Q0; follow along. Q0 on a goes to Q1. Q1 on b goes back to Q0. Q0 on a goes to Q1 again. Q1 on b goes back to Q0. Then Q0 on the third b goes to Q3. Q3 on c goes to Q2. And finally Q2 on c goes to Q1. We reject, since Q1 is not a final state. Now a different example: a, b, a, b, b, c. Again, Q0 goes to Q1 on a, Q1 back to Q0 on b, Q0 to Q1 on a, Q1 back to Q0 on the second b; then the third b takes us to Q3, and the c goes across to Q2. That accepts, since Q2 is a final state. (Since we used boxes instead of double circles, I'll put brackets around it just to emphasize the final state.) Before we do b, c, b, note something interesting about the first example: you passed through Q2, the final state, but you don't accept there, because you're not done yet; the extra c took you from Q2 on to Q1. In other words, the final state is actually not relevant while you're processing.
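These traces can be checked mechanically. Here is a minimal sketch in Python, with the state and letter names taken from the lecture's diagram; the dictionary plays the role of the transition function δ, deliberately left partial:

```python
# The example machine from the lecture, as a partial transition function:
# some (state, letter) pairs are deliberately undefined.
delta = {
    ("Q0", "a"): "Q1", ("Q0", "b"): "Q3",
    ("Q1", "b"): "Q0",
    ("Q2", "a"): "Q0", ("Q2", "c"): "Q1",
    ("Q3", "c"): "Q2",
}
start, final = "Q0", {"Q2"}

def accepts(word):
    """Run the automaton; reject if a transition is undefined."""
    state = start
    for letter in word:
        if (state, letter) not in delta:
            return False          # stuck: no rule for this state/letter
        state = delta[(state, letter)]
    return state in final         # final states matter only at the end

for w in ("ababbcc", "ababbc", "bcb"):
    print(w, "accept" if accepts(w) else "reject")
```

Note that the final-state test happens only after the loop, matching the point that being in Q2 mid-string means nothing.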
You want the string to end up in a final state, but the issue of the final state has no relevance while you're processing, while there are more letters left. It is only relevant after completion of the processing, the parsing of the input. In the first example, that last c got you to Q1 instead. Let me add something that may be helpful: a reminder of which letter got you from each state to the next, which is a little more consistent with the δ notation we just wrote. So: Q0 on a got you to Q1; Q1 on b to Q0; Q0 on a to Q1; Q1 on b to Q0; Q0 on b to Q3; Q3 on c to Q2; and then Q2 on c to Q1. I think that's a little easier to follow: here a, here b, here a, here b, another b, then c, and finally c. Now the question is the last one: b, c, b. Why is this an issue? We've touched on this before, when we defined and described partial non-determinism. Let's follow it. Q0 on b goes to Q3. Q3 on c goes to Q2. But now, Q2 on b goes where? We don't know, because there is no definition. Q2 on a goes back to Q0; Q2 on c goes to Q1; but state Q2 with input b is not defined. (Cross-reference: partial non-determinism.) The bottom line is that under the philosophy of partial non-determinism, we reject, even though Q2 is final. And that's even more interesting. The example I always give, and I think I may have used this analogy once before, is the Tom Hanks movie The Terminal. He's in JFK, and his passport is no longer valid.
It's good neither for the country he came from nor for America, and he hasn't stepped out of the building yet, hasn't gone through customs. The reason was a political coup back home. So he's stuck: his passport is no good going back, and no good going into the United States, and in the movie he lives in JFK's Terminal 4 for months. Really interesting. What's really beautiful about the movie is that at the end they reveal why he went through all of this mess in the first place: it was to honor his father. A beautiful movie, worthwhile to see; take it out of the library, or buy it, however you do it. So that's the analogous case. One last thing: the conventions. I don't have to go through them again; these are short notes on discussions we've already had, which I typed up as we went. So that's what it means to be a machine. Now let's go to higher levels. We know Noam Chomsky had four levels of computable languages, but some people include an outer ring. His emphasis was on computable languages; in fact, there are unsolvable, non-computable languages, so there really could have been a fifth level. Chomsky was not interested in that level (he did not call it minus one himself), but some people call that outer ring of unsolvable languages type minus one. We've talked about that. The key question is: what are the machines for these levels? The irony, as we've mentioned many times, is that the higher levels have lower type numbers. Regular is type three, and we've just discussed the finite state automaton that accepts it. The only thing is, it does not have any extra memory.
For these four levels, we're going to talk about the corresponding machines. Regular languages have the finite state automaton. Context-free languages have the push-down automaton, which accepts all context-free languages. Context-sensitive languages have the linear bounded automaton. And Turing machines are what accept the unrestricted languages, the highest level. ("Automata," by the way, is just the plural of "automaton.") The similarity among these is that the state transitions all work the same way. The differences are, first of all, the memories: how much memory you have. The finite state automaton has no extra memory. The push-down automaton has a single stack. The linear bounded automaton has a tape, but it is order n, linear in the input length. And the Turing machine has an infinite-length tape. What also differs are the extra actions due to the memory. For the finite state automaton there are none, because there is no extra memory. With the stack, you can push and pop. With a tape, you can move the tape left, move the tape right, or print on the tape. Now, here's an interesting question: we have an input alphabet; do we need an output alphabet? Colloquially, what you put on the auxiliary memory is your "output alphabet." It's not really an output alphabet, but some people call it that. So: is there an additional alphabet for the extra memory? This is most pronounced in the case of the pushdown automaton, and we'll explain why only that one.
A pushdown automaton can be written with the tuple (Q, Σ, Γ, δ, q0, F). Q is the set of states, Σ is the regular input alphabet, δ is the transition function, q0 is the start state, and F ⊆ Q is the set of final states, if there are any; all of these have been defined already. The new component, Γ (capital gamma, the right-angle symbol), is the stack alphabet: the characters that will be pushed and popped from the stack during the transitions. The stack alphabet can be the input alphabet as well, although many definitions keep them separate. Now, here's the interesting point, and why I said "most pronounced" for the PDA. Both the context-sensitive and the unrestricted languages use a tape: the linear bounded automaton is a tape machine, and the Turing machine is a tape machine. And recall that when using tapes, the input and output are what is on the tape. Thus, for these automata, the storage alphabet is not a special alphabet; it is the input alphabet. The definition of input and output when you have a tape is simply what's on the tape, so it's the same alphabet. But there is a need for this alphabet to contain one or more extra symbols, termed markers. One is a special blank symbol, not λ, the empty string, but an actual blank, like the one on your keyboard, to indicate that the current tape cell does not contain an alphabet letter. This is the case when you purchase a new tape, for example: it's not that nothing is on the tape; the tape has something, and that something is blanks. (In memory, it would be zeros.) You can also have extra marker symbols, which can be used to organize the data better on the tape. A simple example would be an input/output containing multiple data values.
It would be easier to identify the start and end of a given data value if it were delimited by special marker symbols. Imagine you had a hashtag: then you know where your first input is, hashtag, your second input, hashtag, et cetera. That would make the tape much easier to parse. It's not a requirement, and in fact not everyone uses markers. Everyone must have a blank symbol, but Davis tries to avoid markers; he likes keeping things as simple as possible, though he does use one in one case. Cook, in fact, did use an extra marker. We're not going to see that, because Cook's paper was written for mathematicians. I apologize, I misspoke: it has no diagrams, and that's the problem. It's lots and lots of mathematical notation, and it's very hard to read unless you're well attuned to mathematical notation. The proof we're going to give is identical, but we're going to use diagrams, and all of a sudden the proof becomes so much easier. Anyway, Cook needs to separate things on the tape, and he uses an extra marker as a result. And one last note, which we know: although the Turing machine has an infinite amount of memory available, one cannot actually use the infiniteness of that memory, since that would require an infinite loop. To access infinite memory, you would have to move one cell at a time an infinite number of times.
If that's the case, you have an infinite loop, which goes against Turing's notion of an effective computation: the machine must halt, stop, before giving any output. And here's a little discussion you can read, which describes what we just said. One more thing to remind you of: the role of non-determinism. At the regular level, non-deterministic and deterministic automata are the same. For context-free, deterministic is a proper subset of non-deterministic (not just a subset, a proper subset); in other words, the deterministic machine can't do as much. For context-sensitive, nobody knows. And for Turing machines, deterministic equals non-deterministic. Now, one other point. We're going to discuss different variations of the Turing machine, and it turns out they all compute the same; they have no extra power, even though some may seem to have more memory than the standard Turing machine. An interesting one, since we emphasized the PDA, the push-down automaton: what happens if you have a pushdown automaton with two stacks? You have your state transition table of a PDA, but with two stacks, perhaps with two different alphabets. Davis proves that this is, in fact, equivalent to a Turing machine.
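The simulation idea can be sketched directly in code: keep two stacks, with the scanned cell sitting on top of the right stack. This is a minimal illustration of the equivalence argument, with names of my own choosing rather than Davis's notation:

```python
BLANK = " "

class TwoStackTape:
    """Two stacks acting as one Turing-machine tape.
    `left` holds the cells to the left of the head (top = nearest cell);
    `right` holds the scanned cell and everything to its right (top = scanned)."""

    def __init__(self, word):
        self.left = []
        self.right = list(reversed(word))   # end of list = scanned cell

    def scan(self):
        return self.right[-1] if self.right else BLANK

    def print_(self, symbol):
        # pop-then-push simulates printing: overwrite the scanned cell
        if self.right:
            self.right.pop()
        self.right.append(symbol)

    def move_right(self):
        # pop one stack, push the other: the head slides one cell right
        self.left.append(self.right.pop() if self.right else BLANK)

    def move_left(self):
        self.right.append(self.left.pop() if self.left else BLANK)

t = TwoStackTape("abc")   # head starts on 'a'
t.print_("X")             # overwrite 'a' with 'X'
t.move_right()
print(t.scan())           # now scanning 'b'
```

Popping from an empty stack yields a blank, which mirrors the infinite supply of blank cells on a real tape.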
And if we make the drawing the right way, you'll realize that the two stacks act as if they were one tape, with the head sitting between the two stack tops. Pushing and popping can simulate moving the tape left or right: pop from one stack and push onto the other, and the head effectively slides over one cell. Pop-then-push likewise simulates printing: you pop the current symbol and push the replacement, as if you wrote over it. So you can simulate the whole machine, and a two-stack PDA is in fact equivalent to a Turing machine. Now let's work toward Cook's proof. We're going to look at some variations of the Turing machine and choose the one that's easiest for the proof. Cook didn't actually choose this one, but we will; it makes life much easier, and they're all equivalent, as Davis proves. Most treatments start with what's called the quadruple Turing machine. The standard transition function is embedded in the quadruple, as follows. If you are in a state qi, and the scan head is reading tape symbol aj, then you should either move left, move right, or print over the symbol aj with a symbol ak. That's the extra part added to our standard transition function, and all of it depends on how the user defines the rule: the user says what to do. And then the machine should transition to the state qm.
That last part, transitioning to qm, is the standard embedded transition function. Note that in this version the machine may not actually move, because it could print instead. So the standard part is embedded, and then you have the options: move the tape left, move the tape right, or print. The second version is called the quintuple machine. Here you must print, and the only option is whether you then move left or right. For the quadruple, the option is left, right, or print, which is why we noted that it may not actually move the tape: it doesn't have to under that definition, because it can choose to print instead. Here, you must print and you must move; the only question is whether left or right. Again, Davis shows in the book that all of these are equivalent to each other. That's variation number one. Variant number two: instead of a tape that is infinite in both directions, use a tape that is infinite in only one direction. A one-directional tape has an advantage: there is a definitive index zero for the tape. That's very important, because in the version of the proof we're going to show, we're going to use variant two; it's easiest, because now we can do addressing. With a tape infinite in both directions, you can't give an absolute address, only a relative one: you can say "three cells over," but you can't say "the first cell," because the tape is infinite in both directions. But if you make the tape finite starting at some place, you can call that cell zero and count upward in the one infinite direction. So you can do addressing.
If the tape is infinite in two directions, there is no index zero per se; addressing would require negative indices. That's variant number two. Variant number three is another variation on the tape: the tape is two-dimensional, infinite in both dimensions and in all directions. I seem to recall a cartoon character, Poindexter, who had an infinite two-dimensional board he would do science on, but I'd have to double-check my old cartoons. In some sense, the analogy is an Excel worksheet, as if it had an infinite number of rows and columns. Variant number four: a multi-track infinite tape. So variant three was two-dimensional, infinite in both dimensions; here, it's one tape, but with different tracks. (There used to be eight-track tapes; I don't know if you know that.) The point is, there are T tracks, so T symbols can be stored at the same location, and there are T heads, because you need to read the letter from each track. Multi-track is no better than a standard Turing machine. And finally, beyond multi-track, what would happen if you had a multi-tape machine? Here also, it's no different: you get no additional computational power beyond Turing machines. So the Church-Turing thesis is upheld, in the sense that there is nothing more computable than the Turing machine. In our proof, we are going to use a combination: the quintuple machine from variation number one, with the tape that is infinite in one direction only. All right; in order to understand what the proof is about, let's look at the following table.
Years ago, we used to do something called Play Computer, which simply meant you simulate the computer's computation with pencil and paper. So the question is: if we wanted to store what's going on in the Turing machine throughout the entire processing, how would we store or display it? First of all, we need a column for the cycles, each CPU cycle; call that time. For simplicity, we are going to start time at one; time zero means the tape as you got it, before you put it on the machine. And we're going to start the addresses on the tape at one as well. Remember, the tape is finite on one side and infinite on the other, so on the left we can call the first cell number one. It just makes life a little easier. So time zero is the initial tape, before you get started: you input the cells of the tape, and a bunch of them will be blank. Along the way, while you're processing, each row will contain the characters printed so far. And finally, when you're done (depending on how much time you have), your output will be printed there. Now, Davis has the convention, and he uses it in his book, that your input should always be only at the beginning of the tape and nothing afterwards, no straying around; and the output likewise, in orderly fashion, can only be at the beginning, with the tape cleaned up, so that at the end the output characters are the only printed characters remaining.
Davis puts them all at the beginning of the tape. We don't actually need that, but again, it's Davis's convention. So what are we going to keep track of? Time. We're dealing with NP problems, so the amount of time is p(n), a polynomial in the size of the input n. We have a polynomial clock, and that's how much time we give the machine. If we keep track of what happens to the Turing machine at every clock cycle, that means the table has one row for time zero, before you start, plus polynomially many clock cycles, which is a polynomial number of rows, and we're interested in what happens to the machine at each of those cycles. Next, because we can do addressing, we keep track of the pointer: which cell, which memory location, we're up to, and we can give it a number. Davis's convention is that you always start at the beginning of the input, and you always end with the pointer at the beginning of the input, which for us is the beginning of the tape; so you start and end at position one. Next, the state: the goal in processing is that you start in state q0 (actually, the machine is automatically set to q0), and when you're done, you should be in some final state qF. And then the rest of the row is the actual portion of the tape, the portion that has symbols on it. Remember, the tape is infinite in one direction, so not every cell is going to have symbols. But in our table, we only need to keep track of the cells that can have something printed on them. And remember, you can only move the tape one step at a time.
A quintuple always prints and then moves left or right, so you can only move one step at a time. Therefore the longest portion of the tape you ever have to worry about is polynomial. In the worst case, at every clock cycle you moved the tape to the right; but you only have polynomial time and can only move one cell per cycle, so even always moving right, you cannot have written into more cells than you had clock steps. You must always print, so something is always there, but you cannot touch more cells than the time allows — one per clock step. So although the tape itself is infinite, you could not have accessed more than a polynomial amount of it. Now, a few notes. I put a single star by "poly": the number of rows will be no more than polynomial, since the problem under discussion is NP-complete and must compute in a polynomial amount of time. A double star by tape cell number one: the reason we can label the first column of the tape definitively as tape cell number one is that we're assuming the variant that is finite on one side and infinite in the other direction — Turing machine variant number two. And finally a triple star by the polynomial amount of tape: the reason we only have to consider a polynomial number of cells in Cook's simulation is that we assumed Turing machine variant number one, the quintuple machine, which can only advance one cell at a time per clock cycle. And an NP problem has at most a polynomial number of time cycles.
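Playing computer, as described above, can be sketched in a few lines of code: a minimal simulation of a quintuple machine that records one table row per clock cycle. The rule format, the blank symbol "B", and the state names here are my own illustrative choices, not Davis's notation:

```python
def run_and_tabulate(rules, input_cells, max_steps):
    """Return one table row per clock cycle: (time, head_position, state, tape)."""
    tape = dict(enumerate(input_cells, start=1))  # cell 1 is the leftmost cell
    head, state = 1, "q0"
    rows = [(0, head, state, dict(tape))]         # time 0: input loaded, not started
    for t in range(1, max_steps + 1):
        key = (state, tape.get(head, "B"))        # "B" stands for blank
        if key not in rules:                      # no applicable rule: halt
            break
        state, printed, move = rules[key]
        tape[head] = printed                      # a quintuple always prints...
        head += move                              # ...then moves one cell (+1 or -1)
        rows.append((t, head, state, dict(tape)))
    return rows
```

Each entry in `rows` is exactly one row of the table: the clock time, the head position, the state, and the printed cells — everything else is assumed blank.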
I'm not the only person who thought of this diagram — I came up with it independently, but later I saw others had it too; it's similar to the diagram found in the Wikipedia article. When I did it, I did it on my own, realizing that Cook's argument needed a diagram. Cook also relied on these last three points, and he did so using the original quadruple Turing machine; we're going to use the equivalent quintuple variant to make the proof easier. According to variant number one, the quintuple machine, the machine must move left or right at each step of the processing. Since the processing only takes a polynomial amount of time — I just said this — the machine can move at most a polynomial number of cells even if it always moves right. Thus only a polynomial number of columns are in the table. Based on this, the table above only requires a polynomial amount of space: you have a polynomial number of rows, time zero through poly(n), so poly plus one rows; and poly plus three columns — the tape cells, which are at most poly, plus the three at the beginning that store the time, the position, and the state. Multiply them together and poly times poly equals poly, so this table only requires a polynomial amount of space. Consider the row for time equals zero: that is the status of the machine before the processing begins, as we mentioned. Okay, that was the setup; now we get to the core of the proof. If the non-deterministic Turing machine solved a given NP-complete problem — say job scheduling, randomly chosen — then the above table has all the information you would need to understand the processing.
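The poly-times-poly bookkeeping above can be made concrete. Assuming a clock bound of P(n) steps, the table dimensions work out as follows (a small illustration, not part of the proof):

```python
def table_size(P, n):
    """Cells in the table for clock bound P(n): (poly + 1) rows x (poly + 3) cols."""
    rows = P(n) + 1   # times 0 through P(n)
    cols = P(n) + 3   # P(n) tape cells, plus time, head position, and state
    return rows * cols
```

With P(n) = n squared, for example, the product has degree four — a bigger polynomial, but still a polynomial, which is the whole "poly times poly equals poly" point.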
That is, to understand how the Turing machine processed the input data and obtained the outputs. How can we verify that the above table documents the correct pathway to a solution? Let me repeat what the story is here, because this right here is the key to the whole proof — this was the brilliance of Cook. What we're doing, as I said, is playing computer. You can do this by hand; it'll take a lot of time, but not an infinite amount of time. So how much information do you have to write down in order to describe what's actively going on in the Turing machine at any given clock cycle? I say "actively" because there's an infinite amount of tape that's inactive: you're only actively using a polynomial amount of tape. By the way, we should remind ourselves of a definition we've mentioned before. Davis calls this a snapshot, with the notation (i, σ) — might as well use his letters — where i is the step we are up to, the clock cycle, the time, and σ is the data values of all active memory cells. So, in essence, each row of our table is a Davis snapshot. I'm emphasizing that it was Davis who points out this definition, though no one actually calls it a "Davis snapshot" — it's just called a snapshot. All right. Fine.
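Each row of the table, then, is a snapshot. A minimal sketch of the (i, σ) record — the field names are my own, not Davis's notation:

```python
from collections import namedtuple

# A Davis snapshot (i, sigma): i is the clock cycle; sigma is everything
# "active" -- split here into head position, state, and the printed cells.
Snapshot = namedtuple("Snapshot", ["i", "position", "state", "cells"])

# One hypothetical row: at time 3, head on cell 2, state q1, two cells printed.
row = Snapshot(i=3, position=2, state="q1", cells={1: "1", 2: "0"})
```

All cells not listed in `cells` are assumed blank, exactly as in the table.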
The clock column of each row tells us which clock time we're at, and the tape-cell entries tell us exactly what's on the active cells of memory. All inactive cells are assumed to be blank, as we said above. The input is loaded here on row 45, at time zero, and the rest of the tape is blank. Now, here's what I want to show you. The realization is the following: regardless of which NP-complete problem you're doing — job scheduling, chromatic number, vertex cover, node cover, knapsack, whatever — it can be represented by this table. In other words, once you know what the Turing machine is going to do, you could simulate it deterministically using this table. Everything you need, position-wise, is here; now you have to fill it in. And the key point we mentioned is that poly times poly is poly. That's a really key point: there are only a polynomial number of cells, of locations, in this table — lines 45 through 51, columns A through I. Now, regardless of where the problem came from, you're going to ask your incredibly brilliant non-deterministic computer: please fill this in. Think of it as asking an AI, but an AI that always knows what it's doing — that's the assumption of non-determinism. So you tell your non-deterministic AI: I want to solve job scheduling. I'll fill in row zero; you fill in rows one through poly(n). Tell me what the Turing machine would do, step by step, until I reach the final state with the actual output. I want an effective computation, and it must be done in a polynomial amount of time.
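For what it's worth, this table is exactly what Cook's reduction turns into Boolean variables for satisfiability: one variable per "fact" about a table position. A sketch of the three variable families — the tuple encoding and names here are my own illustrative choices:

```python
def cook_variables(P_n, symbols, states):
    """Enumerate Boolean variable families for a table with clock bound P_n.
    ("T", t, j, s): tape cell j holds symbol s at time t
    ("H", t, j):    the scan head is at cell j at time t
    ("Q", t, q):    the machine is in state q at time t
    """
    variables = []
    for t in range(P_n + 1):               # times 0 .. P_n
        for j in range(1, P_n + 1):        # at most P_n active cells
            for s in symbols:
                variables.append(("T", t, j, s))
            variables.append(("H", t, j))
        for q in states:
            variables.append(("Q", t, q))
    return variables
```

The count is (P_n + 1) rows times a per-row count that is linear in P_n, with the alphabet and state set constant — so polynomially many variables, which is why the formula Cook builds is polynomial-sized.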
So your non-deterministic AI is going to fill this all in, and it will do that regardless of what problem it came from. The only issue then becomes: how do I verify that this table does what the non-deterministic computer said it did? We have to verify. Verifying the initial and final rows is trivial. The initial row must start with time zero, position one, state Q0, the input here, and the rest blank. The final row must have position one and a final state, with the output there — so you know the solution to your problem — and the rest blank. That's easy to verify. Okay? Now, how do we verify the in-between rows? Let's take a peek at the table. Say we're at row i, time i, and the next row we're about to enter is i plus one. At time i we're in position j on the tape and in state q_i. Then, after processing one step of the Turing machine, at time i plus one, you're going to be in position j plus or minus one — we'll explain that in a second, but that's a really key point. The idea is simply that you can only move the tape one cell at a time, left or right. So if you're in position j, the next row either moves the scan head left, to j minus one, or right, to j plus one. And as for the state, say the symbol in position j at time i — I need another letter — is s_k. The question is: does a rule exist, were you given a rule, that from state q_i, scanning s_k, you go into state q_m? That's what you want to know, because that's what the non-deterministic Turing machine is telling you it did.
So if there is such a rule — you were in state q_i, you scanned in s_k (scanning means you read it from the tape; everything is on the tape), and you went to q_m — then you're good. And remember the order: the way we did it, the machine prints first — some symbol, I don't know which one — and then it either moves to the left or moves to the right. The bottom line is that there is at most a window of six cells you ever have to check. So although this is a really big polynomial-by-polynomial table, assuming the non-deterministic computer filled it in for you, you never have to check more than six cells at a time — a constant amount. So: a non-deterministic Turing machine guesses what should be in every square of the above table. Due to non-determinism, the machine guesses correctly, but you need to verify it in a polynomial amount of time — and there are only a polynomial number of things you have to check to make sure the table is consistent with the Turing machine, and that each row of the table follows directly from the previous row according to the transition rules of the corresponding Turing machine. Now, this table — lines 45 to 51; I don't know why I said 70 to 76, I mean 45 to 51 — would also give a proof of Savitch's theorem: that non-deterministic space is a subset of deterministic space squared, for any space function greater than or equal to n. Savitch's own version proved it for anything greater than or equal to log n; we will only be able to show it for anything at least n. The reason is that, once you're given all of this, you could deterministically solve the problem.
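The row-by-row check just described can be sketched as a small verifier. The row and rule formats here are illustrative; the point is that deciding whether one row legally follows the previous one is a purely local, mechanical check:

```python
def check_step(rules, row, next_row):
    """Check that next_row follows from row by one quintuple rule.
    A row is (time, head_position, state, tape_cells); rules map
    (state, scanned_symbol) -> (new_state, printed_symbol, move)."""
    t, j, q, tape = row
    t2, j2, q2, tape2 = next_row
    key = (q, tape.get(j, "B"))            # symbol scanned at position j ("B" = blank)
    if key not in rules:
        return False                       # no applicable rule
    new_state, printed, move = rules[key]
    if (t2, j2, q2) != (t + 1, j + move, new_state):
        return False
    expected = dict(tape)
    expected[j] = printed                  # quintuple: print at j, then move
    # a real verifier only inspects the constant-size window around j;
    # for simplicity this sketch compares the whole (polynomial) row
    return tape2 == expected
```

Running this on every pair of adjacent rows, plus the trivial checks on the first and last rows, is the entire verification — a polynomial number of constant-size checks.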
And if you're not worried about time, there is only a finite — exponential, but finite — number of tables that could ever appear. Only a number from zero to poly can appear in the time column, only one to poly in the position column, only states Q0 through QF in the state column, and only letters of the alphabet or blank in the tape cells — each set finite. So what you do is consider every possible filling-in of the table. There are an exponential number of them — two to the power of poly, or rather sigma to the power of poly, ways of filling it in — but the space class doesn't care about time, and one of those fillings will show you the correct solution to the problem. So non-deterministic space of f is a subset of deterministic space of f of n squared, where you don't care about time. Finally, one last thing to conclude. The Immerman–Szelepcsényi theorem states that non-deterministic space classes are closed under complementation. That's just something you should be aware of. A specific case is log space, so NL equals co-NL, its complement. We're not going to go through the proof. If you remember, I said here on line 123 that Savitch had to do extra work even to show that his theorem applies when you only have log n space; Immerman and Szelepcsényi were able to extend the complementation result there as well. All right, that's the end of the class. Of course, I will email you the notes later. If you didn't have a chance, kindly print that you're present. Again, the last lecture with new material is next Monday, and a week from today will be the quiz — it covers the last four or five lecture notes, and we'll go over the topics briefly on Monday. Thank you all, and we will continue on Monday.