Good morning. Kindly text that you're present, and then I'll pull up the board. Okay, let's share the screen and find the notes. Today is February 9th. Here we go. Good.

Hi, Professor, I just had a very quick question. On Brightspace there's a deadline for the multiple choice questions, the first part, but the... No, don't worry about it. Just continue to do the work. It's a great question, thank you for asking. I will email the class when things are finalized, and all the deadlines will be updated, so you'll have enough time. So just continue with the work; don't worry about the deadlines right now. There were technical reasons, not worth going into, why it got delayed beyond what I wanted. But the buck stops here, as they say, so I can't use any excuses.

Okay. So what I'd like to do today is complete our definition of non-determinism, which we started discussing last class. Non-determinism is a fascinating concept in that it's a theoretical concept: in and of itself, in its entirety, it doesn't exist, and yet the computer theorists of the late 1950s and early 1960s were willing to consider it. It's a fascinating exploration, and it does have practical implications when you consider how to implement it. (Again, when you log in in the future, maybe do audio only. I've now learned how to mute phones.)

So, the point is non-determinism. Fine, but first: what does determinism mean? Determinism, in the context of computer theory and programming, is patterned after the mathematical notion of a total function. A total function is well-defined everywhere.
Every input has an output; in fact, every input has a single output, a precise response that you know ahead of time. It's well-determined: f(3) = 5, f(6) = 8, because the definition of your function f defined it that way. You don't decide it on the fly. And that's a deterministic program. We give it such a fancy name, but really we just call it programming; we're so used to it. We only add the extra word, deterministic, to differentiate it from non-deterministic.

So what does it mean to be non-deterministic? You have a current situation: you're at a certain command, and you have a set of values in your memory. The CPU will be interpreting the current command and executing it based on those values, updating memory with new values if need be, and the control unit in the CPU has to determine which command to execute next. If the machine is non-deterministic, then one of those two things, or both, is not clearly defined. Meaning, it's not clear which command to take: based on the current command and the current values in memory, we don't know. The programmer did not know which one. They knew that one of two pathways will lead to success, but they didn't know which one.

My brother's story is not important, but it's a cute illustration of this. My brother was taking physics at a certain university, and his professor was a colleague of a Nobel Prize laureate; I forget which one. They worked together in the lab, and the professor told my brother that they had come up with the question that eventually earned the Nobel Prize. They had two approaches to it, and each of them decided to take a different one.
And, obviously, it was not the professor my brother had; the other professor's approach led to the true answer, and it led to a Nobel Prize in physics. So there are two pathways. You know one of them is going to be successful; you don't know which one. What are you going to do? You may flip a coin, and I'm not saying that's a reasonable approach here, but there is a notion of randomized, probabilistic algorithms; such considerations do exist in computer science. But non-determinism came up with a really wild concept instead. It said: you know what? The computer is going to be intelligent enough (it's not, but suppose it is) to figure it out. It's going to make the decision for us. Now, rolling the dice, flipping a coin, a random number generator: those could also make a decision for us, deciding which value to take, which path to take, which command to take. So non-determinism could only make sense in the theory, and this is what they were reasoning, if we guarantee that the machine makes the right choice. It is always guaranteed to make the right choice. Unbelievable.

Now, as futuristic as that sounds, Turing realized that you could actually always simulate that guarantee. Think of it using the modern concept of a thread, whether lightweight or heavyweight: the difference is whether the threads have their own independent memory pools. If the different processes, running at the same time on multiple cores or CPUs, all use the same memory storage, that's called lightweight; if they have independent memories, it's called heavyweight. The point is that you could...
Turing understood that if you have enough memory and enough CPUs, you can let each CPU try one of the possibilities. You don't know which of, say, four commands to do; you know that one of them will lead to the right answer. So if you have the parallel processing power, give each choice its own CPU with its own memory, let them run independently, and one of them will get the answer. Whoever gets the answer first raises a flag: hey, we're done. Turing understood that. Now, Turing didn't actually have parallel processing, but he did simulate it: he used a three-tape machine, three parallel memories, and showed how to simulate this, sort of like what in operating systems you would call a round-robin policy. You give a few moments to one of the choices, then the next, and you cycle through them.

By the way, about when non-determinism arose: I said before that I thought it was the 1960s, but it's actually earlier. It had to be around by the 1930s, because Turing was already considering this. So Turing understood that anything that could be specified non-deterministically could be achieved deterministically. Then what's the point? If you can simulate non-determinism, we could do it today by parallel processing; Turing did it with a round-robin policy over multiple memory tapes. Whichever version you use, you could do it with any computer, even the abstract Turing machine back then. The point is: how much time will it take? To Turing, everything boils down to time. You can simulate, but the simulation could take an exponential amount of time.
It could take a really long time. So for the non-deterministic concept to be worth anything, even in theory, it had to make an even bolder assumption: that the computer could figure out the answer instantaneously, in constant time. It will immediately figure out which pathway leads to success, which value gives the solution.

Now, one very important thing I should add. This is the part where, in the notes above, I quoted Dijkstra and Floyd. Dijkstra, building on Floyd, asked a simple question: what would the programming language look like? We talk about Turing's non-deterministic computer, but we're programmers; we understand what programming a deterministic computer is like. So what would programming a non-deterministic computer be like? And it's relevant to what we just said. He came to the conclusion that it's really the same thing. You have Java for deterministic programming; a Java for non-deterministic programming would differ by exactly one command. That's it. On line eight in the notes: choose a value for x from a set of values, x1 to xn, such that P(x) is true. Choose one of the following values. The key is that for this command to work, you, the programmer, had to list all the values, or the range of values. You had to tell the machine: I know that there is an answer in this range; I just don't know which one it is. I don't know which of the following ten values will get me the answer. I pruned the search; I determined that no other value could be the answer; one of these ten must be the answer. And that's it: the command says to the computer, please choose for me. Okay.
It's really important to understand that point: you, the programmer, have to provide the non-deterministic machine the list of options, and then the computer, according to non-determinism, decides for you. And as we said a moment ago, it has to decide in instantaneous time; otherwise it's just not worth it, because Turing showed you could simulate it deterministically.

Now, suppose you happen not to include the true value. Say, again on line eight of this non-deterministic program, you tell the computer: please choose for me a value for x from a set of values such that P(x) is true. P is the verifier that tells us we got a correct solution. And by accident, a typo, or maybe a logical error, you didn't include the true value; you gave other values. Then, as on line 12: garbage in, garbage out. You can't expect the non-deterministic computer to figure out a true value unless you gave it to the computer. That's the model of computation, and it's an important point.

Now, believe it or not, and if you've heard this before it may not be as surprising, they took the theory another step, which is even more interesting, even wilder. It's anecdotal, but it exists in the theory. They had a problem in the following sense: they wanted a precedent for non-deterministic machines. How could it be that the computer instantaneously tells you the answer, and it's guaranteed to be the right answer? Again, you may think: highly parallel computing. No, that was not their understanding of non-determinism. Their understanding is that the computer just knows the answer. It doesn't think about the answer; it just knows it.
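To make that one special command concrete, here is a minimal sketch in Python. The names `choose`, `candidates`, and `P` are my own illustration, not from the notes: a deterministic simulation of a single choose command simply tries each programmer-supplied value against the verifier P. The non-deterministic model assumes this happens in constant time; deterministically it costs up to one check per candidate.

```python
def choose(candidates, P):
    """Deterministic simulation of the non-deterministic 'choose' command:
    scan the programmer-supplied candidates and return one satisfying P.
    The non-deterministic model assumes this takes constant time instead."""
    for x in candidates:
        if P(x):
            return x
    # Garbage in, garbage out: if the programmer omitted the true value,
    # no machine, deterministic or not, can conjure it.
    raise ValueError("no candidate satisfies P")

# Example: choose a divisor of 91 from a pruned list of candidates.
d = choose(range(2, 13), lambda x: 91 % x == 0)
print(d)  # -> 7
```

Note how the verifier P and the candidate list together are the entire "program": the machine contributes only the choice.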
So they asked: what precedent is there? I have to tell you this because, first of all, it's hilarious; I think so, anyway. These highly theoretical scientists came up with the following precedent, and this is really true. But don't guess ahead: I'm going to say part of it and you're going to think one thing; I have to say both parts to give the full story. They came to the conclusion: well, there is a precedent, and it's this. You contact a prophet. P-R-O-P-H-E-T. I'm not kidding you; this is the theory. You're not in Philosophy 101, you're in advanced computer theory. They said you contact a prophet, and the prophet, who knows the future, will tell you the answer. Simple. Now we understand non-determinism. That's part one. Part two, so that you shouldn't have to guess, is that they decided which prophet to talk to. Actually, it was a prophetess, a lady, and it's written in history, in the literature: she was the prophetess at Delphi, in ancient Greece. You may know her from the literature: she had a crystal ball, and the crystal ball, or she herself, it's a little unclear, was called the Oracle. Huh. That would be an interesting name for a computer science company. I wonder, is it taken? Does anyone know of the name Oracle? Anyway, non-deterministic computing became known, already in the 1960s, as oracle computing. That was the equivalent of non-deterministic computation, because that was their metaphor, the precedent they were looking for to relate it to.
Now, because you're asking an outside source, another name was given to this type of computing: relativized computing. I have it in the notes in a second; the reason I didn't pull the screen down was so I wouldn't give away the punchline in case you hadn't heard this before. Relativized, as in relative to: your decision is obtained relative to consulting an external source, some source you normally would not have. Whatever that source is, your computation is relativized to it. So that's the model of non-determinism, and Turing understood you could simulate it deterministically; the bottom line is how much time it takes. That's what I have here: oracle computing, or relativized computing, on line 28.

Okay. I include the following because I find it really interesting; I've mentioned it to you before, and I think it's a caveat emptor type of lesson, a buyer beware. I won't take too long, and I have an anecdote I hope you'll like. It's a refrain I've possibly used before and use every so often: just because two different disciplines use the same term does not mean they have anything to do with each other. Every 10, 15, 20 years, in the philosophical blogs and literature, someone comes up, like a repeating loop, with the following question: since computer science has the concept of non-determinism, don't you see that the computer scientists have a model for free will? In other words, non-determinism is free will: the computer decides on its own. It didn't come from our programming; it's not logically determined; it instantaneously made a decision.
Of course, our assumption in non-determinism is that the machine always makes the right decision. Free will doesn't guarantee that you will make the right decision; it just says that you have the right to make a decision. Anyway, they come up with this analogy, and they always use it as a proof; I've seen it in the philosophy blogs and literature. We're not philosophers, but I want to mention a very important point about why the association is not true, and it will deepen our understanding of what I said before.

The analogy I want to use is a true situation. There used to be a TV show called The West Wing. Martin Sheen, Charlie Sheen's father, played the president, whose character was named Josiah Bartlet. The West Wing wasn't so much about the president; it was about the chief of staff, about how the White House runs more than how the president runs. But obviously you need a president. The show ran for seven years and stopped because the eight years would have been up; usually in the last year presidents are winding down, and they were afraid their scripts wouldn't be as good, so they stopped after seven. Now, if a show runs seven years, one of those years will be an actual presidential election; you can look up which years the show spanned. So there was some presidential election during the run. And the script for the show was apparently really good, in the sense that the president and vice president running the White House in the show were really good people.
And they were appreciated across the aisle, as they say, by all political parties. Now, in theory, in many states, maybe all states, I didn't have a chance to look up the voting law, you can go to the ballot box and say: I don't want to vote for anyone on your list; I want to write in my own candidate; I want to vote for myself. Legally, certainly in many states, you can do that. You have the right to go to vote and say: I don't want to vote for the ones you told me are running; I want to vote for someone else. And somewhere in the vicinity of 70,000 people wrote in this character played by Martin Sheen on The West Wing.

Okay, what's that got to do with our discussion? Very much. Remember the caveat, the warning, from before: doing the most extreme display of free will doesn't necessarily mean it's a good decision. But, in and of itself, the following is the ultimate display of free will, and this is deep in life: normally we're given a certain set of parameters, certain expectations, certain cultural norms, and you make the decision: I want to do something different. I want to choose something that's not on the list, a choice that wasn't given to me, and I'm going to do it anyway. That may not be the best decision, but it is the ultimate expression of free will: making a choice that isn't on the list of choices. And that's the key distinction between non-determinism in computer science and what philosophers, I can only imagine, would like to say about free will.
It's the following, as we said above: here, the choices must be given to the computer, and the computer must choose from one of those choices. That's the theoretical concept called non-determinism, and it is not the ultimate concept of free will. That's very important. The non-deterministic computer can only choose from the choices it's given; it can't decide to choose something else. And again, garbage in, garbage out: if the programmer did not provide all the choices, if the listed choices do not lead to the solution, then the non-deterministic computer will not figure it out. It can only figure it out if the programmer gave the true choice as one of the options. That's a very, very important point to remember when dealing with non-determinism. And it can be simulated by the parallel computing I alluded to before.

The other aspect, with a little bit of irony, which I have at the bottom of the notes (now at the top), is the following. In analysis of algorithms, CS 323 or 700, we make the assumption that all basic built-in commands take constant time: read line, print line, plus, divide. All the basic commands that are not control structures, not loops or recursion, are constant time. Now, the way Dijkstra, based on Floyd, presented it, non-deterministic Java, for example, is identical to deterministic Java except for that one command that tells the computer: choose one of the following for me. But that's a built-in command, and we just said built-in commands take the computer constant time.
The irony is that if you did an analysis of algorithms on a non-deterministic program, the complexity would come out having nothing to do with the non-deterministic command, because, as you know, in analysis of algorithms you drop all constants. I think that's a little humorous, but there's nothing you can do. Do understand it, though: the non-deterministic command is just constant time, constants get dropped according to the tenets of analysis of algorithms, and so it doesn't affect the complexity itself.

All right. Now, a very important point to keep in mind if you read different literature; it's repeated many times, colloquially. The difference between deterministic and non-deterministic computing boils down to this: the complexity of deterministic code is based on the number of steps, of instructions, that the code will actually require to solve the problem. The complexity of non-deterministic code is based on the number of decisions. What it hinges on is the ability to make decisions in constant time; what deterministic code does is follow the instructions. That's a colloquial perspective, but I think you should keep it in mind. So the question, as we said on line 60, all boils down to how fast that decision can be made. We're going to head toward a discussion based on Cook and Karp, exploring an important class of problems from the deterministic and non-deterministic perspectives. This little piece is just the intro; we'll be discussing more and then leading into the Karp paper.
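To illustrate the steps-versus-decisions distinction, here is a small Python sketch; the toy problem and all names are my own illustration. Finding a satisfying assignment for a boolean formula over n variables takes n non-deterministic decisions (one choose per variable) plus one check, while the deterministic simulation may enumerate up to 2^n assignments.

```python
from itertools import product

def sat_deterministic(n, formula):
    """Deterministic search: up to 2**n steps, trying every assignment."""
    for bits in product([False, True], repeat=n):
        if formula(bits):
            return bits
    return None

# A toy formula over 3 variables: (x0 or x1) and (not x1) and x2.
f = lambda b: (b[0] or b[1]) and (not b[1]) and b[2]

print(sat_deterministic(3, f))  # -> (True, False, True)
# Non-deterministic cost: 3 decisions (one per variable), then one check.
# Deterministic cost: up to 2**3 = 8 assignments tried.
```

The gap between n decisions and 2^n steps is exactly the exponential blow-up of the deterministic simulation mentioned above.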
As I mentioned at the end of the last class, Jack Edmonds, who was a friend of Karp, differentiated easy versus hard, in the context of basic computing, by time complexity. Jack Edmonds in the 1960s said: if the time complexity is polynomial, the problem is easy; if it's exponential, it's hard. This distinction was accepted by Cook and Karp for the space of polynomial-time algorithms, deterministic and non-deterministic. And this became the question of P = NP. Let me phrase it a little better. P stands for deterministic polynomial-time algorithms. That's our default, our normal programming, which is why we don't say D followed by P; we automatically just say P. So the question is: is the set of problems that can be computed on a deterministic computer in a polynomial amount of time exactly the same as the set of problems that a non-deterministic computer can solve in polynomial time, which is called NP? P is the set of problems solvable in a polynomial amount of time on a normal, deterministic computer, and the question is whether that is the same set of problems solvable in a polynomial amount of time on a non-deterministic computer.

An interesting argument about what is easy versus hard was proposed by Hartmanis, a famous computer scientist; I think he chaired the Computer Science Department at Cornell for a while. It was not, however, accepted by the community.
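One standard way to view NP, consistent with the choose-then-verify picture above: a problem is in NP if a proposed solution (the certificate the non-deterministic machine would "choose") can be verified deterministically in polynomial time, even when finding it deterministically seems to require exponential search. A small Python sketch with subset-sum; the function and variable names are my own illustration:

```python
def verify_subset_sum(numbers, target, chosen_indices):
    """Polynomial-time verifier: given a certificate (which indices to take),
    checking it is just a sum and a comparison, even though *finding* a
    correct certificate deterministically may take exponential search."""
    return sum(numbers[i] for i in chosen_indices) == target

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))   # 4 + 5 == 9  -> True
print(verify_subset_sum(nums, 9, [0, 1]))   # 3 + 34 != 9 -> False
```

The verifier plays the role of the predicate P from the choose command: the entire non-deterministic "power" is in guessing `chosen_indices`.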
He published an article arguing that we should think about easy versus hard totally differently. He suggested that a major factor in what's easy and what's hard should be whether you actually have enough time and space resources available to support the computation. Remember, Turing ignores time in some sense: for Turing it's only whether a problem is doable or not, what he called solvable or not. If you could prove that your algorithm solves the problem, but it takes a million years to run, Turing still calls that effective, believe it or not. So Hartmanis argued, and again people did not accept this, but it's an interesting argument, that n^100 is technically polynomial, and according to Edmonds would be considered easy, but no one could practically support the extreme amount of resources necessary to run it. So Hartmanis would call it hard. An interesting argument; again, it was not accepted.

Now, this leads to an interesting point. We said before that deterministic computation is, simplistically, modeled after the total function: a mathematical total function is well-defined; every input has an output, exactly one output, defined ahead of time, so every response is predictable. That's deterministic. Now, in discrete mathematics and other earlier math courses, you learned about a cousin of total functions, called partial functions. And partial functions raise a very interesting question. I don't know if I have it here; let me see. Let me put it at the bottom as homework. Homework means think about it, but I don't collect it.
Homework, as opposed to an assignment, which means I expect you to submit it, to upload it to Brightspace. Anyway, homework: perhaps you would think that partial functions are not that practical, because partial functions are not defined everywhere; where they're not defined, they can potentially give errors. So the homework is: show that on the faceplate of a scientific calculator (you can use a simulator on your computer, or a physical one if you happen to have it), there are at least eight different functions that are, in fact, partial. Unbelievable; your calculator has them. You may need more than one keystroke, depending on which calculator you have. But just by pressing buttons on your calculator, among the well-defined mathematical functions that we and the scientists use all the time, and thought were defined everywhere, there are at least eight that are in fact partial. So, here's my bad pun for the day: let's be impartial to partial functions. Great.

Okay. Now let's back up a second, because we need to address what happens when you're implementing a partial function and, whether by design or by accident, you get an input where the function is not defined. What happens? What does the program do? According to mathematicians, you get an error; colloquially we used to say the computer crashes, so I'll use that term. It's a strong term, but that's what they used. Maybe you don't know this, but certain terms actually come from actual situations, or so the stories go. The term debugging supposedly came about because in the early computers the memory cores were actually light bulbs: a bulb on was a one bit, a bulb off was a zero bit. And what happened was that the light bulbs would attract bugs.
The light or the heat, I don't know, would attract bugs, and that would sometimes cause the bulbs to crack, in which case what you thought was a one the computer now reads as a zero, because the bulb is off. So fixing the computer was called, literally, debugging. That's apparently where the term comes from, and it has stayed until today as the calling card for fixing code, even though originally it meant fixing the memory.

And where does "the computer crashes" come from? That apparently was also an actual event. The early memories were, as we said, light bulbs; can you imagine how much physical space that took up? There are pictures where you would need an entire gym, literally a full, legal-sized basketball court, like the Fitzgerald gym at Queens College, with rows and rows of racks of bulbs representing the memory. And that computer did less than your watch does. So they came up with drives, as we know them, which were small compared to a whole gym room, but not too small initially: the size of a big barrel, with multiple platters, like record players, spinning discs stacked one on top of the other, storing the memory, which by that time was magnetic. These discs were really big and a little heavy, so from time to time a disc could get a little wobbly and crash into another disc, causing a cascading fault, and the machine would halt because of the error. The disc crashed; the computer crashed. So that's where the term comes from. But it generically came to mean that you have a
A logical or mathematical... Uh... Function to compute. And there... There is no code. Or this contradictory... However that comes about. Contradictory signals. And the computer just can't react. So it crashes, as it were. It stops. Now, while that may be how mathematicians view... Uh... Uh... A... How the, uh... The function should work. It stops. Right? It just crashes. If we were implementing computer science... A mathematician would think that that's what happens. Computer theory could not accept that. Computer theory, uh... Did not allow that for... For reasons based on, uh... Believe it or not... Something called the unsolvability of the halting problem. Um... And... Which we won't talk about now. Uh... Hopefully we'll talk about it at a different time. And that issue... Came then... Came to the conclusion... That according to computer theory... If a function is partial... Uh... What happens is... The computer must go through an infinite loop. That's a... That's what computer theory answered... In order to... So it shouldn't have problems... With the... What's called the unsolvability of the halting problem. Anyway... Uh... It must go through an infinite loop. Uh... Okay... Let me... I'll put down line seven. Um... Uh... How should we do this? It was defined this way... So... Uh... In order not to have... In order... Not to cause a problem... With... The... Unsalvability... Unsalvability... Unsalvability... Of the halting problem. Okay... Uh... You know what? Let me just quickly mention it... Since we're... It's here... And if you haven't heard about it... Let me just quickly define it... And then we'll move on. Uh... The unsalvability of the halting problem... I'll get back to who's a member... But... I want... It's important that we have this definition... And then... I'll get back to the top of the notes in a second... Uh... Let's see... Uh... Uh... Unsalvability of the halting problem... Uh... Uh... States... That... No... Uh... Algorithm... Slash... 
Program... Can exist... That does the following... That... Uh... Can... Uh... Determine... For... A... In... A... Find it. Yeah... The嘛... Ha... procent... In... If... In... We're... I... From... So... Ex... Attila... A little driver, some external program. And this external program, the algorithm program, no problem can exist, right? This external program accepts lines of your code, your source code, and takes your provided input, because you would like to know ahead of time, why waste your time? Wouldn't it be nice that some external, you know, monitoring code could determine ahead of time whether your lines of code and a given input will enter into an infinite loop or not. So the unsolvability of the holding problems proves that no, it can't. You can't have it. So, actually, it was proven. You can't have it given our current models of computation. All right. What I was trying to mention about this infinite loop aspect is the following, because it impinges on, it affects a little bit the definition of who is a member, membership. So in discrete mathematics, you studied the notion of membership, right? You have a set, let's say the set of even numbers, and I ask you six. And I say, six, are you, even, are you, do you contain the value six? And the value six says true, the predicate, the membership predicate says true. Then I ask, even, how about the number seven? Is seven a member of your set? And then the membership predicate for the even set says false. No, it's not, right? So the way you studied membership in the case of discrete mathematics, it's a total function, a Boolean predicate, a total function. It knows how to answer who is a member, but more importantly for our discussion now, and now it also knows how to answer who is not a member. So in computer theory, they had to adjust this situation because of the unsolvability of the halting problem. 
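As a concrete sketch of the discrete-mathematics notion just described, here is the membership predicate for the set of even numbers, written as a total Boolean function (the function name is ours, for illustration):

```python
# Membership predicate for the set of even numbers: a *total* Boolean
# function. It answers both who IS a member and who is NOT a member,
# and always in finite time.

def is_even_member(n: int) -> bool:
    return n % 2 == 0

print(is_even_member(6))  # True  -- six is a member
print(is_even_member(7))  # False -- seven is not a member
```

That totality, answering both directions, is exactly what the halting problem will force computer theory to give up for certain sets.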
The issue is that you could artificially create a simple set: for a given program, the set of all inputs that cause that program to enter an infinite loop. It may be empty if it's a perfect program; otherwise, it's a set of numbers. (I may rearrange the first two paragraphs of the notes to be in a more orderly fashion, but here's the idea.) So now we have a problem. You just learned in discrete mathematics that membership is a total function, but based on the unsolvability of the halting problem, that can't be: the membership predicate for this set cannot exist for both who is a member and who is not a member. Now, "not a member" in this case is the easier direction. We want to know which numbers cause the infinite loop; therefore, all the other numbers do not. The membership predicate can answer the question of who does not. Why? Because it can act recklessly: just plug the number in as input and see if the program eventually halts. If it halts, you know that input doesn't cause an infinite loop. So testing whether a given number does not enter an infinite loop is an effective procedure. It may not be an efficient procedure, because it could take a long time, but it is an effective procedure. Just plug it in and try it. If it halts, you know that the answer is yes.
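That "just plug it in and run it" procedure can be sketched in Python, modeling programs as generators where each `yield` is one execution step. All the names here are ours, purely for illustration:

```python
# Semi-deciding "this program halts on input x" by simply running it.

def run_until_halt(program, x, max_steps=None):
    """Run program(x) step by step.

    Returns the number of steps if the program halts. On an input that
    never halts, this call itself never returns -- unless max_steps is
    given, in which case we give up and return None, which tells us
    NOTHING about halting (maybe wait a little longer...).
    """
    steps = 0
    for _ in program(x):
        steps += 1
        if max_steps is not None and steps >= max_steps:
            return None  # inconclusive
    return steps

def countdown(n):
    # Halts after n steps.
    while n > 0:
        yield
        n -= 1

def loop_forever(n):
    # Never halts, regardless of input.
    while True:
        yield

print(run_until_halt(countdown, 5))                    # 5 -- it halted
print(run_until_halt(loop_forever, 0, max_steps=10))   # None -- inconclusive
```

Note that `None` here is not a "does loop" verdict; cutting off after a step budget only says we have not yet seen it halt.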
So clearly, testing whether an input does not cause a given program to enter an infinite loop is decidable, meaning solvable, or, in the original terminology, effective. Those are slightly different flavors of the same idea. Simply run the program on that very input, and if the program halts, then clearly there was no infinite loop. So the question is, again, what happens where you don't know the answer? Here's the interesting thing. The point is this: consider the Boolean predicates of the following two sets, complements of each other: the inputs that do, and the inputs that do not, cause a given program to go through an infinite loop. According to the unsolvability of the halting problem, you cannot determine those that do go through an infinite loop. So suppose you had a theoretical tester for the halting problem, a program that could determine whether or not a run enters an infinite loop. What would that halting-problem tester do with an actual value that causes some other program to go through an infinite loop? How would it respond? You provided input x, program y, and then you have your tester p.
So it cannot be that the tester, given the input and the program code you're testing, simply crashes. Why not? Because crashing is a response. If the program code truly halted on the provided input, then the tester halts and reports that. So the fact that, in this scenario, the tester crashed would itself tell us that the input does cause an infinite loop; because if it didn't, the tester should have halted normally. Remember, for the inputs on which the code does not go through an infinite loop, testing is trivial: just plug it in and run it, and the run will halt. But what happens if you give this theoretical testing program an input that actually causes an infinite loop? It cannot simply halt or crash, because that is a response, and since it's not a normal halting, that response would tell you that the input goes through an infinite loop. But unsolvability guarantees that you will never know. So the only possible response of this tester is that it, too, has to go through an infinite loop, and in essence never tell you whether the program loops or not. Because maybe, if you just wait a little longer, it will halt. Maybe if you wait just a little longer, the program will halt, and you will find out the answer. And here, an anecdotal reference you can look up: "Seinfeld, four." I don't know if you remember that episode. They were waiting for a table at a restaurant.
And the maitre d' was favoring everyone else, because they were only there for the first time, or maybe everyone else tipped the maitre d' more money, I don't know. Anyway, the whole episode they're waiting, waiting, waiting, until finally they give up and leave. And two seconds later, the maitre d' looks at the list of people, realizes there's a table available, and screams out, "Seinfeld, four!" There were four people: Elaine, Kramer, Jerry, and George. Okay, so that's the anecdote for the moment.

So, putting this all together: based on this, computer theorists would not use the term "membership predicate," membership as a predicate, since this discussion shows that there are sets where you cannot determine both who is and who is not a member. Some numbers will cause an infinite loop and you'll never find out; maybe it's a member, maybe it isn't. Therefore, computer theorists prefer the label "characteristic function," which is not a great name, but that's what they call it, instead of membership. What characterizes the set is who is a member; we are saying nothing about who is not a member. All right, that was long-winded. I thought it would be easier to explain, but that's the influence of the halting problem.

So now comes the general question: how does computer theory deal with the partial function, which is sometimes not well defined? The context, from the previous tab, is that we were dealing with determinism. Deterministic computation is based on total functions: every input must cause an output; every input is well defined. So the question is, where do partial functions fit in? Ah. This is an interesting question.
I go through some definitions here just to remind you of discrete math. A deterministic mapping could be written as follows: a function f sub d, where d is for deterministic. On line 48 of the notes, hopefully you're familiar with standard mathematical mapping notation. It reads as follows: the function whose name is f sub d (that's what the colon is for) has the following map: a member of N, the natural numbers, is mapped (that's the arrow) to another member, or possibly the same member, of the natural numbers. So f sub d maps the natural numbers to the natural numbers: an element of the first set gets mapped to an element of the second set. In the deterministic case, the mapping is well defined: deterministic adds that the mapping is total, i.e., well defined everywhere, as explained above.

Now, that's deterministic. How would you write the same type of mapping for non-deterministic? Mathematically, the non-deterministic computation is modeled by the relation. Here, you have a set A and a set B, and elements of A get mapped to elements of B. On line 39, in the case of what are called mathematical relations, there is no restriction: any element of A can be mapped to any number of elements of B, and that plural, in parentheses, is important. You can have more than one. Anything goes, no restrictions: any a goes to any number of b's.
In the case of a partial mapping, any element of A is mapped to at most one element of B, and "at most one" includes zero. Possibly zero: that's our partial function. And in the case of a total mapping, every element of A has to be mapped to exactly one element of B. In a future class, we'll calculate upper bounds on the sizes of the spaces formed by these various mappings, and we'll do that for algorithmic reasons.

Now, coming back to us: on line 53, mathematically, the non-deterministic computation is modeled by the relation, because at any decision point the output, i.e., the next step to take, or the response, can be any one of a number of different outputs. In the computer literature, that's called a "don't care" state: you say to the computer, "Well, choose one." You did not take a particular stand as to which one to go to, as you did with deterministic computation. So a non-deterministic mapping would be f sub nd, for non-deterministic, where an input natural number doesn't go to just one number; it can go to any set of numbers, an element of the power set of N, any subset of the natural numbers. In a relation, one input can get you more than one output, and non-deterministic computation is like a relation.

But what about the partial function? It could be argued that, mathematically, it is closer to the non-deterministic mapping. Why? Because the empty set is always a member of the power set, and an empty output is exactly what makes a function partial. The power set of N is all subsets of N, and that includes the empty set; whereas on line 48, a natural number maps to a natural number, and the empty set is not a member of N. So mathematically, it could be argued that the partial function is actually closer to a non-deterministic computation.
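The mappings just described can be summarized in standard notation; this is only a restatement of the definitions above:

```latex
% Deterministic (total) mapping: every input has exactly one output.
f_d \colon \mathbb{N} \to \mathbb{N}

% General relation: any element of A may be related to any number of
% elements of B, including none.
R \subseteq A \times B

% Partial mapping: each a \in A maps to at most one b \in B (possibly none).
% Total mapping: each a \in A maps to exactly one b \in B.

% Non-deterministic mapping: one input may yield any subset of outputs,
% an element of the power set; the empty set models "no output".
f_{nd} \colon \mathbb{N} \to \mathcal{P}(\mathbb{N})
```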
However, based on what we said above, it also depends on what your computing device does. Computer theory says that partial functions, where not defined, go through an infinite loop. But practically, your calculator will not. On line 60: partial functions on computing devices tend to give an error message. And this is a predictable response, just like the crashing above, which is why computer theory couldn't have it. But your calculator does it: divide by zero and you will always get an error message. No matter how many times you try it, it will always be the same thing. You did not get an output, but you got a deterministic response. It always happens; it's predictable; you know exactly what it's going to do. So instead of a don't-care state, this is called an error state. On line 62: therefore, from a practical computing perspective, it can be argued that the partial function is a deterministic computation.

So which one is it? Believe it or not, on line 66, the author Sudkamp and others in computer theory books say it's a little bit of both, so they call partial mappings "partially deterministic." A little bit of each, besides being a really bad pun. Anyway, that's the case, but it's an interesting discussion, the role of the partial function in the context of the theory. So, one last time: according to computer theory, the partial function, where not defined, goes through an infinite loop, whereas in practical computation it crashes, or you catch it by throwing an exception, et cetera.

Okay. Now, what we're leading up to is setting the stage for a discussion of what Cook and Karp did, which was to analyze how hard non-deterministic programs are when you give them a polynomial amount of time. The first problem analyzed was called satisfiability, and it was analyzed by Cook in 1971. What is satisfiability?
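Before turning to satisfiability, the "error state" behavior described above can be sketched concretely; a toy Python example (the function name is ours):

```python
# A partial function on a practical device: undefined at x == 0,
# but the response there is a predictable *error state*, not a
# "don't care" state.

def reciprocal(x):
    return 1 / x      # partial: undefined at x == 0

print(reciprocal(4))  # 0.25 -- defined input, normal output

try:
    reciprocal(0)
except ZeroDivisionError as e:
    # Every run gives exactly the same response: deterministic,
    # even though there is no numeric output.
    print("error state:", e)
```

Computer theory's convention, by contrast, would have the computation loop forever at x == 0 rather than return this predictable response.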
Satisfiability says the following. You have a Boolean predicate: a function p(x1, ..., xn) with n inputs. For now we'll assume, as I alluded to in a previous lecture note, that although we later relax this and allow the inputs to be any numbers, in Cook's and Karp's papers the inputs were Boolean as well: zeros and ones. Boolean predicates are and, or, not combinations of the inputs, and x1 through xn have the values 0 or 1. You would like to know, for a given predicate p, when is that predicate true? That's called satisfying the predicate: finding inputs x1 through xn, a sequence of n zeros and ones, such that the predicate is true.

Now, Cook showed, and this is unbelievable, that every problem in the class NP must reduce to satisfiability. At the core, to understand how computationally hard a non-deterministic program that runs in a polynomial amount of time is, it all boils down to understanding satisfiability. Every such problem has a little bit of satisfiability in it. We'll talk more about that in a different class.

Now, here comes a question, and it has to do with this word "reduce." Cook showed that every problem in NP reduces to satisfiability. Again, NP, which we defined before today, is the class of all problems solved by algorithms on non-deterministic machines in a polynomial amount of time; some problems are even harder than that. So any problem in that class boils down to the problem of satisfiability. When Cook did that, he, in essence, defined reduction in one direction.
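Satisfiability as just defined can be checked by brute force over all 2^n assignments; a minimal Python sketch (the helper names and the example predicates are ours):

```python
from itertools import product

def satisfy(predicate, n):
    """Return a tuple of n 0/1 values making `predicate` true, or None.

    Brute force: tries all 2**n assignments, so this is exponential;
    it only illustrates the *definition* of satisfiability.
    """
    for assignment in product((0, 1), repeat=n):
        if predicate(*assignment):
            return assignment
    return None

# Example: p(x1, x2, x3) = (x1 or x2) and ((not x2) or x3)
p = lambda x1, x2, x3: (x1 or x2) and ((not x2) or x3)
print(satisfy(p, 3))   # a satisfying assignment, e.g. (0, 1, 1)

# An unsatisfiable predicate has no such assignment.
q = lambda x1: x1 and not x1
print(satisfy(q, 1))   # None
```

The exponential sweep is the part a non-deterministic machine sidesteps: it can pick a satisfying assignment along one of its possible computation paths.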
Cook, in essence, is speaking to this notion we're trying to define in this tab, the notion of reducing. What does it mean to reduce a problem? That you distill it, and at its core it is really satisfiability. You thought you were doing a graph theory problem; you thought you were solving linear equations of a certain type; you thought you were doing optimization problems; and you show your problem is as hard as the NP class. Cook says it all distills down to satisfiability. That process is called reduction. So Cook defines reduction in one direction: your problem reduces to satisfiability; problem A reduces to problem B. In general, that's how Cook defined reduction.

Karp argued that, to show hardness, you should show the problems are equally hard, so he did it in two directions: one must reduce the two problems to each other. The thing is, in Karp's paper, he only shows one direction, namely that satisfiability reduces to the other problems, and he does it for 20 more problems. As I will constantly remind you: in the literature, we call this "Karp 21." That's a misnomer; it's not 100% accurate, because it's Cook one, Karp 20. But for some reason, everyone forgets Cook, and we say Karp 21. The satisfiability result was not Karp's; he cites it. It's Cook's, from the year before: Stephen Cook. But anyway, people still call Richard Karp's work Karp 21.

So why does Karp only show one direction in his paper, that satisfiability reduces to the other problems? The answer is that Cook already showed the other direction. Cook showed that every problem in NP reduces to satisfiability. Karp said, let's show both directions; so he only has to show the remaining one, since Cook already showed one of the directions.
Cook showed that your problem, if it's in NP, reduces to satisfiability. Karp is trying to figure out which problems satisfiability reduces to. So he only has to show one direction, because Cook already showed the other direction.

So far, we have two general definitions of the notion of reduction. You see, although these definitions came about in the discussion of NP, computer theorists took this notion of reduction to other classes. I mentioned this in a previous note: hardness doesn't only deal with the NP class. That's the most popular class to discuss it with, but you can have hardness for any complexity class C, and there are something like a thousand different such classes. NP is just one, a major one, but only one of many. So the notion of reduction applies elsewhere.

So far, then, two definitions of reduction. Cook says that to show hardness, you only have to show that your problem reduces to satisfiability, and then we will say your problem is as hard as the NP class. Karp says that's not enough: you have to show the two problems reduce to each other, and then you can say your problem is as hard as the NP class. Okay, interesting.

There was a third definition of reduction, which we will not use, given by Levin. I shouldn't say it was necessarily based on, but it was inspired by the fact, as we said, that non-determinism is relativized computing. Levin said that reduction is about seeking an external piece of information, or source, that will help you solve the second problem given the first problem. The hardness is then the cost of obtaining that missing piece of information. So to Levin, it's all about the relativized view. You call your problem hard.
But if you only had a certain piece of information, and you think about what piece that should be, the piece that would help you solve your problem, then you wouldn't think your problem is so hard. In Levin's mind, it boils down to the cost of obtaining that missing piece. So that is Levin's notion. We now have three definitions of reduction, and we will continue this discussion of reduction next time.

Please note: I make reference here to Karp's paper, and I'm going to leave this in the notes when I send them to you. Let me put a note: the following discussion is for the next lecture, but I'm leaving it here because I would like you to look up Karp's paper, so that it will be easier when we discuss it. It's very simple, but I want us to go through it together.

All right, it's the end of class. If you didn't have a chance earlier, kindly text the word "present" to the chat. And kindly use the word "present" in the future. Thank you all.