Give me two seconds while I sort out the screen sharing. All right, it's on the screen, and hopefully everyone can read it properly. One last time: if you walked in and haven't done so, please text the word "present" — that's in lieu of attendance. Thank you all. Let me remind you about some administrative items, and then we'll talk about the notes themselves. Admin-wise, I emailed you last night about grade codes, and it's very important. On the Brightspace system, under Quizzes, you'll find an active quiz called Grade Codes. A few quick comments, as I mentioned in the email. I use the quizzes format to collect student responses — for project submissions and the like — not as something that counts as an actual quiz. There are no points for completing it, but not doing it carries a 2% penalty off your total grade at the end of the semester. So you have to do it. You have plenty of time, and it takes literally less than a minute.
The quiz will simply present you with some randomly generated strings. You select one, submit, and write that string down. Do not share your grade-code string with anyone else, as grades will be distributed using it. Another administrative point: I use Brightspace to collect student responses and assignment uploads when they're due, but I do not use it for grading. Grading I do manually, after downloading the collected data, and then I email the class the actual grades, identified by grade code, as I just said. In case you don't remember your grade code at some point, the code you selected — one each — will remain available to you throughout the semester in the same place, under the Brightspace quizzes. So I will keep your selected grade code available to you all semester. There are other uses I have for grade codes, but they don't apply this semester, so let's not worry about them. Anything else admin-wise? We have a faculty meeting later today about what types of courses we'll be able to offer during the summer and next semester, but that doesn't affect us right now. Okay, so let's go to the notes. All right, so: computer theory. Originally, it was not concerned so much with how much time an algorithm would take, but rather with whether the computation can be done in the first place — what Turing termed an effective computation. That's very important to understand.
In other words, analysis of algorithms — CS 323 or 700, whichever version you're taking — is very much concerned with the amount of time and space an algorithm requires in order to properly execute. I say "recall," but that assumes you've taken 323, the analysis of algorithms course, which may not be the case; it's probable your discrete math instructor at least mentioned it briefly. So: recall that analysis of algorithms employs two simplifications for categorizing complexity — complexity being a formula for how much time (one type of complexity) and, separately, how much space an algorithm uses. A point we mentioned last time, which I'll elaborate slightly today: Turing tried to marginalize space, but only to free up his mind to build a broader understanding of time. The fact is that space has a tremendous impact, in theory, on time itself — it's sort of the fourth dimension, right? For any Star Trek fans out there: space, the final frontier. So you have time and space. In practice today, space is marginalized because we have so much memory; only in intensive computations, or media applications that generate lots and lots of data — graphics data, et cetera — does space become an issue. For most algorithms you'll ever write, space is not an issue, so most people aren't very concerned about wasting it. A colleague, Dr. Waxman, used to give a course on optimizing time and space in programming, in C++, dealing with pointers and the underlying addresses.
Anyway, the bottom line is that all of that, quite useful in practical programming, was not a concern for the theory people. Even analysis of algorithms, which should be concerned with it, wasn't interested in the specific exact amounts of time and space; it was more interested in comparing different algorithms, to get a sense of which is more complex than the other. And although 2x^2 is more than 1.5x^2, and x^2 + 7x is more than x^2 + 6x, and even at the lowest term +5 is greater than +4, in the analysis-of-algorithms mindset those were all sort of similar — in the same bandwidth, as it were. So they came up with two simplifications, which we should talk about, because at some point, when we do some algorithms, we'll need to understand why these complexities are defined the way they are.

First: once you've figured out a true mathematical model of your algorithm's cost, you drop the lower-order terms. For example, if time or space were modeled by 5x^2 + 7x + 4, then 7x + 4 are the lower-order terms. "Lower order" simply means lower exponent: x^2 is power 2, x is power 1, and 4 is technically 4 times x to the power 0. Those get dropped, leaving 5x^2. (The little upside-down v, the caret ^, is an accepted format for "raised to the power of.") So this would leave order of 5x^2. The argument for this first simplification: as the length of the input x approaches infinity, 7x + 4 falls by the wayside — the lower-order terms are negligible in comparison, so in the long run they're not considered. In other words, the complexity is supposed to consider all possible input cases.

Second: having done that, analysis-of-algorithms complexity drops the constant in front of the highest-order term, standardizing all such complexities as having constant 1 there. This is sometimes called normalization. The reason given in the theory books has to do with what are called speed-up theorems. Without going through the fancy theorems about how to speed up an algorithm, the colloquial understanding is this: people have tweaked the cycle time of the CPU (you shouldn't do it, but people have). The theorists didn't want a situation where one person has an old computer and another a faster one, the algorithms are the same, but one runs faster than the other only by a constant factor. Conceptually, they viewed those algorithms as pretty much the same, one simply taking its time more leisurely. Regardless of the reason, they dropped the constant in front of the highest-order term. So here, the complexity would be stated as order of x^2, written with O for "order," basically.
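To see why the two simplifications are harmless, here is a small Python sketch using the cost model 5x^2 + 7x + 4 from above (the function name is mine, for illustration): as x grows, the modeled cost is dominated by the 5x^2 term, which is why the lower-order terms are dropped; dropping the leading constant 5 then normalizes the stated complexity to O(x^2).

```python
def t(x):
    # modeled time (or space) cost from the example: 5x^2 + 7x + 4
    return 5 * x**2 + 7 * x + 4

# As the input length x grows, the lower-order terms 7x + 4 become
# negligible next to the highest-order term 5x^2: the ratio tends to 1.
for x in (10, 1_000, 100_000):
    print(x, t(x) / (5 * x**2))

# Dropping the constant 5 as well gives the stated complexity: O(x^2).
```

Running it shows the ratio shrinking toward 1, which is exactly the "falls by the wayside" argument made above.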
There's a technicality there — maybe we'll talk about it another time; I do write here about the speed-up factor. So the point is that although analysis of algorithms was interested in all that, computer theory was not. What did Turing mean when he said effective computation? He was not saying it was an efficient computation — something I'll differentiate later, but let me put it here as well. An efficient computation would mean you were very careful about resources: you didn't waste your time, you didn't waste your space. To Turing, effective was an issue of solvability. The former — efficiency, how much time it would take — is complexity. The term effective computation is about solvability: is a problem solvable? Now, you have to understand here, and this is very important: Turing didn't actually care in his theory whether you solved your problem. It's sort of by definition. If you give him code, Turing doesn't care what that code does; he isn't concerned with whether you're actually solving any problem. From Turing's perspective, the code itself is enough reason to study it — the code is its own component, its own essence. The question is only: does this code conclude? Does it run to completion, or are there problems in the running? That is what Turing means by an effective computation. He doesn't care about resources — how much time it takes, or really how much space — as long as the program eventually stops. If the algorithm runs until completion, it was effective, and that is called solvable.
What if it doesn't run to completion because it has errors along the way? Turing was not even concerned with whether your program crashed — an interesting point we'll talk about at a different time. Effectiveness is limited, very focused: whether the program code will halt — stop — after completing whatever number of lines of code it was supposed to. The reason I have to phrase it this way is that we know about throwing an exception, right? Different classes for errors, catching errors, try/catch. In theory, those errors will stop a program. Whether an error was made along the way — Turing was not concerned with that. He was only concerned with whether the program runs and stops under normal conditions. That is called effective; that's called solvable. It's irrelevant what the code does. Something with an error in it, in Turing's mind, believe it or not, may still be effective even though it's not correctly doing what you intended. We'll come back to that a little later. The bottom line, one more time — I know you're not used to this description and it's a little wordy, but Turing had a totally different perspective on programming than we do. You're a programmer; you write code; it takes an input, and the code runs. If the code runs to its completion, that is an effective computation. What we consider errors were not concerns of Turing's. The only issue to Turing is whether your machine stops or not.
The only other situation Turing was concerned with is your code going into an infinite loop. That's what we mean by stopping versus not stopping. If your code goes into an infinite loop, that was ineffective — he didn't actually use that word, but it's the opposite of effective — and what he called it is unsolvable. Your code is solvable if it completes and stops; it's unsolvable if it causes an infinite loop. So let's keep it like that. The issue in analysis of algorithms is complexity: how much time it takes. Turing, in his theory, didn't care how much time it actually takes, but only whether it stops — whether you could clock it or not. If it doesn't stop, that's unsolvable; if it stops, it's called solvable. Technically — since I said it before, let me write it in — what we colloquially call the program crashing, throwing an exception, whether the error is caught or not, can to Turing still be considered effective, since in fact the machine halts. It halts for the wrong reason, but it halts. That's a pretty wild concept; it's really very different. We're not dealing with software engineering or practical programming. It's just a question, in his mind, of whether the machine stops or not. What happens when the machine stops, we'll have to talk about a little bit later. All right. Now, what's an algorithm? We know an algorithm is a finite sequence of steps to solve a problem.
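A minimal Python sketch of the three outcomes (the function names are mine, for illustration): a run that completes normally and a run that crashes both halt — both "effective" in Turing's sense — while only the infinite loop is the unsolvable case.

```python
def halts_normally(n):
    # Runs to completion: effective / solvable in Turing's sense.
    return n * n

def halts_by_crashing(n):
    # Raises ZeroDivisionError -- but the machine still stops,
    # so to Turing this run still counts as halting (effective).
    return n // 0

def never_halts():
    # Infinite loop: the opposite of effective -- Turing's "unsolvable."
    while True:
        pass

print(halts_normally(4))            # completes: halts
try:
    halts_by_crashing(4)
except ZeroDivisionError:
    print("crashed -- but halted")  # halts for the wrong reason
# never_halts() is deliberately never called.
```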
So it's interesting that we know critical systems are supposed to run continuously, and therefore must contain infinite loops. Haven't we just contradicted ourselves? Isn't that a paradox? What are critical continuous systems? Let's give some examples: a medical monitoring machine; a nuclear reactor temperature monitor; even the elevators in a building — you don't ever want them to stop. Airplane control — you don't want that cutting out all of a sudden. Radar. I think you're getting the idea: there are many very practical, down-to-earth systems you never want to stop. In fact, as long as we're talking about this, we should mention, very interestingly, Y2K — which stands for "year 2K," i.e., the year 2000, literally the calendar year 2000. So what happened? Probably most of you know; maybe some of you don't. The Y2K problem happened — wow — 26 years ago; that's a long time. Intel made computer boards and, more importantly, chips for IBM personal computers, and stored dates using only the last two digits of the year. So 1999 would be stored as 99. Admittedly, the reason they did this was to save money: only two bytes had to be reserved rather than four, so less memory was needed to store a date. Now, it's true this is what happened, but we shouldn't blame Intel: they did this in the 1980s, when they were making these boards, and they did not actually believe their boards would last to the year 2000. The issue that came up was the penalty of success.
They did not actually believe these boards and chips would still be in use through the year 2000; they figured people would update their machines, and they themselves would update their boards and chips. But believe it or not — as I say, the penalty of success — people were still using the same products 10, 15 years in a row. It reminds me of a story. There used to be an electric shaver company called Schick, and apparently, I was told, in the 1950s and 1960s they made an incredible shaver. It was unbelievable: rugged, drop it and it didn't break, always worked, always dependable. It was so good that the company had to declare bankruptcy a few years later — not because there was a problem, but because of the penalty of success. It was so good no one ever needed to buy another shaver. People bought it in the first few years, the company dominated the market, and then that was it. Likewise, when Intel made these chips and boards in the 1980s, I don't think they imagined the same boards, chips, and computers being used for the next 15 years. Moore's Law should apply, right? Chips would double in speed so many times that you'd need new boards. The companies just didn't think longevity would be an issue. Anyway, that's what I call the penalty of success. The users — the public — were concerned due to the following.
On January 1st, 2000, the stored last two digits would be 00 — which equals 0, strictly less than the 99 stored the previous year. The way the motherboards were constructed, only the last two digits were stored. Why is that a problem? Because critical monitoring systems use differences of dates and times to analyze current situations — to compare what was a moment ago with what is now, to do statistics. In the year 2000, those differences would come out negative, and the public was very seriously afraid that the negative differences in time would crash major monitoring systems. So what happened in the end? A year or two before, when they realized what was coming, companies invested a lot of money to rewrite their systems. You won't hear about it much, because for the most part people say nothing happened. That's not exactly true — I once did a study on this. There was an elevator in Australia that stopped working because of it. One airport did have a problem with airplane control, but thankfully the pilots were able to guide themselves visually and land; no one got hurt. There were some glitches here and there, but it was very minimal as far as I know, so you won't really hear about it. The bottom line — and this is what I wanted to say — is that you depend on infinite loops. You need critical systems to run all the time.
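The arithmetic failure is easy to reproduce in a few lines (a sketch; the two-digit storage is the historical part, the function name is mine): subtracting two-digit years works fine within a century and breaks at the rollover.

```python
def elapsed_years(prev_two_digit, now_two_digit):
    # Naive difference on years stored as their last two digits,
    # the way the 1980s boards kept dates.
    return now_two_digit - prev_two_digit

print(elapsed_years(98, 99))  # 1998 -> 1999: 1, as expected
print(elapsed_years(99, 0))   # 1999 -> 2000: -99 instead of 1
```

A monitoring system dividing by, or scheduling on, that negative "elapsed time" is exactly the crash the public feared.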
Imagine you programmed radar for the defense agency, and one day the radar program stops. They scramble to figure out which program did it — maybe it's an attack — and then they realize: oh, it was a Queens College student. And when they ask what happened, you proudly say, "We were taught that you can't have infinite loops." That's ridiculous: there are many critical systems that require infinite loops. A long-winded story, but I believe the history of Y2K is interesting and you should all know about it. So here is the question versus the above. Turing spent a lot of time developing, isolating, crystallizing the specific issue of what it means to be solvable — what he termed an effective computation — and his only criterion is, effectively (pardon the pun), whether in essence there is an infinite loop or not. As in the notes: the only other situation Turing was concerned with is your code going through an infinite loop. Very straightforward. So how can we have critical systems that require infinite loops, when according to Turing the infinite loop is the enemy of the state? I'm sure your intro programming professors told you: if you get an infinite loop, that's it — no partial credit, zero. Very extreme. So what's the difference? The terminology used to explain this difference is algorithms versus processes. Now, the word "process" is overused in computer science, admittedly, and means different things in different contexts; but technically, the answer to this paradox is the notion of a process. Before we go through the official terminology, I'll give you an anecdote that I think explains the difference. I think it's still on YouTube.
I don't spend much time watching YouTube clips, except sometimes to learn material — that's my own personal choice — but a number of years ago I did find a very cute one. It's anecdotal, but I think it captures this distinction clearly. The story is as follows. A very proper person gets up in the morning: pressed suit, tie, clean-shaven, hair perfect, everything perfect. He takes a leather briefcase to work, sharpens the pencils, makes sure everything is just so. He works nine to five. He comes to a desk — they purposely staged it like a gym, with a clock on the back wall, no one else there — sits down at 8:59, opens the briefcase, sets out an apple for break time and the pencils; everything is set up. There's a phone on the desk. At nine o'clock sharp, the phone rings — it's his shift — and he answers. All you're able to observe (you never hear the other side of the conversation) is the person saying "yes." Yes. Yes. Yes. The clock in the back is spinning around; the camera pans back, and the person is getting more and more frustrated and aggravated — "Yes! Yes!" — until five o'clock, when the shift is over and he puts down the phone. The curiosity, of course, is: what's the conversation? All he ever answers is "yes." Then the camera pans to the Verizon cell-phone guy, and you hear him saying, "Can you hear me now?" The idea is that there has to be someone on the other side of that conversation who, they hope, will keep saying yes, yes, yes.
I don't know if you find that funny — I happen to find it a very funny clip — but it precisely gives the answer to our question; it precisely captures the distinction. The question, one last time, before we explain the answer based on the anecdote (the terms will be algorithm versus process): Turing narrowed the definition of solvable — what he terms an effective computation — to be based solely on whether or not the program goes through an infinite loop. If there is no code for your algorithm that avoids an infinite loop, then that problem is unsolvable, ineffective. If the code does not go through an infinite loop, then whatever algorithm the code represents is solvable, and that's called effective. The problem is that there are many critical programming systems that require infinite loops. That's the question; the answer is algorithm versus process. Here's the situation. Clearly, the story represents an infinite loop of sorts: "Can you hear me now?" — and hopefully, if you have good service on your phone, the answer is yes, yes, yes. But what is the algorithm in that situation? The algorithm is the question: can you hear me now? Suppose the answer is yes. At that point, your algorithm has concluded. The question is the algorithm — it represents the algorithm. The phone company wants to determine: do they provide good service? Do they have connectivity? Once they get the answer — yes — they're done.
So what's with the infinite loop? It's not that they don't know and haven't answered the question. The issue is that a split second later, they have to do it again. As the person walks, it's a different location, a different temperature, perhaps different barometric pressure — whatever interferes with phone signals. So it's not that the algorithm requires an infinite number of steps. It's that your situation requires answering, each in a finite amount of time, an infinite stream of different inputs, each one representing a different instance of your situation. "Can you hear me now?" Yes. Move a foot over. "Can you hear me now?" That's not the same question as before; it's not the same application of the algorithm as before — you're just running the same algorithm again, and the algorithm responds within a finite amount of time. So an algorithm only requires a finite number of steps, but there are applications where the inputs you want to apply it to, individually and independently, form an infinite stream. Is the medical monitoring machine saying everything is good? Yes — but a split second later, it should check again, and that check is not linked to the conclusion it reached a moment ago. Radar: is the sky safe? Yes — but a split second later the situation has changed and you want to do it again; the algorithm completed a second ago. So that is the difference, in this case, between an algorithm and a process. The algorithm is a finite number of steps, and that is an effective computation. The process is an infinite loop — sometimes called the driver, though that term also sometimes refers to something else — that drives the algorithm so that it's ready to process a steady stream of different problem instances, each one an independent input.
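The distinction can be sketched in a few lines of Python (the radar example is from above; the threshold and names are invented for illustration). The algorithm is a finite computation on one input instance; the process is the deliberately open-ended loop that re-applies it to a stream of independent instances.

```python
def sky_is_safe(reading):
    # The ALGORITHM: a finite number of steps for ONE problem instance.
    return reading < 100          # hypothetical danger threshold

def radar_process(readings):
    # The PROCESS (the "driver"): in a real system this would be
    # `while True:` over live sensor data -- an intentional infinite loop.
    for r in readings:            # each instance is independent
        yield sky_is_safe(r)      # each call halts: an effective computation

print(list(radar_process([12, 150, 30])))   # [True, False, True]
```

Each call to `sky_is_safe` halts — solvable in Turing's sense — while the surrounding loop is meant never to terminate.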
In operating systems, these types of processes are called daemons — spelled "daemon," probably because they didn't want it to look spooky as "demon." So that's the distinction that answers the paradox: the infinite loop involved in code for processes, versus an algorithm, which, given a specific instance of input, reaches its conclusion in a finite amount of time. (Are there any questions? Maybe I should do a sound check — are we still okay? If people could just send a response... Yeah, we're okay. Good, excellent. Thank you.) In the operating system, there are many processes running in the background indefinitely. For example, there has to be a process doing what's called polling the hardware — that's a term that's used. An infinite process is necessary to listen for hardware sending a signal. I'm typing, for instance: every time I type, the computer needs to listen for that, acknowledge it, and pay attention to it. But the system doesn't know when I will type — when hardware events will occur, when the hardware will send a signal — so it has to have an infinite process paying attention to the hardware. That's called polling: to poll, as in to survey. So that's an operating-system example of these daemons.
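As a sketch of the polling idea just described (simulated here with a queue standing in for the hardware signal line; the names are mine), the daemon is an open-ended loop that waits for events it cannot predict and handles each one with a finite piece of work:

```python
import queue

events = queue.Queue()            # stands in for the hardware signal line
for key in "hi":
    events.put(key)               # "keystrokes" arriving at unknown times
events.put(None)                  # sentinel so this demo can shut down

typed = []
while True:                       # the daemon: an intentionally endless loop
    ev = events.get()             # poll: block until the "hardware" signals
    if ev is None:
        break                     # a real daemon never takes this branch
    typed.append(ev)              # handling one event is a finite algorithm

print("".join(typed))             # the keystrokes it observed
```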
Now, as I mentioned last time, a technical administrative point, in case someone wasn't here: the Brightspace system only allows 60-minute meetings. But we noticed last time that, in fact, once you've entered the room, you can stay. So as far as I know, we're able to stay the full 75 minutes, and we do not have to go to a separate meeting to catch the last 15. But if somebody has to leave, the system will not allow them to re-enter after 60 minutes. All right, so now we know. Now, an interesting point we should mention here about Turing: what did Turing expect to happen after the machine has concluded? This is quite interesting, and the next statement had a profound influence on programming generally, and certainly on the theory. We keep talking about input; the question is, what about output? Let's phrase it that way and have a little discussion, because it's very important to the theory, and it's quite related. We have been discussing the effects of input — what happened to output? Does it matter? Why are we asking? In other words, since Turing was only concerned with whether the machine — the program — halts (halts means stops) for an effective computation, i.e., no infinite loops: if that's the only criterion, did Turing require an output? Does output play a role? In some sense, what we're saying is true: as I mentioned before, Turing was not so concerned with what your code does, or whether it did it properly.
He did not spend much time dealing with the notion of an output. He worked around it, and I will explain how. But, as we have mentioned many times, the computer science discipline had not yet started. All these people were mathematicians, or logicians working in mathematical logic, but basically mathematicians, and mathematicians are concerned with outputs. Mathematically, all functions have outputs in the same manner as inputs, and the theoreticians who developed this theory were not going to veer too far from the mathematically accepted notion of a function. So what is the role of outputs? One aspect of output Turing did not care about: output as a means of properly indicating the solution to a problem was not Turing's concern. You may have heard the classic acronym GIGO, which stands for garbage in, garbage out. If your code has an error in it, or only works well on certain inputs and has received an input it was not supposed to receive, Turing says: that is not my problem. Garbage in, garbage out. If your code allows or contains errors, that is not his problem. So Turing was not concerned with whether your code works in the colloquial sense, whether you solved your problem. Yes, an algorithm is a finite set of steps to solve a problem, but for Turing that is a self-defining point: whatever code you have, it is solving whatever problem it is solving.
It may not be the problem you were trying to solve; it may not properly implement the solution, the algorithm; it may not even be a proper implementation of the algorithm you intended. But that is not Turing's concern. Turing says: that is a solution. Maybe not to your algorithm, but to some algorithm. By definition, the code is an implementation of an algorithm. It may be arbitrary code, and hence an arbitrary algorithm; it may even be badly implemented code, full of errors, written in an attempt to solve a specific problem. But to Turing, by definition, if the code halts, that is, stops executing after receiving an input, it has solved a problem. It may not be the problem you wanted, but it is solving a problem. Solvability is the term he used, and he called it that on purpose: it is not relevant what the code does, or whether it is doing what you want it to do. To this extent, and this is what I said before, even if the reason your system stops running the code is exception handling or errors, that is solvable, effective: whatever the code was meant to do, it did, in a finite amount of time. In the theory books that is a very important point, and it will matter when we discuss the unsolvability of the halting problem later. I don't know if we'll be able to get to it today.
Maybe next class. This last point is going to be very important: even if the system crashed, that is called solvable to Turing, because the machine stopped. But if the system crashed, there was no output, right? So that is the extreme example of the question we were asking: what is the role of output? So, believe it or not, Turing, again by definition, imposed the following definition of input and output, precisely in order not to have to deal with all of this. First, please remember that the memory, the storage, of the Turing machine was a tape: a single, one-dimensional tape. At the end of the semester I will talk about variations on that model from the research literature, including two-dimensional tapes and so on, but ignore that for now. For our purposes, the memory storage of the Turing machine was a tape of, technically, infinite length. Now, as we mentioned last time, and I have it in the notes somewhere, there is an irony here: an effective computation can only use a finite portion of that tape. If you have to continuously write on the tape, you are in an infinite loop; continuously reading or writing that tape would cause an infinite loop, which Turing cannot have. So, in fact, you can only ever use a finite portion of the tape.
Now, what about the definition of input and output? Input is defined as whatever characters are on the tape prior to executing your code. Likewise, on the other side, output is defined as whatever characters remain on the tape after the machine halts, after the code stops. What is on the tape beforehand is called input; what is on the tape afterwards is called output. Turing did not consider, did not think about, peripheral accessories: a printer, a screen, a keyboard. There was no screen, no keyboard, no printer; everything dealt with that tape. It is a very interesting notion: the storage, meaning the memory, is the input, and the storage, the memory, is the output. Whereas to a mathematician a function maps inputs to outputs and contains data values for those inputs and outputs, to Turing a programming function, programming code, call it a function as it were, maps the configuration of memory before executing the code to the configuration of memory after the code stops. There is no aspect of output as we know it; it exists by definition. So, in conclusion: output, to Turing, can only occur if the machine halts, i.e., the code stops. It is defined that way. Output does not occur, as a verb, while the machine is still busy, because while it is running you do not have the ability to read that tape; it is reserved for the machine, by definition.
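This tape-only view of input and output can be sketched as a toy, with the caveat that this is my own illustration, not Turing's actual formalism: the "program" below is just a list of tape writes standing in for a real transition table.

```python
# Toy "machine" whose only I/O is its tape: input is whatever is on the
# tape before the run; output is whatever remains on it when the machine
# halts, however it halts.

def run(tape, program):
    """Apply the program's writes to the tape; return the tape at the halt.

    `program` is a list of (position, new_symbol) writes, an illustrative
    stand-in for a real transition table.
    """
    cells = list(tape)
    try:
        for pos, sym in program:
            cells[pos] = sym          # each step rewrites one tape cell
    except IndexError:
        pass                          # a "crash" also halts the machine
    return "".join(cells)             # output = tape contents at the halt

# Normal halt: the input tape "0000" is mapped to the output tape "1100".
print(run("0000", [(0, "1"), (1, "1")]))

# Halt by error: the bad write at position 99 stops the machine, and
# whatever is on the tape at that moment is, by definition, the output.
print(run("0000", [(0, "1"), (99, "1"), (1, "1")]))
```

Note that the crashed run still produces an "output" in Turing's sense: the machine stopped, so whatever happens to be on the tape counts.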
Output exists only once the machine halts, or stops. And, as I mentioned but want to reiterate, because students in theory often do not realize it, this is even true if the reason the machine stops is the machine crashing: mathematical errors, exceptions, et cetera. Once it stops, then, by definition, what is in the memory is the output. It may be garbage, garbage in, garbage out, but that is not Turing's concern. Now, there are so many details I want to give you, but the question is what we can accomplish in ten more minutes. Let me make one more point on this, and then we'll continue for a few minutes on something else. As I mentioned, the faculty have a meeting in the early afternoon, but after that I will come back, clean this up, and send you a set of notes that is a bit better paginated, with proper paragraphs, after I have read it through. As of now, what I use is our board, and professors just scribble everywhere, so this is my scribbling, except I'm typing. Ah, yes: this discussion is the official reason why computer theory must have the natural numbers start with zero instead of one. Let me see if you can get it; it's an interesting question. If you followed the discussion along,
you should come to the conclusion about the natural numbers. The reason I call them that is the Church-Turing thesis, which we mentioned, and which I believe is in the notes; I will review it as well. One more time, for those who may not have heard it last class: the Church-Turing thesis says that computation must occur over the natural numbers, what he called the counting set. So the question is: when we count, shouldn't we start with one? Why do we start with zero? I did mention to you that the programming-language answer has to do with the base address of an array and memory addressing. The programming answer for why we start with zero is that the array's name holds the base address, and the index is the offset: the address of a cell is the base address (the value stored under the array's name, where the array begins) plus the offset, and the first cell should have offset zero. You want the first index to be zero because the array's name contains the beginning address of where the array is stored, and that should also be the address of the first cell; you don't want to offset it and skip a cell. Anyway, that is the official programming-language reason. But I claim that, based on the discussion above, there is a different reason why the natural numbers, in computer science, must start with zero, not one. I'm just curious, give it a shot: does anyone see why that must be the case? Does anybody want to suggest something?
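The base-plus-offset arithmetic behind the programming-language answer can be sketched like this; the base address and cell size are made-up numbers for illustration:

```python
base = 1000       # address stored under the array's name (illustrative)
cell_size = 4     # bytes per element, e.g. a 32-bit integer

def address(index):
    """Address of element `index`: the array name is the base, the index the offset."""
    return base + index * cell_size

# With zero-based indexing, the first cell sits exactly at the base address:
print(address(0))  # -> 1000: no cell is skipped
print(address(3))  # -> 1012: base plus three cells

# If indexing started at one, element 1 would land at base + cell_size,
# wasting the cell at the base address (or forcing an extra subtraction).
```

So starting the index at zero makes the array's name and its first cell coincide, exactly as described above.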
All right, so Keith suggests it's because zero represents the halting state. That's an interesting suggestion. It's not what they did, but it is interesting, Keith, so thank you for your input, pardon the pun. Anyone else have a thought on why it may be so? Jascaran suggests it's easier. That's true, but the only thing is, sadly, Turing was not concerned with whether your computation is easier; that is analysis of algorithms, and we just mentioned Turing had no concern with that. Interesting enough, though. Any last suggestions? No? Okay, so the answer is the following. We claimed above that a program, which is equivalent to a mathematical function, maps the memory storage, as a single unit, a natural number, to another configuration of the memory storage, which plays the role of output when the program halts, when it stops. Now, a mathematical function must always be defined. As such, this Turing model must be able to map the memory-storage configuration containing all zeros, which is, loosely speaking, encoded by the natural number zero. We, as programmers, consider our code as mapping values to values; Turing did not map values to values. Turing mapped memory configuration to memory configuration. In other words, we treat the entire sequence of memory-storage bits as one long integer, one long natural number, not a negative integer. Okay?
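To make that encoding concrete, here is a small sketch; the 8-bit "memory" and the function name are my own illustration, not anything from Turing:

```python
# Turing's view, in miniature: read the whole memory as one natural number.
# A program maps the configuration before the run to the configuration after
# the halt, so the all-zeros configuration must be in the function's domain,
# and it encodes to the number 0.

def memory_as_number(bits):
    """Interpret the entire bit sequence as a single natural number."""
    return int("".join(bits), 2)

before = list("00000000")          # the all-zeros memory configuration
print(memory_as_number(before))    # -> 0, so 0 must be a natural number

after = list("00001101")           # some configuration left after a run
print(memory_as_number(after))     # -> 13
```

The all-zeros configuration is a perfectly legitimate memory state, and its encoding is the number zero; that is why zero has to be in the counting set.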
So the entire memory is a single number: all the zeros and ones, across all the boundaries, form just one number. And that has to include zero, because the function has to be able to map the memory configuration containing all zeros. So that is the official reason Turing had to start the natural numbers at zero. Okay, it's the end of class. Thank you all, and we will continue on Monday; I think Monday is a normal date. If you didn't have a chance yet, kindly type the word present. I will be sending these notes to the class via your Q mail later today; it may be slightly delayed because of the faculty meeting, but I will be working on them during the afternoon. And again, kindly type the word present automatically when you enter the room; you don't have to do it twice, but if you haven't done it yet, please type it in. Thank you all, and see you next class.