Today's lecture explores the mechanism called recursion as it is implemented by the operating system. Students in many courses are taught recursion as an elegant means of implementing algorithms, but what is sometimes not mentioned is the efficiency cost to the operating system of supporting that mechanism. The fundamental structure the operating system uses is the stack. So I want to take a step back and analyze what a stack is, along with some variants on the purpose of a stack as it relates to operating systems. Then, for the rest of the lecture, we will analyze the specifics of how recursion uses the stack to support the recursive mechanism. Initially, our discussion may sound a little philosophical. We won't spend too much time on it, but I do think there is a benefit to understanding this approach, because later in the lecture we will find that certain operating systems seem to contradict it, or contradict what a stack requires. Why they do it, I don't know. So I would like to establish some very clear boundaries around what a stack is, and I would like to start with the backdrop of the philosophy that a stack belongs to. This discussion takes us back to ancient Greek philosophy, which differentiates form versus matter, and leads to the modern application in programming languages, and more specifically in data structures, known as the abstract data type, the ADT. Object-oriented technology in general facilitates this approach. For completeness, I am going to consider the Aristotelian definition. While we will mainly need only two aspects, what are called form versus matter, Aristotle actually had four aspects, or categories, for philosophically analyzing an event or object. The translation used in the philosophical literature for these aspects is "cause," but that term does not mean what you probably think of, cause and effect. It is closer to cause as a platform, a statement of purpose. For example, you may be pro-environment, and the environment is your cause, among other causes. It is in that context that the philosophical community uses the word cause; we might instead say categories. The four causes in Aristotelian philosophy are the following. The material cause, which in shorthand we call matter, is that out of which something is made: the metal of a pot, et cetera. The efficient cause analyzes the ability for change. Very useful, but not one of the causes we will primarily focus on; it is included for completeness. The formal cause, popularly known as form, is the essence of the event or object, what its functionality is. And the last aspect, which Aristotle truly believed in, is that every object has a purpose, a goal in this world; the translation is called the final cause. Not final in the sense that you dispose of it afterwards, but in the sense that it finishes defining what the object is all about. So those are the four causes.
But again, we are going to concentrate mainly on form versus matter, as this leads up to the notion of an abstract data type. Consider a stack. A stack is a specific type of data structure, which you have learned or will learn in various courses, and there are officially only a handful of operations a stack can perform. The first is to initialize. As an abstract data type, you start with nothing, and you need to initialize the data structure called a stack; that just sets up some pointers to enable future usage. Second, you need to know: is the stack empty? The purpose of the stack is to store data, and you need to know, one way or the other, whether it currently holds any. So there is a second operator, also known as a method, of course, that tests in a Boolean sense, yes or no: is the current stack empty, or does it have data? That is not just a useful piece of information; it is a prerequisite for one of the two operators that follow. The next operator deals with how you get information onto the stack, and that is traditionally called PUSH, the PUSH operator. You place the item on the top of the stack. That means that as data comes in, you have what is known, in very familiar acronym form, as FILO or LIFO: first in, last out; last in, first out. Imagine you have a tube, and you are putting precious gems into it. The first gem you put into the tube goes to the bottom, then the second, the third, the fourth, the fifth. Now, in order to get access to that first gem at the bottom of the tube, you have to remove, in reverse order, the fifth, the fourth, the third, the second, and only then can you get the first one back. In other words, FILO: the first gem you put into the empty tube has to be the last one out. Or, from the opposite perspective, LIFO: the last one you put in, on top, has the ability to be the first one out. So you interact with the stack only from the top, pushing all the other data values lower in the structure. That action of putting onto the stack is called PUSH, and removing from the stack is called POP. Now, obviously, you cannot POP if the stack is empty. So the previous operator, the Boolean test of whether the stack is empty, is a prerequisite for the ability to POP. It is not a prerequisite for PUSH: from a functional point of view, you can always push onto an empty stack, and you can always push onto a stack that already has elements. But you cannot POP unless there is data actually there to remove. Those are the essential operators, or methods, or, using the Greek philosophy term again, the stack's form: how it behaves. Now, how it behaves should be independent of how you implement it, of how you construct the stack internally using the computer and your programming. You may want to store it in an array, for example: a one-dimensional array where your top is simply the index of the top element, you keep sequentially adding to the array, and internally you keep track of how many elements the array currently holds.
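Let me drop a minimal sketch into the notes here, in C++; take it as one possible rendering of the form, where the class name ArrayStack and the fixed capacity are just my choices for illustration, not an official interface:

```cpp
#include <stdexcept>

// A minimal array-backed stack ADT: the public methods are the "form",
// the private array is the "matter".
class ArrayStack {
public:
    ArrayStack() : top_(-1) {}                  // initialize: set up the internal index

    bool isEmpty() const { return top_ < 0; }   // Boolean test: any data in there?

    void push(int value) {                      // place an item on top
        if (top_ + 1 >= MAX_SIZE) throw std::overflow_error("stack full");
        data_[++top_] = value;
    }

    int pop() {                                 // remove and return the top item
        if (isEmpty()) throw std::underflow_error("stack empty");  // isEmpty is a prerequisite
        return data_[top_--];
    }

private:
    static const int MAX_SIZE = 100;            // fixed capacity, just for this sketch
    int data_[MAX_SIZE];                        // the hidden implementation: the matter
    int top_;                                   // index of the current top element
};
```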
So some version of that could be an implementation of a stack. In some sense the array is the material, the matter, and you impose on it your structure, your form, which we call a stack. Now, here is the issue of contradicting the form. You know that you could easily write a for loop to print all the elements of the array that hold data, but from the form perspective, the functional perspective of the stack, you should not be able to do that. What you would technically have to do is pop the top element and print it; then the next element is now the top, so you pop it and print it; the third element is now the top, and you print it; then the fourth, then the fifth. In other words, you pop and print, pop and print. And then, as we said, if there were five elements, you have no elements left. But there is a temptation, since you say: come on now! We all know it's an array; it's easy to just iterate through. Why do I need to go through that formality? So there is a danger in trying to jump over the definitions and not be consistent with the intent of the structure. And again, later in today's lecture, I will bring up a very serious case where some operating systems seem to be lax when perhaps they should not be. Another popular implementation, another matter with which to implement the form of the stack, is a linked list. You either have studied or will study linked lists; with pointers and objects, you can also implement a stack. The same temptation is at hand: if you knew the internals, you could access all the elements in the stack whenever you want. But according to the stack as form versus matter, you should not be able to do that. You should interact only with the official operators, the methods provided to you by the stack. In essence, that encapsulation of the situation is what is called an abstract data type. If you follow this approach and adhere only to the form, which is the operators and methods provided by the stack (for us: initialize, once, to start off, the constructor of the stack; test whether the stack is empty; push data onto the stack; and pop pieces one at a time off the top to access the data), then you have an abstract data type. The implementation can then be encapsulated or even hidden in some private class internally, and if it is compiled, nobody will even know what the internal data structure is. For security purposes, that can prevent the typical user or programmer from accessing data values below the top of the stack. Again, for completeness: if you read up on abstract data types in programming languages, a third aspect or perspective is added, called the signature. The signature here simply means the names and data types of the operations the abstract data type offers. That is not something we need to concentrate on for our discussion today.
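Before we move to variants, a quick usage sketch building on the ArrayStack above, showing the only legitimate way to print the contents; notice the loop is driven by isEmpty, not by an index into the hidden array:

```cpp
#include <iostream>

int main() {
    ArrayStack s;
    for (int gem = 1; gem <= 5; ++gem) s.push(gem);   // gems 1..5; gem 5 ends up on top

    // The form permits only this: pop the top and print it, repeatedly.
    while (!s.isEmpty())
        std::cout << s.pop() << '\n';                 // prints 5 4 3 2 1 (LIFO order)
    return 0;
}
```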
A number of variants of the stack exist in the algorithms literature, but those variants tend to, as I say, go against the notion of the abstract data type and against the form of what the stack was intended to do. For example, some authors allow other operators and methods to work on, and provide data from, the stack. Some would allow a swap, in which the first and second elements reverse their positions; according to others, we would not allow that. Another version allows rotation, where the top of the stack moves, magically as it were, to the bottom of the stack, and the second element becomes the new top. Clearly those are not consistent with the official approach to the stack as an abstract data type. Another version, though at least this one is not called a stack, is the deque, D-E-Q-U-E, the double-ended queue: imagine you had access to both ends of the stack, so that you could push and pop from either end. In essence, this version could implement a stack and, at whim, could simultaneously implement a different approach, FIFO, first in, first out, called the queue. But again, it is not officially consistent with the abstract data type that we know as a stack. Lastly, if one reads advanced hardware and advanced operating systems books, they will mention different "stacks," quote-unquote, involved in execution by the CPU and other aspects of the processing system of a computer. Some of them actually are stacks; some are just data structures that collect data at large, so sometimes the word stack is used generically. For the purposes of today's lecture, we will be discussing and introducing what is called the activation stack, which is involved in setting up areas of memory to support the recursive process when you, as a programmer, use the recursive technique. And that is what we are going to begin with right now. So while I set up the board notes, I will talk as well. The self-contained topic on the syllabus, which I consider important in a practical sense, is recursion. You are all familiar with recursion from data structures and other courses, of course, but I want to establish some fundamental understanding and terminology. Now, I don't know whether, when you first learned recursion, your teacher told you a formerly famous, rather geeky joke. Let me set it up. First, the colloquial definition of recursion (why colloquial and not scientific, we will discuss by the end of today) is a function that calls itself. I think we have all heard that. Then there is the dictionary definition, which is the joke. Here it goes: recursion, see recursion, until you understand.
So, in other words, what is the definition of recursion? Go back and see recursion, and keep doing that until you understand. I love these bad jokes; I just love them. The point of "until you understand" is that you always need a base case; otherwise, you have an infinite loop. So despite whether you like the cute joke or not, it is actually very meaningful: it encapsulates the key ingredients. You have a loop, and you get out of the loop when you hit, well, we say when you hit the base case. We are going to discuss whether that is really what is going on, and that is why I want a deeper understanding. The practical relevance is this: recursion is sometimes a very easy way to program things, but what software engineers don't keep in mind is the cost to the product. It won't necessarily affect the complexity analysis of the algorithm, but if you are profiling in terms of time and memory resources, recursion does tax the system somewhat, and you should be aware of that, because it can affect performance. So that is how we are going to start: we are going to challenge the definition itself. I am going to write "a function that..." and put a parenthesis here with a star, to be discussed below; we will eventually get to what that star refers to. Okay, so let's get a little understanding of what happens in the operating system when you have recursion. In the operating system, you have an area of memory called the heap. Here comes a parenthetical statement, but I think a useful one: in computer science there are, by my count now, four different applications of the term heap (it used to be three, and then I remembered a fourth), none of which has anything to do with the others. So, continuing our side comment: the heap here, for us, is a pool of memory maintained by the operating system, specifically by its memory manager, to provide dynamic memory to processes during runtime execution. That is our usage of heap today. But to quickly go through the other places heap appears: in Heapsort, for example, the heap is a tree. In general, a heap does not have to be binary, but in Heapsort it is a binary tree in which the value at each node is consistently either greater than or less than the values of all of its descendants.
Consistently greater than, or, in parentheses, less than, all of its descendants. The order among siblings does not matter, but the parent has to be greater in value, let's say, in which case it is a max heap; if it is less than, it is a min heap. So that is the heap of Heapsort. Then there is Heap's algorithm, named after a person, for the non-recursive generation of permutations, I believe. And there is one more: Heaps' law in information retrieval, which relates the number of new words one learns by reading through a document or book; the law states that you initially learn a lot of new words near the beginning, and then there are diminishing returns on how many new words you pick up as you go further and further into the document. Heap himself, the fellow who provided the non-recursive permutation algorithm, had another well-known algorithm, which had to do with what is called contouring: given a set of points in any number of dimensions, though he more typically dealt with two and three, the question is to draw lines around the surface enveloping the points. That was also Heap's algorithm. For completeness, there are those who consider the following a heap as well, although it is not so clear that it should be called one; it is a sort of amalgam of the heap as memory and the heap as a tree, as in Heapsort. That is what is called the buddy system in operating systems. It is a type of memory manager in the form of a tree, such that a node represents, or encompasses, a large area of memory, and its children are a partition, breaking that big piece into smaller pieces. So in some sense it is a mathematical heap, in that the size of the memory at a parent node is greater than or equal to that at any of its descendants; but it is not storing numbers per se, and it is in some sense like the heap of memory, because it is a data structure of sorts for keeping track of memory and is sometimes used by the operating system. Some blogs and some authors like to call that a heap, although it does not really have the full elements of either. Anyway, as you can see: the first is memory, the second is a data structure, the third is an algorithm. Clearly nothing to do with one another. Ours, for today, is number one, but the others exist as well. All right, so you have this area of memory called the heap, and in it are sections of memory called blocks. A block of memory: that is the actual, original term; maybe they call it differently now. There are blocks of memory, each assigned to a different process. So let's get some sort of picture going here; I am going to need four blocks for the example below.
So here is the picture: this is our heap, and inside it, this is block one of memory, this is block two, and so on. There are probably more than four blocks in a real heap, but in our example below we are going to have four; why four, I will show you in a second. So you have this heap area, and in it, imagine different blocks of memory. What is going to be in a block of memory? It may depend on the process, but I am going to concentrate now on the parts that serve your methods. You have a method that you are calling, and the operating system has to assign it memory for certain purposes. The question is what goes in that memory. Most methods typically have all of the following, but even when one doesn't, you want to standardize the blocks. So, in no particular order: first, there is the parameter list. When you call a method, there are parameters, and the parameter list is a declaration. Remember that when you write a method, the parentheses contain something like "int quantity." What is that "int" doing there? Is it just there to help you, for documentary purposes, to tell you that quantity is a variable of type int? No, that is not the reason. It is an actual declaration inside those parentheses, and if it is a declaration, it has to be stored in memory. So where is that memory? It is in this block of memory that the memory manager gives to the method to store that information. The parameter list is a declarative list: it declares variables of the given types that the programmer wrote, and it has to store them somewhere, so there has to be an area of memory for them. So write down: the parameter list. That has to be the case.
So why not keep everything about that process together? Now, the next item, again in no particular order: years and years ago, if you look at the history of the development of operating systems, this next item was a big one. It is still an item, but its relative weight has probably lessened in terms of how much memory it needs. What I am referring to is the code itself. Where is the code stored? Years ago, the code for the method was stored in the block; the block was self-contained. Why did they do that? For two reasons. First, organizationally, that makes sense: each block is its own briefcase, its own filing cabinet dedicated to that process, so why not keep everything about the process together? And second, they found the block of memory they gave was sufficient for all that, because years ago the methods weren't that big. Not that our methods today are so gigantic; our classes may be, but our methods aren't. The bottom line is that they do not always keep the code in the block today. They save memory by putting the code elsewhere, and the block holds a pointer to it. As we always say in C-based languages, the name of a function is a pointer to the function. What do we mean by pointer to the function? It has the address of where the code is, and hence it is a pointer to the function. In C-based languages you can actually use that, taking the ampersand of a function and so on; there are some interesting effects you can get out of it. For our purposes, the point is that the code needs some sort of memory in the block, whether the code is stored there entirely or just a pointer to it; I will leave that to the operating system and the history of that development. What else do we need? Two more things. There are the local variables, which have to be stored somewhere. By the nature of being local (unless they are static; let's put an asterisk here, because if a variable is static, its permanence gives it a global status of sorts, it remains there, and the operating system may choose to keep it elsewhere), assuming a variable is not static, then as soon as you are done with the method, you are done with the variable, in which case you might as well store it in this block. When you are done with the method, you can get rid of the block, and you are back where you began. So the local variables are pretty much going to be stored in that block. And finally, you need workspace for local calculations, sometimes called scrap space. So there are typically four; maybe there are others, but these are the major categories of information that are going to be stored in the block of memory when you call a method, a function, a procedure, whatever you are calling. So when you call a method, the operating system assigns a block of memory, whatever that size is. It probably depends on the operating system, and also on history: when the term block was introduced, a block was actually set to be half a megabyte. Why they picked that number, I don't know; I am sure historically there was some study behind it.
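To make those four categories concrete, here is a purely hypothetical sketch in C++ of what such a block might hold. The struct name, the field names, and the fixed sizes are all my own inventions for illustration; no actual memory manager publishes a layout like this:

```cpp
// Hypothetical layout of one block in the heap, one per method call.
// Real systems organize this very differently; this only names the four categories.
struct Block {
    unsigned char parameterList[64];    // storage for the declared parameters
    void*         code;                 // the code itself, or (today) a pointer to it
    unsigned char localVariables[256];  // non-static locals live and die with the block
    unsigned char workspace[128];       // scrap space for intermediate calculations
};
```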
Historically, these blocks were also called pages of memory. The term page is the more generic term in operating systems today, but years ago you would see both: sometimes they called it a block of memory, sometimes a page of memory. Now, it could be that the distinction is what I just said: the page was the generic unit, whereas the block was a specific one of a set size. But I don't want to split hairs over who used which term. In operating systems books, and in some programming languages books, you may see blocks, you may see pages; that's the basic idea. Now, one last point before we analyze further, and it is quite important: what happens when the operating system doesn't have any more memory? It is a finite amount of memory, after all. What happens when the OS, the operating system, has no more allotted memory to give? There may be more memory in the machine, but the memory manager doesn't have access to all of it; the dynamic memory manager has access to the heap memory. So let's say it this way: the memory manager has no more allotted memory in the heap to allocate, to give, to assign to a method, function, procedure, process, whatever it is assigning it to. The key word here, of course, is garbage collection. That is where garbage collection kicks in. Garbage collection means there is an algorithm (there are a number of famous ones) by which the operating system goes block by block and investigates whom each block was assigned to, and whether that assigned method is still active or not. Any block assigned to a method that is no longer active can be retrieved and reallocated to a different method; you don't need it anymore. Now, you might ask: doesn't the program just crash when memory runs out? The answer is no, but the question is fair, so let me explain. The way the operating system is set up is very similar to human beings. There are people who are, by nature, very organized; but without doing the actual statistics, people who are not organized finally organize when they have no more room left. Correct? When you reorganize, or decide to throw things out, it is because you think you have no more room, when in fact you may have had room all along; things were simply so disorganized that they occupied more space than they needed to. That is a similar thing to what is happening here. According to the inventory, every space is occupied; the point is, though, that not all of those occupied spaces are still needed. So now you can clean out. At the point when garbage collection kicks in, the operating system, the memory manager, thinks it has no more memory. What it then realizes is that a lot of it is garbage: unused, unnecessary memory.
What would be a good word? Unaffiliated memory: memory that is no longer connected to any process. So you can recoup that memory, you can recoup that space. If, after garbage collection, there is still no more memory, then, in fact, it will crash. But at the point the operating system calls garbage collection, it doesn't crash. Long-winded, but it was an interesting question. So that is the need for, and the point of, garbage collection. Now, I should mention a cross-reference. It is not our discussion today, but cross-reference smart pointers in C++. There was an attempt in the C++ language to minimize the need for garbage collection, in which the pointer keeps track of who is referencing it. The pointer variable itself (obviously it is really an object, a little more than just a variable, since it has to keep count of how many references there are) keeps track of who is referencing the memory. At the point that nobody is referencing it, the count is zero, and it releases the memory itself. That is called a smart pointer in C++. Very interesting.
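As a minimal sketch of that reference-counting idea, using the standard library's std::shared_ptr (the mechanism is real C++; the particular values and printouts are just mine):

```cpp
#include <iostream>
#include <memory>

int main() {
    // One reference to the integer: the count is 1.
    std::shared_ptr<int> p = std::make_shared<int>(42);
    std::cout << p.use_count() << '\n';       // prints 1

    {
        std::shared_ptr<int> q = p;           // a second reference: the count is 2
        std::cout << p.use_count() << '\n';   // prints 2
    }   // q goes out of scope: the count drops back to 1

    std::cout << p.use_count() << '\n';       // prints 1
    return 0;
}   // p goes out of scope: the count hits 0 and the memory releases itself
```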
Good. So let's get started discussing scenarios of method calls. I am going to give you three scenarios, and we will discuss scenario number one, scenario number two, scenario number three; we will eventually give them names. For each one, I am going to ask you whether it is relevant to recursion or not. That is the goal for right now. Scenario number one is the most straightforward one: an arbitrary, or let's say generic, method call. So what is our story? We have what is called an activation stack. The data structure is an activation stack, and it is maintained by the operating system, by the OS memory manager. What it does is keep track of who is calling whom: which method is calling which method. Initially, it is main: the operating system calls main, main calls a method, that method calls a method, et cetera. Each time a method calls a method, a block of memory is reserved in the heap and placed on the stack. It doesn't literally have to be the block itself; probably it is a pointer to the memory, but let's imagine, for drawing purposes, that the block itself is placed on the stack. So here is our activation stack. Now, the reason I drew four areas inside each block is, as I said, that there are four main areas of memory; maybe there is some extra stuff, but four main areas. Each block has the parameter list for the method (argc and argv, in the case of main), the code, or perhaps a pointer to the code (let's add that comment: whether the actual code or a pointer to the code), the local variables, and the scrap space, as it were. Very good. So this first one, let's say, is our main method: main has a block associated with it. Now what happens? Along the way, main calls f: somewhere in its code, dot dot dot, there is a line y = f(x). Not that y = f(x) is such a great piece of code; as I have pointed out to you regarding software engineering and documentation, you should probably have better names. But for simplicity of the pedagogics, I am calling it generically y = f(x); when you actually write your code, please use meaningful names. All right. So main calls f of x, and to make our story a little more exciting, f then calls another function, g of x. And to complete our story, g then calls h of x. So: main called f, f calls g, g calls h. Let's take a look at the activation stack. Initially you called main, with argc, argv as its parameters. Along the way, main called f, so f is pushed onto the stack, because it is now the active method: the top of the stack is the current active method. So let's walk through it again. We call main; at that point, main is active. But then main is suspended, because we do not know what f(x) is equal to on, say, line 55. So we push f(x) onto the stack, and f(x) is now the active method, with its own parameter list, code, local variables to store, and scrap space, workspace to calculate things in the middle of the method. Then, along the way through its code, f calls g. So now a block of memory is assigned to g and pushed onto the stack, and f and main are no longer active; g is active.
Because at this point g is the top of the stack. It, too, has its parameter list, its code, its local variables, and its workspace. Along the way through its code, g calls h. So one more time, the memory manager finds a block of memory, a page of memory, for h and pushes it onto the top of the stack. Now h is the current active method; g, f, and main are temporarily suspended. h is calculating in order, hopefully, to return back to g; g hopefully returns to f; f hopefully returns to main. And, a point I will mention again later, main hopes to return to the operating system. If you look at the history of C, the C language and C-based languages, there was an argument, and it really is interesting (if one were to document this, to figure out who the influencers were): throughout history there was a debate about whether main should be an int or main should be a void. It actually went back and forth, and different books in different time periods declared it one way versus the other. The bottom line is that main does return a value; it just doesn't return a value to you. It returns a value to the operating system. From your perspective, it is a void; from the operating system's perspective, it is an int, because it returns a value back to the operating system called a flag. Not like the flags at the command line; this is an actual flag, a semaphore (semaphore is another name for flag), to tell the operating system: did things go okay? And, traditionally, a return value of zero means no problem. So that actually is the case; it is sent back. To the operating system, main is an int. Let's point that out: to the user, main is a void, with no return value; to the operating system, main is an int, returning a semaphore, a flag, reporting whether the program executed to completion without problem, and that value, at least traditionally, is zero. It is an interesting point that you should be aware of, though it won't affect us: main does technically return something. All right. So here we are: h(x) is the final method called, and along the way through h's code, dot dot dot, there are no more method calls. So what happens? At that point, h runs to completion. Upon h's completion, its block is popped, and its output value is returned to g(x). So h has been completed, and now it is g's turn to complete. I think you are getting the idea.
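Here is a minimal sketch of this call chain in C++; the function bodies are placeholders of my own, since the point is only the nesting of the calls and returns:

```cpp
#include <iostream>

// Each call pushes a block onto the activation stack; each return pops it.
int h(int x) { return x + 1; }        // no further calls: h runs to completion
int g(int x) { return h(x) * 2; }     // g suspends here until h returns
int f(int x) { return g(x) - 3; }     // f suspends here until g returns

int main(int argc, char* argv[]) {
    int y = f(10);                    // main suspends until f returns
    std::cout << y << '\n';           // stack grew main -> f -> g -> h, then unwound
    return 0;                         // main's own return value goes to the OS
}
```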
So g finally goes to completion, and then f, and then main. Let me say it in order: h completes and returns its value to g; g completes and returns its value to f; f completes and returns its value to main. And then finally main, as I said, returns a value as well, but not to another method: it returns its value to the operating system (it only goes one way at that point), and that value is the semaphore I just mentioned. Then main is popped as well. And that is what happens. So that is our generic method call, scenario number one. The question is: does scenario number one specifically deal with recursion? Clearly not, right? Any questions on this scenario? Let's address that first, and otherwise we'll continue; we'll save time, because I am going to take the same picture three times and modify it slightly. The thing to realize is that method calls have an underlying cost, both in terms of memory from the memory manager and in terms of time: it takes time to build all of this. Okay. Now I am going to skip a scenario and jump straight to scenario three; there is a reason I would rather do it in this order. We are going to call it a recursive method call. The story starts the same way, but it is obviously a little different: it is not main calling f, f calling g, g calling h. The operating system calls main; that is still true. And then y in main calls f; that is still true. But f doesn't call g. Instead, f calls f, a second instance of f, and that instance calls f again. That is why, back in the block picture, I had four blocks: one is going to be main, one is going to be f, one is going to be either g or f, depending on our story, and one is going to be either h or f, depending on our story. So in the recursive case: main calls f, f calls f, f calls f again, and that is the final block of f. We are going to have three f's and a main. But each instance is separate: each instance, each block, each time.
So main calls f, f calls f, f calls f. All three f's, in blocks two, three, and four, are their own memory, with nothing to do with each other. If you change a local variable in block two, it does not change the value in block three, even though it is the same variable name. And if you change a local variable in block three, it won't affect blocks two and four, or even block one, even if it is the same name. You wouldn't expect block one to share names, but it could happen; you could, by accident, use the same name all over the place, and it won't matter, because they are all local copies. And in the activation stack, the instances don't actually even know about each other; that is why they can simultaneously exist. The third instance of f, the second instance, the first instance: they have no idea about one another, especially since, according to the rule of the stack (the activation stack, or a stack in general), only the top method is active. The other two instances are dormant, sleeping, latent, in the background; not active, anyway. So this is what we clearly call recursion, and there is a sketch of that separateness just below. And what happens? Eventually, that last f, whatever the values are, triggers what we colloquially call a base case, and there are no more method calls. So, after the third f's completion, it returns its value to the second f; upon the second f's completion, it returns its value to the first f; and upon the first f's completion, it returns its value to main. And that is what is called recursion. The key here is the base-case part; that is the term. It triggers, it satisfies, a base case, and there are no more method calls; that is why everything starts returning, because nothing else is being called. Now, a natural question: doesn't garbage collection happen during this process? It doesn't, even though, from an efficiency point of view, you would be right that it arguably should. In fact, that is not what systems do; they are much lazier, much sloppier, and the reason is to save time. Being sloppy is not a good thing in the long run, but if you have tons of memory, which modern computers typically have, the user saves time precisely because the operating system did not keep close track of its memory. Keeping track of memory has an extra cost, and the user doesn't understand why their program is running slower than someone else's. Time is typically more important than space; in life, probably true as well, but certainly from a programmer's or user's point of view. So freeing up time by not keeping track of the memory may be a slightly dangerous policy, but if you have enough memory, you generally do not have to worry about it. What is going to happen is that the user probably never sees any of this; the user's program finishes, and then, if the operating system has free time (no users right now), it may kick in garbage collection, why not. But typically, it just waits until it is really, really, absolutely necessary.
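Here is the promised sketch of that separateness: a recursive countdown in C++, where each instance on the stack has its own copy of a local variable. The function name and the values are my own choices for illustration:

```cpp
#include <iostream>

void countdown(int n) {
    int depth = n;              // a local: each instance's block holds its own copy
    if (n == 0) {               // the base case: no more method calls
        std::cout << "base case reached\n";
        return;
    }
    countdown(n - 1);           // push another instance; this one goes dormant
    // When control returns here, our own 'depth' is untouched by the inner calls.
    std::cout << "instance with depth " << depth << " resumes and completes\n";
}

int main() {
    countdown(3);               // blocks for main, then three instances of countdown
    return 0;
}
```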
While this unwinding is happening, one clarification is in order: it is not called garbage collection when the system does this, even though it does a similar thing. When the stack is deactivated, the returning of a block's memory to the heap is called deallocation, not garbage collection. Although it accomplishes the same thing, it is local, not global: garbage collection operates on the whole heap, whereas returning a specific piece of memory, being local, is called deallocation. In spirit, it is the same. Now, an interesting comment just came from the class: what about when you use dynamic programming? All right. We are not an analysis of algorithms course, and that is not a criticism; I just have to explain why I am not going to give a full response, since dynamic programming is an important topic in analysis of algorithms. Let me make two points, though. First: caveat emptor, buyer beware, because I have found that even very high-level programmers don't truly get what dynamic programming is or is not accomplishing. I once had a student come back to me a few years after graduating and getting a job, and he said: my boss told me I must use dynamic programming; what do you think? And I said to him straight out: your boss doesn't understand dynamic programming. But don't tell him that, and do it anyway. From an ethics point of view, you can ask the boss: isn't the following method more efficient? Do that. But if the boss says, I'm the boss, do dynamic programming, then do dynamic programming. There are many people out there who just don't get it; they like the sophisticated look that dynamic programming has, and certainly the way it is presented in the books encourages that. But there is a lot behind the scenes that, as I say, even high-level programmers don't get, which causes inefficiencies they often don't realize. Second, to address your local concern about the copies and the blocks: the way dynamic programming is actually implemented, it may well be non-recursive. I know the definition in the book is recursive; the mathematics is defined recursively. But the actual implementation of that definition isn't always. So, believe it or not, and it is an irony, depending on how you program your dynamic programming, you may not even have an activation stack building up. I know you think you do, because the mathematics is defined recursively. But if you look, and this is one of the reasons I say sophisticated programmers don't always get it, the way it is actually implemented very often is not recursive at all. It stores results in an array, and that array has a specific size; at the get-go, the operating system either gave you that size or it can't, in which case, along the way, you won't bump into garbage collection or block recovery, et cetera. It is a really fascinating topic, with a lot more to discuss beyond our scope; hopefully, you get the right analysis of algorithms teacher who can tell you the truth about dynamic programming.
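As a small illustration of that point (hedged, since dynamic programming proper is beyond today's scope), here is a bottom-up Fibonacci in C++: the recurrence is mathematically recursive, yet the code makes no recursive calls, so no tower of activation blocks ever builds up:

```cpp
#include <iostream>
#include <vector>

// fib(n) is defined by the recurrence fib(n) = fib(n-1) + fib(n-2),
// but computed iteratively: the table replaces the activation stack.
long long fib(int n) {
    std::vector<long long> table(n + 1 < 2 ? 2 : n + 1);
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; ++i)
        table[i] = table[i - 1] + table[i - 2];   // reuse stored results
    return table[n];
}

int main() {
    std::cout << fib(40) << '\n';   // prints 102334155, with no recursion at all
    return 0;
}
```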
So scenario three clearly is recursion. Now comes that scenario two I purposely skipped, which is going to be our same discussion, but with a variation; as to what we are going to call it, I am going to put that on hold. So we had scenario one, which was all-encompassing: main calls f, f calls g, g calls h. Seemingly arbitrary calls. Scenario three, on the other extreme, clearly is recursion; that is what we are familiar with. Now comes the in-between situation. It starts out the same: main calls f, push onto the stack. And now f calls g. Not f. So that is not recursion. So far, do we have recursion? No, we don't: main calls f, f calls g. The problem is that g now calls f, and that final method, f again, at some point triggers, well, I don't know what to call it, but there are no more method calls. Can you call it a base case? I don't know, so we will leave that in abeyance; but that part is the same: there are no more method calls. Here is the problem. The operating system calling main is not recursion. Main calling f is not recursion. f calling g is not recursion. g calling f is not recursion. So is this picture recursion? Let me hear your comments. Main calls f, f calls g, g calls f: is that called recursion? We have varied opinions: a yes, a no, and a partially. Which is fine; that is exactly great. So let me give you another perspective that makes the question even stronger, and then let's try to resolve what is going on. The problem I have with this discussion, which is why it is an issue, is right here in the first line of our notes, which I have now bolded, where I said the definition is colloquial. Now you understand that comment, and we are now up to the "to be discussed below" whose details I was not telling you. The dictionary definition is obviously a joke, so let me put a smiley there. The colloquial definition (it's a little more than colloquial; it's somewhat scientific) is: a method that calls itself. Now, a function that calls itself does not apply here, as I was just elaborating. The operating system calling main is not a function calling itself. Main calling f is not. f calling g is not. g calling f is not. So isn't the whole the sum of its parts? If each call is not recursion, how can you say the whole thing is recursion? All right, so let me elaborate a little more. Let me give you an interesting anecdote that actually happened to me, and then let me tell you how the books, the scientific literature, deal with this. The question is much bigger and much better than any answer you are going to get. The problem is that they are stuck and don't really know how to deal with it, so the books give the politically correct answer, as they do in other cases as well, which we will get to in a second. So here is the story. When I was in high school, there was an elective called computers.
It was BASIC, but we had a teacher who was a little sadistic and used to push us to do advanced programs, either out of sadism or because he actually thought that one of us would write a program that, I hate to say it, I think he thought he could sell. Whatever the case, we were in high school; how much could we really do? Nowadays high school programming is much more advanced, but I hate to admit that when I went to high school, at least in the program we were in, it was very basic coding, besides the language being BASIC. Anyway, here comes the thing. The teacher gave us an example; let me see, was it Fibonacci or factorial? Fibonacci has two variables, seemingly; factorial has one. So let's imagine it was factorial; I think it actually was. So it was factorial, and the teacher wanted us to write a program for it. Now, the mathematics of factorial is defined recursively: f(n) = n × f(n−1), or some people write it as f(n+1) = (n+1) × f(n). The theory people write it the latter way; the mathematicians and programmers tend to write it the former way. Either way, the bottom line is that the next case is based on the previous case, and there is a base case, f(0) = 1. We know that. So I asked myself, as a high school student, not because I knew recursion (I didn't; I didn't know it until Professor Ken Lord taught it to me in data structures, and I didn't know it when I went to Queens College for undergrad), but I knew that the mathematics looked good, and I asked myself: what would the computer do if I wrote an if-then-else? And without realizing it, I programmed the recursion. The code is provided in the indirect recursion tab of the notes file. You see, I took the formula and wrote it as an if-then-else; instead of "base case," I wrote an if-then-else. And without realizing it, I had actually programmed factorial recursively, just as you would do. Now, you would probably also take care of the case of values less than or equal to zero; I don't think I was so sophisticated in high school, but I wasn't going to put in negative values anyway. So the bottom line is, without realizing it, while everyone else was doing for loops and a lot of code, I just took the formula, put it into an if-then-else, and wanted to run it. And the code would have worked; I didn't realize it, but it would have. So what happened? In those years, there were academic licenses. You see, compilers cost a lot of money, and the vendors were making big business from commercial customers. But they had to train students, so they had to give universities a break: either a break in cost, or maybe it was free. To justify charging the commercial people so much money while giving the universities a free or very low-cost version, they would say: well, the academic version is smaller; we don't give them all the features. That is what they did: they purposely took away some features.
One of the features taken away because of this commercial-versus-academic distinction, from the compiler my high school had, the high school being connected to a college that was part of a university, so really the university's compiler, was this: it said explicitly, very clearly, that no recursion is allowed. So I looked up what recursion is, and I realized that the code I wrote was recursion. And what does the definition say? Recursion is a function that calls itself, right?

So I asked myself, innocently, I wasn't trying to be very bright about it: well, if recursion is a function that calls itself, then a function that never calls itself directly obviously isn't recursion. So what I did was write basically the following. All F did, my factorial function, was pass the parameters to another function, which then called back the first function. So basically, I used a dummy wrapper, and I eked out recursion not by F calling F, but by F calling G, and G passing the values back to F. The code is provided in the indirect recursion tab of the notes file. It made an extra side stop. Instead of F calling F, F called G, and all G did was one line: return the recursive step, n times F(n−1). That's all G did. So I wasn't technically doing recursion, because main called F, F called G, and G called F. But my code was doing the recursion; it just made this extra stop. It worked, and I was done. And so I came to the conclusion that scenario number two is not recursion. That's why I put that parenthesis above, with the asterisk.

And the asterisk is there because maybe the definition should be a function that eventually calls itself. So that's the question. Should the definition be a function that calls itself, or a function that eventually calls itself? Because if you say eventually calls itself, then F calling G calling F is eventually. Despite that comment, which I hope you find interesting, the fact is most books will not insert the word eventually into that parenthesis with the asterisk. Most definitions say recursion is a function that calls itself until a base case is triggered. All right?

Now, the problem is that my version behaves like recursion. Especially with my silly wrapper, and it wasn't so silly, because G doesn't do anything: it just calls F, it has no local variables, no other purpose. It simulates the recursion without technically being recursion. So it is, but it isn't, and the scientists were stuck. Bottom line, they did not want to change the definition. They defined recursion as a function that calls itself; mathematically, that's helpful, and they didn't want to touch it. So even though this is recursion effectively, it's not recursion technically, and they gave it a name: indirect recursion. As I say, the question is much better than the answer, but that's the politically correct term. Whether we like it or not, it's called indirect recursion, and it accomplishes the same thing as recursion itself. Now, for the bottom line of all of this, be careful here.
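Here is a sketch of the trick, again in Python. The names F and G just mirror how I described it; this is my reconstruction, not the actual notes-file code:

    def F(n):
        # Base case, same as before.
        if n == 0:
            return 1
        # F never calls F directly; it hands the work off to G.
        return G(n)

    def G(n):
        # G's single line performs the recursive step by calling back
        # into F. So F "eventually" calls itself, via the side stop.
        return n * F(n - 1)

    print(F(5))  # 120, the same answer as the direct version

Main calls F, F calls G, G calls F, and so on. By the letter of the definition, F never calls itself, yet the computation is identical.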
When I said it, that's why I wrote it this way: in scenario number one I called it an arbitrary, generic method call, but the other two are also method calls. Recursion, and what they call indirect recursion, are method calls too. The bottom line is that the activation stack, the operating system, treats all scenarios in the same exact manner. And it's specifically accentuated by the fact that only the top method is active. The operating system doesn't even care about what's below it. So who cares that there are other instances of F that are not active? There's only one F active, even in the recursion of scenario number three.

So let's say what we take away from this. In summary: one, the operating system does not treat any of these scenarios differently from any other. Two, all three scenarios are instances of method calls. The latter two are specialized, but they still are method calls.

Now, the last point I want to mention, which is not a summary point, but I've got to tell you, because it's a little disappointing about programming languages, and you should be aware of it. The comment I just made is scientifically true, but not necessarily practically true. Let me just finish this comment, and then we're done for today; I'll have to clean this up later to make the pictures look a little nicer. The assumption that the top of the activation stack is the only active method is technically true, but unfortunately sometimes ignored by some systems, whether it's the operating system or the compiler, I don't know what to call it, at what level, who's doing the ignoring. In what manner? Consider a method that uses a local variable, let's say x, that is not defined locally. What should happen is an error, right? You're using a variable that's not defined. But what actually happens in some systems is that they ignore the integrity of the stack policy and will actually descend the activation stack until they find a locally declared variable of that name, and will assume that that variable is the one referenced above.

That is terrible. It makes for really hard debugging. If you have no idea where the declaration is for a variable, how are you going to know its memory location? If there's some crazy behavior and you have to debug, and you know the variable is not locally defined, you have no idea which instance it resolved to, especially if it's recursion, because it could be the very first instance. Anyway, that's a problem. I'm not going to go into it more, but hopefully in advanced operating systems or advanced programming languages courses they'll discuss this.
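To make that concrete, here is a little Python sketch that simulates that lookup policy. Python itself is lexically scoped and does not behave this way; this behavior is what the programming languages literature calls dynamic scoping, and you see it in, for example, classic Lisp dialects and shell scripts. The frames and names here are made up for illustration:

    # Toy model: each dict holds the local variables of one active call,
    # bottom of the activation stack listed first.
    stack = [
        {"n": 5, "x": 99},  # frame for main: main declared an x
        {"n": 4},           # frame for F(4): no x here
        {"n": 3},           # frame for F(3): top of the stack, no x here
    ]

    def lookup(name):
        # Start at the top frame and descend the stack, exactly the
        # policy described above.
        for frame in reversed(stack):
            if name in frame:
                return frame[name]
        # What arguably SHOULD happen as soon as the top frame misses:
        raise NameError(name)

    print(lookup("x"))  # 99: silently borrowed from main's frame

The top call never declared x, yet the lookup quietly succeeds with whatever frame below happened to declare one. That is exactly the debugging hazard.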
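And to back up the summary point that the activation stack treats all three scenarios the same way, you can actually watch it happen. Another hypothetical Python sketch, not anything from the notes file; traceback.print_stack() just prints the frames at the deepest moment of the call:

    import traceback

    def F(n):
        if n == 0:
            # At the deepest point, dump the activation stack: it reads
            # F, G, F, G, ... one frame per call, direct or indirect,
            # with only the top one active.
            traceback.print_stack()
            return 1
        return G(n)

    def G(n):
        return n * F(n - 1)

    F(3)  # prints the frames, then unwinds and returns 6

Every call, generic, recursive, or indirect, just pushes one more frame; the stack machinery sees no difference between them.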
Alright, anyway, so that's the crazy thing. I have three scenarios, and that last item isn't a scenario, it's a point. Everything I said you can find by just looking up activation stack; I'm not teaching anything so different. It's in any programming languages book or operating systems book, though mainly the programming languages books discuss it. This last point, I don't know; that's more advanced knowledge that I picked up reading through the scientific literature, having access to the advanced literature and so on. But the three scenarios that I have, yeah, that's standard.