--- Video Title: RAM Model Description: Theory of Computation https://uvatoc.github.io/week11 22.2 RAM Model - Different computing models change the cost of computing functions - Random Access Memory - RAM Model of Computation Nathan Brunelle and David Evans University of Virginia ---

Often we had a lot of flexibility in how we defined our model of computation without changing what we could and could not compute. Changes to our model of computation could, however, have a much more substantial effect on the running time. For instance, when we talked about AND/OR/NOT circuits, we could implement some functions more efficiently using AND/OR/NOT circuits than we could using NAND circuits. So sometimes changing our model of computation can change how efficiently we can implement certain functions, even if it doesn't change which functions are possible or impossible to compute. This is one of those places where the Turing machine model violates our intuition for how modern computers work. So far, our definition of a Turing machine has used a tape: a semi-infinite tape with a first location in memory but no last location, a beginning but no end. If my Turing machine is currently reading one cell of the tape, and the next cell I want to read is farther to the right, then I have to visit every single cell in between before I can read that second location. So if those two cells are, say, distance y apart, it's going to take me y steps to get to and read from the second location. Once upon a time, that was a reasonable thing to assume about our models of computation. This is an image of the university's first ever computer, from the 1960s, and that is a tape: literally a magnetic tape, a spool of plastic rolled around a spindle.
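The cost of tape access described above can be sketched in a few lines. This is a minimal illustration (my own, not the course's formal Turing machine definition): to read a cell y positions away, the head must step through every cell in between, one move per step.

```python
def tape_read(tape, head, target):
    """Move the head from `head` to `target` one cell at a time,
    counting steps, then read the target cell."""
    steps = 0
    while head != target:
        head += 1 if target > head else -1  # move one cell right or left
        steps += 1
    return tape[head], steps

tape = [0, 1, 1, 0, 1, 0, 0, 1]
bit, steps = tape_read(tape, head=1, target=6)
# steps is 5: the head had to pass through every cell between
# positions 1 and 6 before it could read position 6.
```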
That's where the memory of the computer is stored. So if you wanted to read something near the beginning of that spool, you essentially had to unwind the entire tape before you could get to it. With this memory technology, you couldn't get to one location without passing over all of the other locations in between, just like with our Turing machine tapes. So once upon a time, say in the 1960s, those Turing machine tapes gave us an accurate model of the real running time of the algorithms we implemented on those older computers. But now we can do better than that. Today, our memory looks a little different. This is flash memory, which is mostly what you'll be reading from and writing to as you actually implement various functions on your computer. Flash memory is more like a table of bits, some sort of two-dimensional table of bits. If I'm currently reading from one location in memory, and the next one I want to read from is somewhere else, I can just read from that second location directly. I don't need to visit everything in between to navigate to it. Instead, we have addresses, and the idea is that in constant time, given an address, I can find what is stored at that location in memory. This is not something our current Turing machine model allows us to do. Our current Turing machine model says that to get from address A to address B, we have to visit every address in between. With what's called a random access model of memory (RAM is random access memory), we can probe anywhere into our memory however we want and read from any location at any time, in constant time.
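The random-access idea above can be sketched as a mapping from addresses to values, where a lookup takes one step no matter how far apart two addresses are. This is a rough illustration, not a formal model:

```python
# Memory as an address -> value mapping; here, eight one-bit cells.
memory = {addr: bit for addr, bit in enumerate([0, 1, 1, 0, 1, 0, 0, 1])}

def ram_read(memory, addr):
    """Read the value at `addr` in a single step -- no need to
    visit the addresses in between."""
    return memory[addr]

# Reading address 0 and then address 7 each cost one step,
# unlike a tape, where moving from cell 0 to cell 7 costs 7 steps.
first = ram_read(memory, 0)
last = ram_read(memory, 7)
```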
For most things we'll be doing, as long as it's not an extremely computation-intensive or extremely memory-intensive task, this is going to be a more accurate representation of what we'll experience on a modern computer than the tape was. So this motivates a slightly different model of computation, one whose running times are more accurate for the technology of the day. What we're often going to use is the RAM machine model of computation. This is an extension of Turing machines, or a slightly different version of them. The idea is that instead of reading from the current location in memory and then moving left or right, we can read from an arbitrary address in our memory space. The important thing to note about RAM machines is that there aren't more functions we can compute with them: any RAM machine can be converted to a traditional Turing machine. The details of how aren't super important; the textbook has them in section 7.2 in case you're curious. But the idea is that we could keep track of the current location in memory. If I had a RAM machine and wanted to convert it to a tape Turing machine, I could do that by keeping track of the current address of the cell the head is on. Then, given a target address, I compute the difference between the two and move the head that many steps. There is a more involved procedure to work this out exactly, described in section 7.2 of the text, but the point is that we can take any RAM machine and convert it into a tape machine. So we're not getting new functions with the RAM machine. However, we are making some things more efficient. Suppose, for instance, I want to know whether the last bit of the input is a 1 or a 0.
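The bookkeeping just described can be sketched as follows. This is my own rough illustration of the idea (the full construction is in section 7.2 of the textbook): the tape machine tracks the head's current address, and one RAM access costs the difference in addresses, one tape step per cell moved.

```python
def simulate_ram_access(tape, current_addr, target_addr):
    """Simulate one RAM read on a tape machine: return the bit at
    target_addr and the number of tape steps the access costs, i.e.
    |target_addr - current_addr| head moves."""
    steps = abs(target_addr - current_addr)  # one step per cell crossed
    return tape[target_addr], steps

tape = [1, 0, 1, 1, 0, 0, 1, 0]
bit, cost = simulate_ram_access(tape, current_addr=0, target_addr=6)
# A single O(1) RAM access becomes up to O(n) tape steps, which is why
# the simulation preserves computability but not running time.
```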
To compute that function with a tape machine, I have to scan all the way to the end of the input, so that takes linear time. With a RAM machine, I can just jump directly to the index of the last bit of the input, so that takes constant time. So with the RAM machine model, even though we don't get new functions, we can compute some functions more efficiently. The RAM machine model is going to better match your intuition for what the running time of various programs should be, and it also typically (not always, but typically) matches your real-world experience of how efficient your algorithms are on an actual computer. Going forward, we're mostly going to use the RAM machine model of computation rather than the tape model, because it will make our lives easier. The RAM model allows us to have data structures like indexable arrays; without it, basically everything is just a linked list or something like that.
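The last-bit example above can be made concrete. This is a minimal sketch (my own illustration) contrasting the two models on the same function: the tape version visits every cell on the way to the end, while the RAM version jumps straight to the last address.

```python
def last_bit_tape(bits):
    """Scan to the end as a tape machine would; the step count
    grows linearly with the length of the input."""
    steps = 0
    for b in bits:          # visit every cell on the way to the end
        last = b
        steps += 1
    return last, steps

def last_bit_ram(bits):
    """Jump directly to the last address; one step regardless of length."""
    return bits[len(bits) - 1], 1

bits = [0, 1, 1, 0, 1]
# Both agree on the answer; only the cost differs:
# last_bit_tape(bits) takes 5 steps, last_bit_ram(bits) takes 1.
assert last_bit_tape(bits)[0] == last_bit_ram(bits)[0] == 1
```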