--- Video Title: Modeling Computers | Description: Theory of Computation, https://uvatoc.github.io | 5.2: Modeling Computers - How should we model computing? - Programs and Hardware | Nathan Brunelle and David Evans, University of Virginia ---

There are several different ways that this black box could manifest. Right now, we've just said it's some process, and that process could look like a variety of different things. The different ways that we could manifest that process, we call different computational models. A computational model is how that process is actualized: how is it implemented? So a computational model is a way of implementing that process of computing.

What are some examples of computational models that you've seen before? Ways that you could manifest a computation. Yes? Okay, so maybe, if you've seen it before, a Turing machine. That is one possible way this could work. Yes? A finite state machine. Okay. Yes? The Babbage analytical engine. "Babbage" is shorter, so I'm going to write that. What else? Yes? Okay, so maybe with logic somehow. This, by the way, is what we will be talking about today.

How many of you have used any of these before? Raise your hand if you've used any of these before. So maybe about half of you have used at least one of these. So how is it that you could be in a 3000-level computer science class, and yet many of you have never used any of these models of computation, any of these ways you could actually do computation, that you've listed? How could that be? We must be missing something.

What's another computational model that maybe you've used before? Yes? Your fingers. Okay, so maybe your fingers; maybe you can use your fingers to do computations somehow. What else? Yes? A laptop. So we could say computer hardware: the hardware of your laptop. One way that a computation could manifest is as some hardware. You can say it's like integrated circuits with transistors.
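One of the models a student just named, the finite state machine, is simple enough to sketch right away. As a hypothetical illustration (names and the particular machine are mine, not from the lecture), here is a two-state finite state machine in Python that decides whether a binary string contains an even number of 1s:

```python
# A sketch of a finite state machine (hypothetical example, not from the lecture):
# two states, "even" and "odd", tracking the parity of the 1s seen so far.

def even_ones(input_string):
    """Return True if input_string contains an even number of '1' characters."""
    # Transition table: (current state, input symbol) -> next state
    transitions = {
        ("even", "0"): "even",
        ("even", "1"): "odd",
        ("odd", "0"): "odd",
        ("odd", "1"): "even",
    }
    state = "even"  # start state: the empty string has zero (even) 1s
    for symbol in input_string:
        state = transitions[(state, symbol)]
    return state == "even"  # "even" is the only accepting state
```

The whole computation is just a table and a current state; that finiteness is exactly what makes this model easy to analyze and, as the lecture suggests, different from a full programming language.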
So hardware, or we could say CPUs. That is one way we could manifest a computation: circuits and CPUs and that sort of thing. What else? What else have you seen? Yes? An abacus. An abacus? How do you spell abacus? A-b-a-c-u-s. Close enough.

What else? So how about this: Java. Java is a model of computation. That is a way that I can actualize a process. Java code is one potential model of computation. Python is another. Or x86-64 might be another; that's machine code, the type of code that your computer most natively speaks. All of these are models of computation. We can use all of these to write down algorithms, to define algorithms, to define a process for how to map that input string to that output string.

The most common ones that you're going to use are probably CPUs, because that is right now the fastest way we have of implementing computation: CPUs, transistor circuits, and that sort of thing. And in this class we're using primarily Python. So these are kind of the most common models of computation. We're not going to be starting with those, though. We're not going to start with CPUs and Python, because those are complicated. It's hard for us to formally define what a CPU is and how it operates, or what Python is and how it operates. So we're going to start with something a little bit simpler, and then hopefully we can work our way up to being able to talk about what are the things that Python can and cannot do. There are going to be things that you might want to do at some point in your life, and you can never write a Python program that will be able to do them. We can demonstrate that in a formal sort of way. But that's not where we're going to start. We're going to start with logic. Computers are based on logic, and when I say logic, I mean Boolean logic: ands, ors, nots, that sort of thing. We're starting from there.
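To make that starting point concrete, the ands, ors, and nots of Boolean logic are directly expressible in Python. As a hypothetical illustration (not the lecture's formal notation, which comes later), here is exclusive-or built from nothing but those three operations:

```python
# Boolean logic in Python: AND, OR, and NOT as functions on bits (0 or 1).
# A hypothetical sketch; the course's formal Boolean model is developed later.

def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def NOT(a):
    return 1 - a

def XOR(a, b):
    # Exclusive-or from AND, OR, NOT alone:
    # (a AND NOT b) OR (NOT a AND b)
    return OR(AND(a, NOT(b)), AND(NOT(a), b))
```

XOR is not one of the three primitives, yet it falls out of combining them; the same game, played at scale, is how logic builds up everything a computer does.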
Typically, when we look at our categories of models of computation, I personally like to break them into two categories. We have programs, or languages, like Python. And hardware is another way that we could define our model of computation: CPUs, transistors, electricity, that sort of thing. It turns out that they're going to be, in some sense, equivalent to one another. They are just two different ways of writing down the same computations. We're going to find that for anything I can write a program for, I can define some hardware that will do the exact same thing, and vice versa.

So if they're the same, if they can do the same things, how are they different? Why do we want both? What are some ways that Python is different from your computer hardware? Yes. Python needs hardware. Python is less concrete, let's say. If I decide that I have a bug in my hardware, or an inefficiency in my hardware, then I basically have to throw it away and start again. Not so with Python. What else? Yes. I'm going to generalize this statement a little more. You can represent hardware graphically; I mean, it exists, I can draw you a picture, right? But let me expand on this a little bit: Python is good for humans. Try programming with magnets and see what happens. Python is really good for humans; it is designed to be readable by humans. The CPU, though, is ridiculously fast. That's the thing that we're going to be using to actually physically perform that computation, and it is ridiculously fast. We like to use CPUs; we like to define computations in terms of electricity and circuits and transistors and so forth, because it is fast.

So, when we're defining these models of computation, and we have languages and we have hardware, we really, really care about the relationship between these two, because humans are bad at hardware. Humans are bad at hardware; humans are better at programming languages.
If we have this relationship between the two, where we can prove that they're, in some sense, equivalent, that is finally the thing that allows us to say: anything that I wrote a program for can run on my CPU. The opposite is also going to be true: I can design CPUs using programs which simulate them. Because these are going to end up being equivalent to one another, I can represent CPUs in programs, and I can represent programs in CPUs. This is essentially the entire model for how we do computation today. You, as a human, write things in your programming language, and then, because that programming language can be converted into some hardware, we run that program on that hardware. And if you were, let's say, a computer engineer trying to design some sort of new processor, the way you would design that new processor is with, maybe, some Python code that can look at that processor design and simulate its behavior for you. The fact that we can do this is what enables us to make the progress that we're able to make in computation.

What we're going to talk about today is a simple programming language and a simple hardware that maps to it, that corresponds to it, and we'll show, in some really important sense, that those are two equivalent models of computation.
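The "programs can simulate hardware" direction can be sketched in a few lines. As a hypothetical illustration (the representation and names are mine, not the lecture's), describe a circuit as a list of gates over named wires, then evaluate it in Python. The circuit here is a half adder, which adds two bits to produce a sum bit and a carry bit:

```python
# A program simulating hardware (a hypothetical sketch, not the lecture's notation):
# a circuit is a list of gates, each gate computing one named output wire.

def run_circuit(circuit, inputs):
    """Evaluate a gate list over named wires; return the resulting wire values."""
    wires = dict(inputs)  # start with the input wires
    gates = {
        "AND": lambda a, b: a & b,
        "OR":  lambda a, b: a | b,
        "XOR": lambda a, b: a ^ b,
    }
    # Gates are listed in order, so each gate's inputs are already computed.
    for out, gate, a, b in circuit:
        wires[out] = gates[gate](wires[a], wires[b])
    return wires

# A half adder: sum = x XOR y, carry = x AND y.
half_adder = [
    ("sum",   "XOR", "x", "y"),
    ("carry", "AND", "x", "y"),
]
```

Calling `run_circuit(half_adder, {"x": 1, "y": 1})` produces a sum of 0 and a carry of 1, just as the physical gates would; a hardware engineer's simulator is, in spirit, this loop with vastly more gates.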