---
Video Title: What is a Computer?
Description: Theory of Computation https://uvatoc.github.io
5.1: What is a Computer? - What does a computer do? - Inputs to Outputs
Nathan Brunelle and David Evans, University of Virginia
---

This course is called Theory of Computation, so it might help if at some point we actually talk about computing things. When we define computation today, our hope is that the thing we're defining will apply equally well to ancient Greek fossilized computers, to the laptop sitting on your desk, and to computers that might exist 500 years from now, whose shape we haven't even begun to imagine.

So far, we've been defining various things precisely, like natural numbers and sets, and we mentioned on day one of the semester that one of the goals of the class was to think really precisely about computing: to reason about what computers can do, what computers can't do, and what resources a computer might require for a certain task. So next we're going to finally start defining what exactly computing means, what computation is, so that we can start answering these sorts of questions.

Let's get started with a little bit of a discussion. When we talked about natural numbers, we had some intuitive idea for what a natural number looked like, how it operated, how addition worked, and that sort of thing. Then we took a step back and said: we have this intuitive idea for what's going on, but we don't have a precise description of what's happening. So we came up with a definition of a natural number designed to align with that intuitive behavior. We'd like to do a similar thing with computers. We'd like a definition of computers that aligns with our intuitive idea of what computers are, what computation is, and what computers do.

If we had a good definition of computation, what behavior of a computer would we want it to capture? What are some thoughts? Yes? So we want to take input. Okay, take input, if I can write on this thing. What else? Yes? We want it to be programmable. So we want to be able to program it, or potentially reprogram it. That is two words, I promise. Yes. So it should have some sort of memory, maybe. What else? Non-trivial output. Okay, so it should be able to produce output, and maybe interesting output of some kind. What else? Yes? Okay, so it needs to be repeatable. Whatever it does needs to be repeatable: every single time you run it, it should give the same answer. Maybe we want that to be a behavior.

If you had to prioritize these, which one would you think is most important: input, being able to reprogram, memory, output, repeatability? Does anybody have an idea which of these is most important? If you had to pick one, yeah. Repeatable. Repeatable? Why do you think that's the most important? If you don't know what the relationship between the input and the output is going to be, can you even call that a computation at all? Is there ever any advantage to doing things where the result is not the same every time? For instance, one situation where we might not be repeatable is if we have some sort of randomness.
Is randomness ever helpful with computation? Yes. There are certain things we might want to do with a computer where we assume it operates on something chosen randomly; cryptography, for example, is a place where we like randomness. We could fold this into our model of computation by saying that the mapping from input to output is always the same thing; it's just that you happen to choose one of your inputs randomly. So maybe we could do something like that, and that would add in this notion of randomness. As an aside, the way we typically handle randomness versus determinism is to have some models of computation where randomness exists and others where it doesn't. These are just different ways we might consider what a computer is going to do.

I'm going to give what I think is the simplest viable answer for what a computer does. We're intuitively defining what a computer does here; we're not precisely defining it just yet. A computer is going to be something that performs a function. And what kind of function is that going to be? One that maps a string to a string. So it maps strings to strings. We have some sort of function, just like we saw before, where the type of the input for that function is a string and the type of the output is a string.

Any thoughts on why we pick strings? Why strings? When I say string here, I mean very much the Python or Java sense of the word: a string is literally just a sequence of characters in order. Yes? Exactly. Typically, no matter what your input is, even if you think of it as an integer, or a photograph, or something like that, no matter how complicated it is, you could always think of it as a string instead. We like strings because they are the most general representation we could pick. If you have an integer, you could say that your integer is a string of the characters 0 through 9 if you're in decimal, or of 0s and 1s if you're in binary. If it's English text, then it's a string over your English alphabet, and so forth.

So we're going to say our function maps strings to strings. We're going to have our input, which is a string, and our output, which is also a string. The thing in the middle is our computer. Our computer is basically the process that relates the input string to the output string. So a computer is a process that performs some function, which is the relationship between the input string and the output string. This is our super stripped-down, simple definition of what computers do: they map input strings to output strings. And we're going to be talking about the processes that do that. That black box is essentially our computer, and there are lots and lots of different ways that this computer could actually manifest itself. We can write down a function however we want; we can use our mathematical notation to write down the function.
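For instance, here is one way we might write such a function down in code rather than in mathematical notation. This is just an illustrative sketch in Python, not something from the course materials; the function names are made up for the example.

```python
def successor(numeral: str) -> str:
    """A "computer" in the strings-to-strings sense: the input string is
    read as a decimal numeral, and the output string is the numeral of
    the next natural number."""
    return str(int(numeral) + 1)


def coin_flip_report(input_string: str) -> str:
    """Randomness folded into the input, as discussed above: the mapping
    itself is fixed and deterministic, but the caller supplies a randomly
    chosen bit as the first character of the input string."""
    return "heads" if input_string.startswith("1") else "tails"


print(successor("41"))           # "42"
print(coin_flip_report("1..."))  # "heads"
```

Both of these are just string-to-string mappings; even the "random" behavior lives entirely in how the input string was chosen, not in the function itself.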
The computer is, specifically, the process we use to realize that relationship, to map the input string to the output string. So that is our computer.
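To illustrate the difference between the function and the process, here is a small Python sketch (again with made-up names, not from the lecture) of two different processes that realize the same string-to-string function, reversing the input. The function, the mapping from each input string to its reversal, is one and the same; each implementation below is a different "computer" that carries it out.

```python
def reverse_by_slicing(s: str) -> str:
    # Process 1: reverse the string in one step using slice notation.
    return s[::-1]


def reverse_by_loop(s: str) -> str:
    # Process 2: build the reversed string one character at a time.
    out = ""
    for ch in s:
        out = ch + out
    return out


# Same function (same input-to-output mapping), two different processes.
assert reverse_by_slicing("theory") == reverse_by_loop("theory") == "yroeht"
```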