---
Video Title: Computability in Theory and Practice
Description: Theory of Computation
https://uvatoc.github.io/week9
18.6: Computability in Theory and Practice
- An Erudite Debate on Computability
- The difference between asking if something can be "computed" and if a function is "computable"
- What we actually proved in proving ACCEPTS is uncomputable
- Previewing other uncomputable functions
David Evans and Nathan Brunelle
University of Virginia
---

[Video clip: Ali G] Science. What is it all about? Technology. What is that all about? Is it good or is it whack? There's a bloke from around my hood, Staines, called Rainbow Jeremy, who reject everything to do with science. He just chill at home, he smoke his own homegrown, and check this, he don't have a telly. I ain't shitting you. He don't have a telly. He lives in a house, though. And that house is a product of technology. No, he ain't got no technology in it. You can check out his website. He ain't got nothing in it. The house itself. He wears clothes, shoes, he eats food. Will computers ever be able to work out what... Hmm... multiplied by... I ain't finished, you don't know what I was going to say. The answer is yes. The most powerful computer in the world today does 36 trillion operations a second. So would it be able to multiply [recites a long string of digits]? You don't know what I was gonna say. Multiply it? You don't even know. [more digits] A hundred? Yes. A million? Yes. A thousand? Yes. A million? Yes. Whatever numbers you want, it will be able to multiply. 0.999998. Without blowing up? Yes.

Gonna move it on a little bit. The rest is good, but you'll have to watch it yourself, because it's not as relevant to this class as that part.

So what do we think? Who's right in this intellectual debate they're having about computability? What's the big disconnect they have? There are actually a lot of quite interesting things in that video. Every time you watch it, it's funny, so you can watch it over and over again. But he defined a function. He's got some function taking some long inputs... I guess he had some eights in there, and a lot of nines. And here is what the function is supposed to compute: it takes in two inputs and it should output their product. Is this computable? Can we compute multiplication? Sure. You learned an algorithm, probably in third grade or so, that can multiply any numbers no matter how big they are. We definitely can compute it. For any inputs, we can define a Turing machine that computes it, so multiplication is computable.

So does that mean a computer can actually compute it? The rude guy in the video, who interrupts Ali G before he finishes describing his problem, says: oh, the supercomputers can do mega-billions of computations per second. Is that relevant to whether it's computable or not? Yeah, not relevant at all. But it is relevant to the question Ali G is asking, right? He's not asking the theoretical question, "is multiplication computable?" He cares about his actual specific input. If we're talking about specific inputs, then we're talking about something very different: whether a particular machine can compute some particular instance of a problem. It's all about that instance of the problem. So it does matter how big the numbers are. His number probably got past what most of our everyday computers would compute correctly, depending on how you do the multiplication, but it probably didn't get past what the supercomputer could do yet.
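The point that multiplication is computable for inputs of any size is easy to make concrete. Below is a minimal sketch (my code, not from the course materials) of the third-grade algorithm mentioned above, written over digit strings so the inputs can be as long as memory allows; Python's built-in integers already do arbitrary-precision arithmetic, so the final check compares against them.

```python
def grade_school_multiply(a: str, b: str) -> str:
    """Multiply two non-negative integers given as decimal strings."""
    # The product has at most len(a) + len(b) digits.
    result = [0] * (len(a) + len(b))
    # Work from the least significant digit of each number outward.
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    # Drop leading zeros and convert back to a string.
    digits = "".join(str(d) for d in reversed(result)).lstrip("0")
    return digits or "0"

# Works far past any fixed machine word size, limited only by memory:
assert grade_school_multiply("999999999999", "888888888888") == \
    str(999999999999 * 888888888888)
```

Nothing in the algorithm depends on the length of the inputs, which is exactly why multiplication is computable even though any particular physical machine has a largest instance it can handle in practice.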
But eventually he could have. So if you're asking what any physical computer we have access to today can compute, that's a question about a particular instance; there's some largest instance of this problem that we cannot compute today. So we have to be very careful about what we're talking about: when we say a function is computable or not, and when we ask whether we can compute something, those are quite different questions. Certainly there is a Turing machine that can compute this. What it means for a function to be computable is that, with unlimited resources, we can solve the problem for inputs of any size: our Turing machine never runs out of memory, and we have an algorithm that is always going to work.

Now, we showed uncomputability: we showed ACCEPTS is not computable. If there were a next segment of the video where they were talking about ACCEPTS, could you interrupt Ali G and tell him "no, there's no answer" before he finishes describing his input? What did we actually prove about ACCEPTS? I can tell you the right answer for ACCEPTS for some inputs. If your input w is, let's say... and I know that it does not encode a valid Turing machine, I can tell you that ACCEPTS is zero for any x, so I don't even have to hear x. Once I know your w input, for lots of w's, I can tell you the answer. For some, I have to wait until I hear your x input, but maybe I can still tell you the answer.

So we certainly didn't prove that we can't ever answer a problem instance correctly; there are lots of instances we can answer correctly. What did we actually prove? We proved that there's no machine that can correctly produce the output for every instance. We found some instance that led to a contradiction, and it was a very specific instance: the input that leads to the contradiction is w_D, the description of the machine D we constructed. We found that D doesn't work on that input, and that meant that A doesn't work on that input either. We found one input. We don't even know what it is, right? If we actually produced this machine, we might be able to figure out what it is. It's just one input for which we showed we can't have a machine that correctly computes the output. That's very different from saying that for a particular instance of the problem, we can't solve it. Most instances of ACCEPTS, we probably can solve. Anything that you could solve as a human, you could certainly write a program to solve, unless you're using some magical intuition. Anything you could solve by thinking about it, you could write a program to solve. And certainly we could write a program that, for many w's and many x's, would compute ACCEPTS correctly. What's one simple program that would do it? Simulate the machine for a million steps. If it has accepted by then, you accept; if it hasn't, you don't know the answer. But for many inputs you're going to get the correct answer by doing that, and if you simulate for a billion steps, you'll get it for more. You could do more clever things to get more answers correct. (There's a sketch of this bounded-simulation idea below.)

So we've got one uncomputable function, and we can get lots more. Once we have one, we could get more by doing the same kind of contradiction. We could also get more by using the one thing we already know is uncomputable. The first strategy would be, for any new problem, try to find a contradiction. That can be pretty hard: we have to play games with self-evaluation and find the right construction. But now that we've got one, we have an easier strategy.
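Before getting to that easier strategy, here is a minimal sketch of the bounded-simulation idea mentioned above (my code, using a made-up machine encoding rather than the course's): a program that answers ACCEPTS correctly on many instances but sometimes has to give up and say "unknown".

```python
# A machine here is a dict mapping (state, symbol) -> (state, symbol, move),
# with move in {-1, +1}; a missing transition means the machine halts.
ACCEPT_STATE = "q_accept"

def accepts_within(machine, start_state, x, max_steps=1_000_000):
    """Return True if the machine accepts x within max_steps steps,
    False if it halts without accepting, None if we ran out of steps."""
    tape = dict(enumerate(x))            # sparse tape; blank cells read "_"
    state, head = start_state, 0
    for _ in range(max_steps):
        if state == ACCEPT_STATE:
            return True
        symbol = tape.get(head, "_")
        if (state, symbol) not in machine:
            return False                 # halted in a non-accepting state
        state, write, move = machine[(state, symbol)]
        tape[head] = write
        head += move
    return None                          # out of steps: answer unknown

# Example: a machine that accepts exactly when its input starts with "1".
looks_at_first_symbol = {
    ("q0", "1"): (ACCEPT_STATE, "1", +1),
    # on "0" or blank there is no transition, so it halts and rejects
}
print(accepts_within(looks_at_first_symbol, "q0", "10"))  # True
print(accepts_within(looks_at_first_symbol, "q0", "01"))  # False
```

Raising max_steps to a billion gives definite answers on more instances, exactly as in the discussion, but no fixed step bound works for every instance, which is consistent with ACCEPTS being uncomputable.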
We already know ACCEPTS is uncomputable. So if we can construct a machine that computes ACCEPTS using a machine that solves the new problem, then we know the new problem is also uncomputable. And it turns out that's something we can do pretty easily; at least once you understand this, you can see that we can handle almost every problem of this type the same way. We will continue with that next week. But I will preview the theorem, Rice's Theorem, that Rice Hall is not named after (though you should think of it as being named after it, unless you know the Rices who put up the money). Basically, it says that any interesting property of a program is uncomputable, because if we could compute it, we could also compute ACCEPTS or HALTS or any of these things that we know are uncomputable. So we'll start seeing examples of that next week. We'll see you next week.
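As a preview of that reduction strategy, here is a minimal hedged sketch (hypothetical function names, not anything from the course): if anyone handed us a program that computes HALTS, we could use it to compute ACCEPTS, and since ACCEPTS is uncomputable, no such program can exist.

```python
def accepts_using_halts(halts, simulate, w, x):
    """Compute ACCEPTS(w, x), assuming two hypothetical helpers:
       - halts(w, x): the claimed solver, deciding whether machine w halts on x,
       - simulate(w, x): runs machine w on input x to completion and reports
         whether it accepted (safe to call only when w halts on x)."""
    if not halts(w, x):
        return False          # w runs forever on x, so it never accepts
    return simulate(w, x)     # w halts, so the simulation terminates

# If halts() existed, the function above would compute ACCEPTS for every
# (w, x), contradicting what we proved; so HALTS is uncomputable too.
```

The key design point is that the reduction never runs w on x unless halts has already promised the run terminates, so the constructed machine always gives an answer on every input.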