---
Video Title: "Difficulty" of Functions
Description: Theory of Computation
https://uvatoc.github.io/week11

22.1 "Difficulty" of Functions
- How can we categorize the difficulty of functions?
- Different computing models
- The complexity class TIME

Nathan Brunelle and David Evans
University of Virginia
---

So far, we've been talking about what is possible versus impossible with Turing machines. What we're going to do next follows a pattern similar to what we saw with NAND circuits. With NAND circuits, we talked a lot about what you could versus could not do, and we also talked a lot about how efficiently you could do various things. So the next thing we're going to do for Turing machines is talk about how we can measure the efficiency of computing certain functions, or certain problems, on Turing machines.

We've already looked at one way of defining how difficult it is to compute some problem, some function: whether or not it can be computed by some model of computation. So one way to ask "how difficult is it to compute this function?" is to ask whether we can implement that function using a given model. For instance, I could answer the question "how difficult is f?" by saying whether or not I could compute f using a NAND circuit. And we showed that we can compute f using a NAND circuit if and only if f is a finite function. So for NAND circuits we have a nice, clear answer.

For finite state automata we gave a vaguer answer: being able to compute f with a finite state automaton roughly means that computing f doesn't require any memory. And for Turing machines, we gave a more convoluted answer. There isn't a tight, precise characterization of what can and cannot be computed with a Turing machine, but we did develop ways of demonstrating that certain problems are not computable. For instance, if we asked whether the function HALT can be computed by some Turing machine, we knew the answer was no, because assuming it could be produces a paradox. The same goes for ATM and Busy Beaver, and we showed many other things are not computable by Turing machines using reductions.

So we've talked about ways of answering the question "can this function f be implemented using a given computing model?", and that's one way to talk about the difficulty of f: f is so challenging that I needed a Turing machine to compute it; a NAND circuit, for instance, wasn't enough.

Another way to answer this question is: how efficiently can that function be computed in a given computing model? What resources are required to compute it using the most efficient implementation we could have? For instance, with NAND circuits we could ask how many gates are required for that function. So if we want a more fine-grained measure of how difficult a function is to compute, we can ask: what is the minimum number of gates required to compute it? Or what is the shortest depth of a circuit that computes it?
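One way to make these gate-count and depth questions precise (this notation is our sketch, not from the video):

\[
\mathit{gates}(f) = \min\{\, \mathrm{gates}(C) : C \text{ is a NAND circuit computing } f \,\}, \qquad
\mathit{depth}(f) = \min\{\, \mathrm{depth}(C) : C \text{ is a NAND circuit computing } f \,\}
\]

where gates(C) counts the NAND gates in C and depth(C) is the length of the longest input-to-output path. Both measures only make sense for finite functions, since those are exactly the functions NAND circuits can compute.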
Questions like these about NAND circuits gave us an idea of how difficult it is to compute a function. For Turing machines, which is what we'll talk about next, we typically measure efficiency using two metrics: the first is time, the second is space. In general, whenever we're looking at the efficiency of computing something, we're counting; efficiency is always a count of something. For Turing machines, the count for time is typically the number of transitions the machine makes. So to figure out the running time of some algorithm on a Turing machine, we count the number of transitions that machine has to make. For space, the count is typically the number of cells on the tape that the machine requires: what is the maximum number of cells it ever occupies? So whenever we look at efficiency, we're counting something; for Turing machines, it's transitions for time and tape cells for space. Today, we're going to talk exclusively about time.

When we look at the running time of various functions on Turing machines, we're not going to use a single number. If I ask, "what's the running time of this algorithm or this Turing machine?", the answer is usually not going to be 5, or any other fixed number. It's instead going to be a function. The reason is that Turing machines implement infinite functions, so they can have unbounded inputs. We typically can't say "no input will require more than 500 steps," because what happens if I give you an input longer than 500 bits? That will take more than 500 steps just to read. So instead, we say that as the Turing machine gets larger and larger inputs, it will require more time, more transitions, to compute the output for those inputs. Our question is: how much does the number of steps grow compared to the size of the input? We measure the running time of our Turing machines by counting the number of steps required as a function of the input size, and in particular we're interested in how those two relate to one another. If I have an input that's, say, twice as large, how many more steps am I going to do in order to compute that function?

To get a formal definition of the running time: T(n), a function mapping naturals to naturals, is going to be our running time. This function T(n) takes as input the length of the Turing machine's input and gives as output the number of steps that machine may use before it gives the output of the function. And if I have some function f that I'm trying to compute, we say that f is computable in T(n) time so long as there exists a Turing machine M such that, once I look at large enough n (this is like asymptotic notation: I don't care what happens for small n, just for sufficiently large input sizes), for every input x of length n, I can guarantee that M halts after at most T(n) steps and gives the right answer.
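Written symbolically, the definition just stated looks like this (the notation is ours; the video gives it in prose):

\[
\exists M\ \exists n_0\ \forall n \ge n_0\ \forall x \in \{0,1\}^n :\ M \text{ halts on } x \text{ within } T(n) \text{ steps and } M(x) = f(x).
\]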
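To make the counting concrete, here is a minimal Python sketch (our illustration, not code from the course) of simulating a one-tape Turing machine while tallying both resource counts mentioned above: transitions for time, and tape cells for space.

```python
def run(delta, start, accept, reject, tape_input, blank="_"):
    """Simulate a one-tape Turing machine, counting resources.

    delta maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right).
    Returns (final state, time = transitions taken, space = cells occupied).
    """
    tape = dict(enumerate(tape_input))   # sparse tape: position -> symbol
    state, head = start, 0
    steps, visited = 0, {0}
    while state not in (accept, reject):
        symbol = tape.get(head, blank)
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += move
        visited.add(head)                # space: every cell the head occupies
        steps += 1                       # time: one count per transition
    return state, steps, len(visited)

# Example machine (hypothetical): scan right over the input, accept at the
# first blank. It makes |x| + 1 transitions, i.e. its running time is n + 1.
delta = {
    ("scan", "0"): ("scan", "0", +1),
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("acc", "_", +1),
}
for x in ["0110", "01101100"]:
    print(x, run(delta, "scan", "acc", "rej", x))
```

Doubling the input length here roughly doubles the step count, which is exactly the kind of relationship between input size and number of transitions that T(n) is meant to capture.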
So this is how we define our running time. We say the running time is T(n) provided that for any input x of length n, the machine runs for T(n) or fewer steps before giving the right answer. So that is our formal definition of runtime. It answers the question: what's an upper bound, in terms of n, the input size, on the number of steps the machine takes before giving an answer?

Question. Yeah, so T(n) is undefined for Turing machines that don't halt, because the output of this function needs to be a natural number, and infinity isn't a natural number. Typically, you're only allowed to look at efficiency for Turing machines that do halt; otherwise they require, say, infinite resources.

We can talk about complexity classes for our running times as well. Just like for NAND circuits, where we could talk about sets of problems based on how many gates they required to implement, here we're going to talk about sets of functions by looking at how much time a Turing machine needs to compute them. So TIME(T(n)) represents the set of all Boolean functions that are computable in T(n) time, and we're going to look at some examples of these.

In general, this is a similar pattern to what we saw with size for NAND circuits: more time gives us more functions. Because our definition of running time made T(n) an upper bound (the machine had to halt within T(n) steps when the input was of size n), anything that halts within, say, 10n steps will, once n gets large enough, also halt within n^3 steps. So as we allow asymptotically more time, we get more and more functions in these classes: TIME(10n) is a subset of TIME(n^3), which is a subset of TIME(2^n), and so forth.

See you next time. Well, bye! Bye!