---
Video Title: Some Functions are Expensive
Description: Theory of Computation
https://uvatoc.github.io/week5
11.5: Some Functions are Expensive
- Theorem 5.2: Counting Programs
- Counting Functions (how many functions from n-bit input to 1-bit output)
- Theorem 5.3: Some Functions are Expensive
David Evans and Nathan Brunelle, University of Virginia
---

Our goal is to understand: are there functions that are expensive to implement? Expensive means the size of the circuit scales exponentially in the size of the inputs.

Now that we've said what this theorem actually means, that there exists some function in that set, we can pick some constant. Let's pick 9, because 10 is too hard to write, and that will turn out to work.

If you remember, we can represent these programs line by line: each line of a NAND-CIRC program is just a triple of variables. A NAND gate has two inputs and one output, and in a NAND-CIRC program that's all we can do. So every line is just three variables, which is how we can represent every NAND-CIRC program, and the length of a line is just three times the length of a variable. In the worst case, each line introduces three new variables. That wouldn't really make sense, but we're just trying to get a bound here. We have at most 3s variables, we can represent each variable with log(3s) bits, and we have three of them, so each line can be represented in 3 log(3s) bits. The total number of bits for an s-line program is less than or equal to 3s log(3s), and that's what ends up in this equation. All of those bits can be 0 or 1, so 2 to that number is the number of possible programs. The big O here makes this more confusing to read; we actually plugged in the numbers, and because it's a less-than, it is fine to say 9s log s. We've just shown some way that all the possible s-line programs can be represented with that number of bits.

Yeah? Why is each variable log(3s)? Okay, so we can think of there being a table of variables. We've got to identify the variables, so we just call them variable number 0, 1, 2, 3, and so on: give each variable a number. How many different variables can there be? A really rough, easy way to count is that each instruction has three variables, so if we don't care about dependencies or anything else, there can be three new variables for each instruction. Because two of the variables are inputs, it would probably be safe to say each line introduces only one new variable, but then we'd have to do something special to count the inputs, so let's just say three. So the size of this table is at most 3s entries, and we just need to represent the numbers from 0 to 3s - 1. Again, we can do that with log(3s) bits, and this is a base-2 log. We're not trying to get a tight bound here; we're just trying to get some upper bound, so we know the actual number of bits we need is less than or equal to this.

So, that's the number of programs. This is where we've got to be careful to distinguish programs from functions. We've got functions, and we've got programs. SIZE(s) is the set of functions that can be computed by s-line programs. So it is counting functions, but it's counting functions that correspond to programs, so it's really counting the number of different programs. That is an upper bound, because if we said it was an exact count, we would have to know that each program corresponds to a different function, which they do not.
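A quick sanity check on this counting, not from the lecture itself: a minimal Python sketch that assumes each NAND-CIRC line is stored as a triple of variable indices (the name program_bit_bound is just illustrative), comparing the resulting bit count against the 3s log2(3s) and 9s log2(s) bounds.

```python
import math

def program_bit_bound(s):
    """Bits sufficient to encode an s-line NAND-CIRC program.

    At most 3s distinct variables can appear, so one variable index
    fits in ceil(log2(3s)) bits, and every line is a triple of indices
    (output, input1, input2).
    """
    bits_per_var = math.ceil(math.log2(3 * s))
    return s * 3 * bits_per_var  # about 3s * log2(3s); the ceiling pushes it slightly above

# For s >= 2, the looser exponent 9*s*log2(s) dominates 3s*log2(3s),
# which is the "less than" step that lets us say 2^(9s log s) programs.
for s in [2, 10, 100, 1000]:
    exact = 3 * s * math.log2(3 * s)
    loose = 9 * s * math.log2(s)
    print(s, program_bit_bound(s), round(exact, 1), round(loose, 1))
```

In every row the 9s log2(s) column is at least the 3s log2(3s) column, which is what justifies the simplification in the video.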
Many of those programs compute the same function: we might have two programs with different bit representations that compute the same function. If we were trying to get a tight bound, we'd have to worry about that. We're not. So we know there are no more than that number of functions that we can compute with s lines. Everyone happy with that? Everyone unhappy with that?

This is a result that says our circuits are not powerful enough: there are only so many functions we can compute. How many functions are there? Maybe we should be happy with it; maybe the number of functions we can compute with our circuits is big enough to include all the ones we care about. So how many functions are there that take n one-bit inputs and map them to one output, where the output is either zero or one? A function is going to map each possible input to one of those outputs. So how many are there? Yeah? Two to the n. Right, there are 2^n possible inputs: the function has n inputs, each is one bit, so there are 2^n inputs. And for each one of those inputs, what does the function do? It has to map that input to either zero or one. So how many different functions are there? Yeah, 2 to the 2^n. Any different way of mapping each of those inputs to an output bit is a different function. So there are 2^(2^n) functions. That's a lot of functions. Remember, our number of programs is less than 2^(9s log s).

Do we get to a result that says there are functions that are expensive to compute? That's what theorem 5.3 is doing. It's really a simple counting theorem, but because of some of the notation and the definitions and the way things are written, it's not as simple to see that that's all it's doing as it might be. All we're doing with this theorem is saying that there must be some functions of n variables that cannot be computed by any circuit of that size.

From our definition of big O, we have some constant c. In our other definition, we picked a specific one; in this case, we don't really need to. What we're trying to do here is show that there's a function that's not in that set, so we're just going to plug into that definition. This is the size, where s is now delta * 2^n / n: there's a new constant delta, and n is the number of inputs. Picking s this way is a fairly arbitrary choice, but it's a choice that's made so things work out. Plugging it in and simplifying, we get 2^(c * delta * (2^n / n) * log(delta * 2^n / n)). Remember, c we don't get to pick, but we do pick delta. So what should we pick as delta to simplify this? Yeah, delta is 1 over c: then c * delta is 1, and we can throw that factor out. The theorem just says there exists a delta, so we can pick any delta we want. Now the exponent is s log s with s being the whole expression, which works out to (2^n / n) * log(2^n / (c * n)), and that log is less than n, so the whole thing is definitely less than 2^(2^n). We don't have to work out exactly what the log formulas are; we should be able to see that, yeah, that's going to be less than 2^(2^n). And that's all we need to know: for sufficiently large n, meaning n greater than the n0 picked as in the video, the count of functions that can be implemented by an s-line program is less than the number of functions there are.
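To see the counting argument in theorem 5.3 with concrete numbers, here is a small sketch (mine, not the video's) that plugs s = delta * 2^n / n with c = 9 and delta = 1/c into the 2^(c * s * log s) bound and checks that the exponent stays below 2^n; the names C, DELTA, and exponents are assumptions for illustration.

```python
import math

C = 9          # constant from the program-counting bound 2^(C * s * log2(s))
DELTA = 1 / C  # our free choice of delta; delta = 1/C makes C * delta = 1

def exponents(n):
    """Compare log2(# functions computable in s lines) with log2(# all functions).

    With s = DELTA * 2^n / n lines, at most 2^(C * s * log2(s)) functions are
    computable, while there are 2^(2^n) functions from n bits to one bit.
    Comparing exponents keeps the numbers printable.
    """
    s = DELTA * (2 ** n) / n
    computable_exp = C * s * math.log2(s)  # = (2^n / n) * log2(2^n / (C * n))
    all_functions_exp = 2 ** n
    return computable_exp, all_functions_exp

for n in [8, 12, 16, 20]:
    comp, alln = exponents(n)
    print(n, round(comp), alln, comp < alln)  # True: some function is left out
```

The gap between the two exponents works out to (2^n / n) * log2(C * n), which grows with n; that is the "sufficiently large n" in the theorem.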
So we have this property that there must be some functions that cannot be implemented by those circuits. It's very exciting. And this is the second one.