Lec 22: Turing Machine Variants & Cook's Theorem Setup - CSCI 381 - Goldberg
Administrative Updates
Same as that of Lecture 20.
Finite State Automata
An automaton (machine) over an alphabet Σ is a weighted graph:
- Q (vertices) are called States, generally labeled q0, q1, q2, …
- The edges are called Transitions
- The “weights” associated with each edge are the letters of the alphabet that allow the machine to go from one state to another.
- Start State: q0 - All processing starts here.
- Final States: a subset F ⊆ Q - For a string to be accepted by the machine, the processing must end in a final state.
- Note: a subset can be empty, so it is possible that a given automaton does not have any final states.
The language of the machine consists of all strings (over the alphabet Σ) such that, when starting in the start state and processing each character of the string according to the “rules” embedded in the weights associated with the transitions (edges), the machine ends up in a final state.
More formally, a machine (automaton) is defined as:
- Q - States (vertices)
- δ (delta) - Transition function δ: Q × Σ → Q - specifies the edges, called transitions
- Σ - Alphabet / Tokens (letters or characters) - the weights of the transitions/edges
DFA Example
Consider the following DFA:
- Q (States):
- Σ (Alphabet):
- q0 (Start state):
- F (Accepting states):
State transitions:
| From | Token | To |
|---|---|---|
Processing examples:
ababbcc
Reject, since the state the machine ends in is not a final state; the issue of a “final” state is only relevant after the complete processing (parsing) of the input string.
ababbc
Accept, since the state the machine ends in is a final state.
bcb
The transition for this state on this input character is not defined (the transition function δ is partial); reject, even though the current state is final. cf. Tom Hanks in The Terminal.
Notes and conventions about strings processed by an automaton (machine):
- States are vertices and conventionally labeled q0, q1, q2, ….
- The machine always starts in a specific state (node) termed the start state, identified by a ”>” marker next to it. Conventionally, the start state is labeled q0.
- The states of this machine are given a status as to whether the state is a final (accepting) state or a nonfinal (rejecting) state. This status is significant when processing a string, as it determines whether the input string is a member of the language accepted by this machine (detailed next). Please note that checking whether a state is final or nonfinal is ONLY done AFTER processing the entire string.
- The processing of an input string c1·c2·…·cn (where the ci are alphabet characters and the dot is the concatenation operator, which simply indicates which character follows in sequence after another character) is based on which state the machine ends up in when following the transitions (edges) of this graph, character by character, from state to state.
- If the machine ends up in a final state, the string is accepted as a word (member) of the language of this machine. If the machine ends up in a nonfinal state, then the machine rejects this string and does not consider it a member of its language. The rules that govern which state you enter after exiting the current state on an input character are user-defined by the state transition function δ. If a transition is not defined, the string is automatically rejected.
- The above diagram (purposely) deviates from the typical first example in an automata course in the following manner: the alphabet for this machine appears to be exactly the letters used as weights on the edges. (Character weights on edges are called Tokens.) PRESUMABLY, for each state, the diagram should specify an edge for each letter of the alphabet. Yet, for example, some states have no outgoing edge for the token “a” or “c”. What does the machine do when it is in a state and the input is a letter for which no information (no edge) tells the machine what to do? The machine simply stops the processing immediately and thereby summarily rejects the entire string. This would be the case EVEN IF the state we are currently in is a Final State. It still rejects the string because it could not process the entire string.
- If some state had two choices on input token “a” - for example, an edge to one state (as is now) and also an edge to a second state (which isn’t here) - then which state does it go to? One of the two, both, or none? The answer to this question is based on the concept of “nondeterminism,” which determines the best pathway to pursue.
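The processing rules above - including summary rejection the moment a transition is undefined, and the final-state check happening only after the whole string is consumed - can be sketched in Python. The machine below is a hypothetical stand-in (the lecture’s actual diagram is not reproduced here), but the acceptance logic is the same.

```python
def run_dfa(transitions, start, finals, string):
    """Return True iff the DFA accepts the string.

    transitions: dict mapping (state, token) -> next state (may be partial).
    An undefined transition rejects immediately, even from a final state.
    """
    state = start
    for token in string:
        if (state, token) not in transitions:
            return False          # no edge for this token: summary rejection
        state = transitions[(state, token)]
    return state in finals        # final-state check only AFTER the whole string

# Hypothetical machine over {a, b, c} (NOT the lecture's diagram):
delta = {("q0", "a"): "q1", ("q1", "b"): "q0", ("q0", "b"): "q2",
         ("q2", "c"): "q2"}
print(run_dfa(delta, "q0", {"q2"}, "abbc"))   # True: ends in q2, a final state
print(run_dfa(delta, "q0", {"q2"}, "ab"))     # False: ends in q0, nonfinal
print(run_dfa(delta, "q0", {"q2"}, "ac"))     # False: ("q1","c") is undefined
```

Representing δ as a dictionary makes the “partial function” behavior natural: a missing key is exactly a missing edge.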
Chomsky Hierarchy & Machine Variants
Noam Chomsky (1956) identified four levels of computable languages (not including the Unsolvable/Noncomputable level; although not included in the Chomsky Hierarchy, some textbooks include them as an outermost ring and call them type -1):
| Type | Name | Machine | Auxiliary Memory | Other Actions | D vs. N |
|---|---|---|---|---|---|
| 3 | Regular | Finite State Automata | None | None | D = N |
| 2 | Context-Free | Push Down Automata | Stack | Push/Pop | D ⊊ N |
| 1 | Context-Sensitive | Linear Bounded Automata | Tape, linear in length | Left/Right/Print | Unknown |
| 0 | Unrestricted | Turing Machines | Infinite Length Tape | Left/Right/Print | D = N |
The previous sections describe the essential automaton/machine that is at the core of each of the above machines.
Potentially, there can be an additional alphabet (termed the output alphabet) for the memory storage, separate from the input alphabet. This is most pronounced for the PDA. Thus:

M = (Q, Σ, Γ, δ, q0, F)

where Q, Σ, δ, q0, and F have all been defined for finite state automata, and additionally Γ is the stack alphabet - the characters that will be pushed onto/popped from the stack during the transitions. The stack alphabet can be the same as the input alphabet.

Both Context-Sensitive and Unrestricted machines use a tape, and when using tapes, the input and output are what is on the tape. Thus, for these automata, Γ is not a special output alphabet but essentially the input alphabet. However, there is a need for Γ to contain one or more extra symbols termed markers:
- A special blank symbol B (not Λ, the empty string) to indicate that the current tape cell does not contain an alphabet letter. This is the case when “purchasing a new tape,” for example.
- Extra marker symbols which can be used to organize the data better on the tape (Davis tends to avoid using these; Cook uses this extra marker in his paper). A simple example would be if the input/output contained multiple data values - it would be easier to identify the start and end of a given data value storage if it was delimited by special marker symbols:
#Input1#Input2#...#Inputn# and likewise #Output1#Output2#...#Outputn#
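A quick sketch of how such markers make the tape contents easy to pick apart (the values shown are made up for illustration):

```python
# A '#'-delimited region of tape holding three made-up data values.
tape = "#101#0110#11#"
values = [v for v in tape.split("#") if v]   # drop empty pieces from the edge markers
print(values)                                # ['101', '0110', '11']
```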
Although the last machine (Turing machine) has an infinite amount of memory available, one cannot actually use the “infiniteness” of that memory, since that would require an infinite loop, which goes against Turing’s notion of an “effective” computation - the machine MUST halt (stop) before giving any output.
When we say “computer”, the computation model actually used in all of classic computer theory is that of the Turing machine. The Turing machine is an automaton that has an alphabet and a graph-based machine termed the transition graph, or stored in a transition table (similar to an adjacency matrix for weighted graphs). The nodes/vertices of this graph/automaton are called States and the arcs/edges are called Transitions. The machine has an auxiliary tape (infinite in both directions) that the input is read from, the processing scans and reads from, and the output is printed to.
The Turing machine is comprised of two components: the state diagram including the transitions, and a separate memory storage device (the “tape”). The state diagram portion is the exact same weighted graph that exists at all levels of languages in the Chomsky Hierarchy. The Hierarchy has the Regular languages at the lowest level and then Context-Free, Context-Sensitive, and Unrestricted. The lowest level (Regular) can be processed by a finite state automaton - precisely the weighted graph we are referring to. What it does not have is memory to store values such as which state the current transition came from and which letter was just processed to get to the current state. This same machine is identical for each machine that processes the higher levels of language as well; the difference between the machines is whether and to what extent there is a separate memory storage.
- The finite state automaton (for regular languages) does not have a memory storage.
- The Pushdown Automata (PDA) that processes Context-Free languages has a single stack as a memory storage.
- The Linear Bounded Automaton that processes Context-Sensitive languages has a tape as a memory storage, but this tape is “linearly bounded” in size, so that the amount of storage on the tape is O(n), where n is the length of the original input.
- The Turing machine processes all computable languages and processes the highest level (and therefore certainly the lower levels) of languages. The Turing machine has a tape, infinite in length in both directions, providing as much memory as necessary to process any computable language.
Regarding the state transition graph component of these machines, we only discussed a single variant - deterministic versus nondeterministic: at the lowest (regular) and highest (unrestricted computable) levels of language, deterministic machines solve the exact same languages as their nondeterministic alternatives. Nondeterministic PDA actually process more Context-Free languages than deterministic PDA. For Context-Sensitive languages, it is unknown whether or not nondeterministic linear bounded automata process more languages than their deterministic alternatives. This is all regarding the state transition graph. What about variations in the memory storage?
We will now explore a number of variants of the Turing machine tapes; Martin Davis proves for each such variant that it computes the same algorithms as the original Turing machine without the variation. An interesting memory-storage variation exists for the Pushdown Automaton. Recall that the PDA is a state transition graph with a single extra stack as a memory storage. What if we added one more stack and could interact with two stacks independently? Martin Davis proves that a Pushdown Automaton that has two stacks (instead of the one provided by the definition of a PDA) is actually equivalent to a Turing Machine!
Push/Popping can simulate moving the tape left/right, and similarly simulate printing. The intuition is that the two stacks represent the left and right halves of a single tape, with the tops of the stacks meeting at the current scanhead position. Moving the scanhead left corresponds to popping from the left stack and pushing onto the right stack, and vice versa for moving right. Printing (overwriting the current cell) is simulated by popping the top symbol off one stack and pushing a new symbol in its place.
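The two-stack simulation can be sketched as follows; the class and method names are my own, chosen to mirror the tape operations described above.

```python
BLANK = "B"   # blank-cell symbol (an assumption; any reserved symbol works)

class TwoStackTape:
    """Two stacks simulating a Turing tape: the left stack holds cells to the
    left of the scanhead, and the top of the right stack IS the scanned cell."""

    def __init__(self, cells):
        self.left = []                                  # top = cell nearest the head
        self.right = list(reversed(cells)) or [BLANK]   # top = scanned cell

    def read(self):
        return self.right[-1]

    def print_symbol(self, s):
        self.right[-1] = s              # overwrite the scanned cell

    def move_right(self):
        self.left.append(self.right.pop())  # scanned cell joins the left half
        if not self.right:
            self.right.append(BLANK)        # a fresh blank cell comes into view

    def move_left(self):
        self.right.append(self.left.pop() if self.left else BLANK)

t = TwoStackTape(list("abc"))          # head starts on 'a'
t.print_symbol("x")                    # tape is now x b c
t.move_right(); t.move_right()
print(t.read())                        # 'c'
```

Popping from one stack and pushing onto the other is exactly the “tops meet at the scanhead” picture: each move shifts one cell across the boundary.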
Cook-Levin Theorem
Cook (1971) proved that ALL problems in NP reduce to Satisfiability, making Satisfiability NP-complete. (cf. Karp, who had to prove reductions from Satisfiability and its descendants one by one.) Satisfiability is the problem of finding values for the variables involved in a Boolean Predicate that cause the Predicate to evaluate to TRUE on those values. This should sound like “checking a checklist” - verifying that all the conditions have been met. Any nondeterministic program first “guesses” the (correct) solution, but we need to verify that the “guessed” solution makes the Predicate TRUE. For NP, this verification process must be computed in a deterministic polynomial amount of time. As long as you have a correct verification procedure for a specific NP-Complete problem, Satisfiability solves the other problem for you by “guessing” values that will make the verification procedure evaluate to TRUE.
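The “checking a checklist” view of verification can be sketched as follows; the clauses are an arbitrary made-up predicate in CNF form, not one tied to any particular problem.

```python
def satisfies(assignment, clauses):
    """CNF checklist: every clause must contain at least one literal made true.
    A literal is a (variable, desired_value) pair."""
    return all(any(assignment[v] == val for v, val in clause) for clause in clauses)

# Made-up predicate: (x OR NOT y) AND (y OR z)
clauses = [[("x", True), ("y", False)],
           [("y", True), ("z", True)]]

# Verifying a "guessed" assignment is fast and deterministic:
print(satisfies({"x": True, "y": False, "z": True}, clauses))   # True
```

The hard part (which nondeterminism “guesses” away) is finding the assignment; checking it, as here, takes time linear in the size of the predicate.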
To specify the classic Turing machine, you need to provide a set of rules that govern the transitions of this machine, originally in the following format:

⟨q, s, A, q′⟩

The standard transition function δ is embedded in this quadruple. Since each rule has four pieces of information, the machine is called a Quadruple Turing Machine:

“If you are in state q and the scanhead pointer is reading the symbol s from the tape, then perform the action A (either move Left, move Right, or print the symbol s′ over the symbol s, depending on how the user defined this rule), and then transition (move) to state q′.”
There are other variations of Turing machines that are equally powerful in terms of computability to the original Quadruple Turing machine:
- Quintuple Turing Machine: ⟨q, s, s′, D, q′⟩ - For the application of each rule, you MUST print s′ and then you MUST move the tape in direction D (Left or Right).
- One-directional infinite tape: Instead of a tape that is infinite in both directions, use a tape that is infinite in only one direction. There is then a definitive index for each tape cell (1, 2, 3, …); if the tape is infinite in both directions there is no natural first cell, so there is no index per se short of allowing negative indices.
- Two-dimensional tape: The tape is two-dimensional and infinite in both dimensions and in all directions (cf. the “professor” in the cartoon Tennessee Tuxedo And His Tales who had such a board to write equations on; analogy is an Excel Worksheet with an infinite number of rows and columns).
- Multi-track tape: A tape that allows storing multiple symbols in any one spot, one in each of the tracks, and therefore requires the head to read all of these characters together when scanning the current spot.
- Multi-tape machine: A machine with multiple infinite tapes and a scanhead pointer for each tape.
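As a sketch of how a quintuple rule (print first, then move) drives a machine on a one-way infinite tape (Variants 1 and 2 combined), consider this toy simulator; the rule set and names are illustrative assumptions, not taken from the lecture.

```python
BLANK = "B"   # assumed blank-cell symbol

def step(rules, state, tape, head):
    """Apply one quintuple rule (q, s) -> (s', D, q'); return the new
    configuration, or None if no rule applies (the machine halts)."""
    symbol = tape[head] if head < len(tape) else BLANK
    if (state, symbol) not in rules:
        return None                          # no applicable rule: halt
    write, direction, new_state = rules[(state, symbol)]
    tape = list(tape)
    while head >= len(tape):
        tape.append(BLANK)                   # bring new blank cells into view
    tape[head] = write                       # a quintuple rule prints FIRST ...
    head += 1 if direction == "R" else -1    # ... and then MUST move
    return new_state, tape, max(head, 0)     # clamp at the tape's leftmost cell

# Toy machine that flips 0 <-> 1 while sweeping right, halting at the first blank:
rules = {("q0", "0"): ("1", "R", "q0"), ("q0", "1"): ("0", "R", "q0")}
cfg = ("q0", list("0110"), 0)
while cfg is not None:
    last = cfg
    cfg = step(rules, *cfg)
print("".join(last[1]))   # prints '1001'
```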
Cook’s proof would be simpler if we used a combination of Variant 1 (quintuple) and Variant 2 (tape infinite in one direction only). cf. Cook’s original proof, which has no diagrams and relies completely on mathematical notation to represent the tape. The scanhead pointer was simulated by inserting a special letter into the middle of the string representing the tape (on the assumption that this letter is not part of the tape alphabet). Likewise, he had to insert a marker representing the current state. We will instead store them in separate columns and not with the tape.
To understand what Cook’s proof needs to verify, think of it as “playing computer” - manually simulating the Turing machine’s full execution step by step and recording the complete machine state at each clock cycle in a table. Each row below is a copy of what is on the tape at time t:

| Time | Scanhead Pos. | State | Tape Cell 1 | Tape Cell 2 | … | Tape Cell p(n) |
|---|---|---|---|---|---|---|
| 0 | 1 | q0 | Input loaded here | … | (blank) | (blank) |
| ⋮ | ⋮ | ⋮ | Each row contains the characters printed so far | ⋮ | ⋮ | ⋮ |
| p(n) | 1 | qf | Output printed here | … | (blank) | (blank) |
At t = 0: the input is stored at the beginning of the tape; the rest of the tape is blank (Davis convention). At t = p(n): the output is the only content remaining, at the beginning of the tape (Davis convention).

The tape cell indexing starts at 1 (the first cell location); 0 is reserved for the empty tape. Davis calls a snapshot the pair consisting of the clock cycle (TIME) and the data values of all active memory cells (SPACE), with all inactive cells assumed to be blank. In essence, each row is a Davis snapshot.
Table notes:
- The number of rows will not be more than p(n) (a polynomial in the input size n), since the problems under discussion are NP-Complete and MUST be computable nondeterministically in a polynomial amount of time.
- The reason we can definitively label the first column “Tape Cell 1” is that we are assuming Variant 2 - the tape is finite on one side and infinite in the other direction.
- The reason we only have to consider p(n) cells in Cook’s simulation is that we assumed Variant 1 (the quintuple machine), which can only advance one cell at a time per clock cycle, and NP allows at most p(n) cycles of time.
(This table is similar to the diagram found in the Wikipedia article on Cook’s theorem.)
Cook also realized these last three points and handled them using the original quadruple Turing machine, but we are using the equivalent Turing machine variants to make the proof easier. According to Variant 1, the quintuple machine must move Left/Right at each step of the processing. Since the processing only has p(n) amount of time, the machine can move at most p(n) cells to the right. Thus, only p(n) columns are needed in the table. Based on this, the table above only requires O(p(n)²) space, since there are p(n) rows (time) and p(n) columns (tape), and p(n) × p(n) = p(n)².
Consider the row for time t = 0. This is the status of the machine before processing begins. The input is loaded onto the beginning of the tape (the rest is blank by default). The scanhead pointer is set to position 1 and the initial state is q0. Some of the states are designated final states and Turing machine processing must end in one of these. In the above table, “qf” appears as the state at the end of processing - this is not necessarily one state; it can be any of the states with the designated final status. Turing had a convention that when processing is completed, ONLY the output appears on the tape (at the beginning) and the scanhead pointer points to the beginning of the output. The rest of the tape at the end of processing is blank (similar to when the processing starts).
If the nondeterministic Turing machine solved a given NP-Complete problem (such as Job Scheduling), then the above table has ALL the information you would need to understand how that Turing machine processed the input data and obtained the output. How can we verify that the above table documented the correct pathway to obtain a solution?
Verifying the initial and final rows is straightforward. The user placed the input, and the solution is by definition whatever is on the tape when the machine enters and remains in a final state. The question is how to verify the rows in between - to verify that, given a row at time t, the row at time t+1 follows according to the rules of the nondeterministic Turing machine. Equally important is that this verification occurs in a reasonable amount of time. In fact, the transition from row t to row t+1 can be verified in a linear amount of time. The key to understanding this is that since the Turing machine interacts with only one cell per time step, the change from one row to the next is VERY LITTLE.

Consider the tape at times t and t+1, with the scanhead at position k at time t. The quintuple machine can move at most one cell from the current position entering time t+1. Thus, the window of possible change in tape from time t to time t+1 consists of at most 6 cells (cells k−1, k, and k+1 across both rows t and t+1). But it is even simpler than that. Because we are assuming a quintuple machine, the printing of a symbol occurs BEFORE the moving of the scanhead. Therefore, if there is ANY change, it must occur in one place only - at scanhead position k, which can hold a different symbol in row t+1 than in row t. ALL OTHER SYMBOLS on the tapes for rows t and t+1 must be IDENTICAL. Verifying this only requires O(p(n)) time since, as shown above, the table only has p(n) columns for the tape portion.
If the rule applied at time t is a quintuple ⟨q, s, s′, D, q′⟩ with the scanhead at cell k:

- cell k may be overwritten with s′
- the scanhead moves to k−1 or k+1 (depending on D)
- all other cells are identical between rows t and t+1
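This row-to-row check can be sketched directly. The function below assumes rows are given as symbol lists; it checks only the “one changed cell, one-step head move” conditions, leaving aside the (constant-time) lookup confirming that an actual transition rule licenses the change.

```python
def rows_consistent(row_t, row_t1, head_t, head_t1):
    """Linear-time check that row t+1 can follow row t for a quintuple machine:
    the head moved exactly one cell, and only the cell under the OLD head
    position may differ between the two rows."""
    if abs(head_t1 - head_t) != 1:       # quintuple machine MUST move one cell
        return False
    for j, (a, b) in enumerate(zip(row_t, row_t1)):
        if j != head_t and a != b:       # change allowed only under the old head
            return False
    return True

print(rows_consistent(list("ab0cd"), list("ab1cd"), 2, 3))  # True: printed at 2, moved right
print(rows_consistent(list("ab0cd"), list("xb1cd"), 2, 3))  # False: cell 0 also changed
```

Each call touches every tape column once, which is the O(p(n)) bound claimed above.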
So, a nondeterministic Turing machine “guesses” what should be in every square of the above table (p(n)² “guesses”). Due to nondeterminism, this machine “guesses” correctly, but you need to verify the table in a polynomial amount of time. There are only a polynomial number of things to check to make sure that the table is consistent with a Turing machine and that each row follows directly from the previous row according to the transition rules. See the Cook-Levin theorem proof on Wikipedia for the exact Boolean checklist needed to verify this. That reference indicates that encoding the Turing machine into a Boolean Predicate requires roughly O(p(n)²) time; further research has shown that one could encode it into an equivalent Boolean Predicate in only quasilinear O(p(n) log p(n)) time.
The above table would also be a proof for Savitch’s Theorem (1970) that NSPACE(p(n)) ⊆ DSPACE(p(n)²), at least for any polynomial p(n). (Savitch’s own proof extends to NSPACE(f(n)) ⊆ DSPACE(f(n)²) for general space bounds f(n).) What is SPACE(f(n))? It is the set of all problems that can be solved using memory of size O(f(n)), regardless of how much time it takes. DSPACE says the problems are solved deterministically and NSPACE means they are solved nondeterministically. Regardless, the above table is p(n) × p(n) (i.e., O(p(n)²)) in size, and since the quintuple Turing machine must print one cell at a time per clock step, the maximum space required by any given row is p(n). Each row represents a nondeterministic Turing machine step, and if one could “guess” the correct value for each cell, then the table as a whole gives a Turing algorithm for solving the problem. Now, since DSPACE/NSPACE does NOT care about time, one can brute-force fill in the table by deterministically considering every possible filling of the table (there are exponentially many - up to c^(p(n)²) for a c-symbol alphabet - but since DSPACE/NSPACE does not care about time, this is acceptable) until a valid filling is found: one where the initial row has the correct input, the last row has the correct output, and each row is consistent with the previous/next row according to the rules of the nondeterministic Turing machine. Thus, this table implements NSPACE(p(n)) and can be simulated in DSPACE(p(n)²), at least for any polynomial p(n).
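The brute-force idea can be sketched as follows; `valid` is a placeholder for the three checks named above (correct input row, correct output row, row-to-row consistency), and the toy predicate at the end just exercises the plumbing.

```python
from itertools import product

def exists_valid_table(symbols, rows, cols, valid):
    """Deterministically enumerate every possible filling of a rows x cols
    table over the given symbols, testing each with the `valid` predicate.
    Exponentially slow, but only one candidate table (O(rows*cols) space)
    is held in memory at a time - the point of the Savitch-style argument."""
    for flat in product(symbols, repeat=rows * cols):   # exponential in rows*cols
        table = [flat[r * cols:(r + 1) * cols] for r in range(rows)]
        if valid(table):
            return True
    return False

# Toy use: the placeholder predicate just demands an all-"0" table.
print(exists_valid_table("01", 2, 2,
                         lambda t: all(c == "0" for row in t for c in row)))  # True
```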
Having discussed nondeterministic space, a second theorem is worthwhile to mention. The Immerman-Szelepcsényi theorem (1987, for which they shared the Gödel Prize in 1995) states that nondeterministic space complexity classes are closed under complementation. (Recall we have previously pointed out that knowing who is a member can be easier to determine than knowing who is not a member.) They actually proved this independently. Mathematically, the theorem states that:

NSPACE(s(n)) = co-NSPACE(s(n)) for any s(n) ≥ log n

A specific case of this theorem (when s(n) = log n) is that NL = co-NL, where NL = NSPACE(log n) and co-NL is its complement class.