Theory of Computational Complexity


1 Theory of Computational Complexity
Zhenwei Ding Takahashi Lab. M1 October 29, 2012

2 Outline
Chapter 8:
8.4 The Classes L and NL
8.5 NL-Completeness
8.6 NL Equals coNL

3 8.4 The Classes L and NL
From previous presentations, we have learned about time and space complexity bounds that are at least linear:
P, NP and NP-complete
PSPACE, NPSPACE and PSPACE-complete
Savitch's theorem
The complexity class EXPTIME (sometimes called EXP) is the set of all decision problems solvable by a deterministic Turing machine in O(2^f(n)) time, where f(n) is a polynomial function of n.
P ⊆ NP ⊆ PSPACE = NPSPACE ⊆ EXPTIME
How about the sublinear cases?

4 8.4 The Classes L and NL
Sublinear bounds for time complexity: in time complexity, a sublinear bound is insufficient even for reading the entire input, so such bounds are seldom considered.
Sublinear bounds for space complexity: in space complexity, the machine can still read the entire input even though it does not have enough space to store it.
Let's see some examples!

5 8.4 The Classes L and NL
Linear space complexity: input of length 8, 8 storage units.
Sublinear space complexity: input of length 8, only 4 storage units.
How can we cope with insufficient space? New problems emerge!

6 New TM Model
A Turing machine with two tapes: a read-only input tape and a read/write work tape.
The read-only input tape gives the machine access to the entire input, but the input head can only detect symbols, not change them, like a CD-ROM.
The work tape may be read and written in the usual way.
Only the cells scanned on the work tape contribute to the space complexity.
In a nutshell: sublinear space algorithms allow the computer to manipulate the data without storing all of it!

7 8.4 The Classes L and NL
DEFINITION 8.17
L is the class of languages that are decidable in logarithmic space on a deterministic Turing machine. In other words, L = SPACE(log n).
NL is the class of languages that are decidable in logarithmic space on a nondeterministic Turing machine. In other words, NL = NSPACE(log n).

8 Why log n?
Why do we use log n instead of, say, n or (log n)^2?
Logarithmic space is large enough to solve a number of interesting computational problems.
It has attractive mathematical properties, such as robustness even when the machine model or the input encoding method changes (for instance, switching between binary and octal encodings changes the space needed only by a constant factor).
Pointers into the input may be represented in logarithmic space, for example by storing the pointer as a binary number.
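For illustration, a pointer into an n-symbol input needs only about log2 n bits. This is a small sketch with a hypothetical helper name `pointer_bits`:

```python
import math

def pointer_bits(n: int) -> int:
    """Bits needed to index any of n input positions (0 .. n-1)."""
    return max(1, math.ceil(math.log2(n)))

# A pointer into a 1,000,000-symbol input fits in just 20 bits:
assert pointer_bits(1_000_000) == 20
assert pointer_bits(8) == 3
```

This is why a constant number of input pointers and counters is exactly what a log-space machine can afford to keep on its work tape.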

9 8.4 The Classes L and NL
EXAMPLE 8.18  A = {0^k 1^k | k ≥ 0}
Let's compare a linear space algorithm with a sublinear space algorithm for A.
Linear Space Algorithm:
1. Scan across the tape and reject if a 0 is found to the right of a 1.
2. Repeat stage 3 while both 0s and 1s remain on the tape.
3. Scan across the tape, crossing off a single 0 and a single 1.
4. If 0s still remain after all the 1s have been crossed off, or if 1s still remain after all the 0s have been crossed off, reject. Otherwise, if neither 0s nor 1s remain on the tape, accept.
This algorithm needs linear space to record which positions have been crossed off.

10 8.4 The Classes L and NL
EXAMPLE 8.18  Log Space Algorithm
The first step is the same as in the previous linear space algorithm: scan the input and reject if a 0 appears to the right of a 1.
Then the machine counts the number of 0s and, separately, the number of 1s in binary on the work tape, and accepts if and only if the two counts are equal.
The only space required is that used to record the two counters, and in binary each counter uses only logarithmic space, so the algorithm runs in O(log n) space.
Therefore, A ∈ L.
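The two-counter algorithm can be sketched in Python (a hypothetical helper `in_A`; Python integers stand in for the binary counters on the work tape, and the string `w` plays the role of the read-only input tape):

```python
def in_A(w: str) -> bool:
    """Decide A = {0^k 1^k : k >= 0} using only two counters
    (O(log n) bits of work storage); the input w is never modified."""
    # Stage 1: reject if a 0 appears to the right of a 1.
    seen_one = False
    for ch in w:
        if ch == '1':
            seen_one = True
        elif seen_one:          # a 0 after some 1
            return False
    # Count 0s and 1s; each counter needs only O(log n) bits.
    zeros = sum(1 for ch in w if ch == '0')
    ones = sum(1 for ch in w if ch == '1')
    return zeros == ones

assert in_A("000111") and in_A("")
assert not in_A("010") and not in_A("0011 1")
```

The crossing-off algorithm destroys the input, which the read-only tape forbids; counting sidesteps that entirely.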

11 8.4 The Classes L and NL
EXAMPLE 8.19
PATH = {<G, s, t> | G is a directed graph that has a directed path from s to t}
Linear Space Algorithm:
1. Place a mark on node s.
2. Repeat the following until no additional nodes are marked: scan all the edges of G; if an edge (a, b) is found going from a marked node a to an unmarked node b, mark node b.
3. If t is marked, accept. Otherwise, reject.
This algorithm needs linear space because, in the worst case, almost all the nodes will be marked.
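The marking algorithm can be sketched as follows (a hypothetical `path_marking` helper; the `marked` set plays the role of the marks, and may grow to hold all n nodes, which is why the space used is linear):

```python
def path_marking(n, edges, s, t):
    """Linear-space marking algorithm for PATH: repeatedly scan the
    edge list and mark any node reachable from an already-marked node."""
    marked = {s}
    changed = True
    while changed:
        changed = False
        for a, b in edges:
            if a in marked and b not in marked:
                marked.add(b)      # the set may grow to n nodes: linear space
                changed = True
    return t in marked

# Graph with path s=0 -> 1 -> 3 = t, and a node 2 not reachable from s:
assert path_marking(4, [(0, 1), (1, 3), (2, 3)], 0, 3)
assert not path_marking(4, [(0, 1), (1, 3), (2, 3)], 0, 2)
```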

12 8.4 The Classes L and NL
EXAMPLE 8.19  Log Space Algorithm
Let m be the number of nodes in graph G.
Start from node s, nondeterministically guessing the path.
Record only the current node at each step on the work tape.
Select nondeterministically the next node from among those pointed at by the current node.
Accept if the machine reaches t; reject if a branch runs for more than m steps.
Thus, PATH ∈ NL.
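Under the usual reading of nondeterminism as "some branch accepts", one branch can be simulated by random guessing. This is only an illustrative sketch (hypothetical name `guess_path_branch`); a single call simulates one branch, and an accepting branch exists iff a path from s to t exists:

```python
import random

def guess_path_branch(m, adj, s, t, rng=random):
    """One nondeterministic branch of the NL algorithm for PATH.
    Only the current node (O(log n) bits) and a step counter are
    stored; the next node is guessed; give up after m steps."""
    current = s
    for _ in range(m):                 # m = number of nodes suffices
        if current == t:
            return True                # this branch accepts
        successors = adj.get(current, [])
        if not successors:
            return False               # dead end: this branch rejects
        current = rng.choice(successors)  # the nondeterministic guess
    return current == t

adj = {0: [1], 1: [3]}
# Some branch finds the path 0 -> 1 -> 3:
assert any(guess_path_branch(4, adj, 0, 3) for _ in range(100))
assert not guess_path_branch(4, adj, 0, 2)  # no branch reaches node 2
```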

13 8.4 The Classes L and NL
Our earlier claim that any f(n) space bounded TM also runs in time 2^O(f(n)) is no longer true for very small space bounds. For example, a Turing machine that uses O(1) (constant) space may still run for n steps. We need a new time bound for sublinear space algorithms.
DEFINITION 8.20
If M is a Turing machine that has a separate read-only input tape and w is an input, a configuration of M on w is a setting of the state, the work tape, and the positions of the two tape heads. The input w is not a part of the configuration of M on w.

14 8.4 The Classes L and NL
With this new notion of configuration, let's find a time bound.
If M runs in f(n) space and w is an input of length n, the number of configurations of M on w is n·2^O(f(n)).
For a Turing machine M with c states and g tape symbols:
- f(n) positions for the work tape head
- g^f(n) strings that can appear on the work tape
- n positions for the input head
Therefore, an upper bound on the number of configurations, and hence on the running time of M on w, is c·n·f(n)·g^f(n), or n·2^O(f(n)).
For f(n) ≥ log n, the upper bound on the running time is 2^O(f(n)), because n·2^O(f(n)) is 2^O(f(n)) when f(n) ≥ log n.
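As a sanity check, the factors above can be multiplied out explicitly (c, g, and f as defined on this slide):

```latex
% A configuration of M on w is determined by:
%   the state               : c choices
%   the input head position : n choices
%   the work-tape head      : f(n) choices
%   the work-tape contents  : g^{f(n)} choices
\[
  \#\text{configurations} \;\le\; c \cdot n \cdot f(n) \cdot g^{f(n)}
  \;=\; n \cdot 2^{O(f(n))}.
\]
% For f(n) >= log n we have n <= 2^{f(n)}, so the bound collapses:
\[
  n \cdot 2^{O(f(n))} \;=\; 2^{O(f(n))}.
\]
```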

15 8.4 The Classes L and NL
Savitch's theorem shows that we can convert nondeterministic TMs to deterministic TMs while increasing the space complexity f(n) by only a squaring, provided that f(n) ≥ n. Let's recall its proof; we can extend it to log space. The proof idea is roughly the same, but there are some differences:
- f(n) ≥ log n instead of f(n) ≥ n
- TMs with a read-only input tape are used
- Instead of referring to configurations of N, we use configurations of N on w
- Storing a configuration of N on w uses O(f(n)) space
The remainder of the proof is the same.

16 8.4 The Classes L and NL
A general picture of the proof: the recursive yieldability algorithm tests whether one configuration can reach another in at most t steps by trying every possible middle configuration. In order to keep track of which branch it is currently trying, so that it can go on to the next one, it stores one configuration at each level of the recursion; each configuration needs O(f(n)) space. The space used for the first half of a branch is reused for the second half. Here t is the maximum time that the nondeterministic TM may use on any branch, t = 2^O(f(n)), so the recursion depth is log t = O(f(n)) and the total space used is O(f(n)^2).

17 8.5 NL-Completeness
We know that PATH ∈ NL, but we cannot prove that PATH is not in L. Like the question of whether P = NP, we also have the open question of whether L = NL.

18 8.5 NL-Completeness
Like complete languages for other complexity classes, NL-complete languages are, in a certain sense, the most difficult languages in NL. NL-completeness is introduced in order to study the L versus NL question:
If L ≠ NL, then no NL-complete language is in L.
If any NL-complete language is in L, then L = NL.

19 8.5 NL-Completeness
Our goal is to define an NL-complete language: one which is in NL and to which every other language in NL is reducible. Previously, the word reducible meant polynomial time reducible. However, all problems in NL are solvable in polynomial time (NL is contained in P), so under polynomial time reducibility the reduction itself could solve the problem, and nearly every language in NL would be trivially complete. Therefore, polynomial time reducibility is too strong for NL, and we need a weaker type of reducibility.

20 8.5 NL-Completeness
DEFINITION 8.21
A log space transducer is a Turing machine with a read-only input tape, a write-only output tape, and a read/write work tape. The work tape may contain O(log n) symbols. A log space transducer M computes a function f: Σ* → Σ*, where f(w) is the string remaining on the output tape after M halts when it is started with w on its input tape. We call f a log space computable function. Language A is log space reducible to language B, written A ≤_L B, if A is mapping reducible to B by means of a log space computable function f.

21 8.5 NL-Completeness
DEFINITION 8.22
A language B is NL-complete if
1. B ∈ NL, and
2. every A in NL is log space reducible to B.

22 8.5 NL-Completeness
THEOREM 8.23  If A ≤_L B and B ∈ L, then A ∈ L.
Proof idea: Similar to the approach we used in Theorem 7.31, compute f(w), the output of the log space reduction from A to B on input w, and then run B's log space decider with f(w) as its input. However, the storage required for f(w) may be too large to fit within the log space bound, so we need to modify this approach.

23 8.5 NL-Completeness
Instead of storing f(w), A's machine M_A computes individual symbols of f(w) as requested by B's machine M_B. In the simulation, M_A keeps track of where M_B's input head would be on f(w). Every time M_B moves, M_A restarts the computation of f on w from the beginning and ignores all the output except for the desired location of f(w).
COROLLARY 8.24
If any NL-complete language is in L, then L = NL, because every language in NL is log space reducible to that NL-complete language.
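The recomputation trick can be sketched as follows (hypothetical names; a Python generator stands in for the transducer's symbol-by-symbol output tape):

```python
def nth_output_symbol(transducer, w, i):
    """Recomputation trick: instead of storing f(w), rerun the log-space
    transducer from the start and discard every output symbol except
    the i-th one.  Only the pointer i (O(log n) bits) is stored."""
    for pos, symbol in enumerate(transducer(w)):
        if pos == i:
            return symbol
    return None  # i is past the end of f(w)

# Toy stand-in transducer: f(w) reverses w.  (This Python form is not
# itself log-space; it merely models emitting f(w) one symbol at a time.)
def reverse_transducer(w):
    for j in range(len(w) - 1, -1, -1):
        yield w[j]

# When M_B asks for symbol 0 of f("abc") = "cba", M_A recomputes:
assert nth_output_symbol(reverse_transducer, "abc", 0) == "c"
assert nth_output_symbol(reverse_transducer, "abc", 2) == "a"
```

The price of the trick is time (f is recomputed once per head move of M_B), but the space stays logarithmic, which is all Theorem 8.23 needs.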

24 8.5 NL-Completeness
THEOREM 8.25  PATH is NL-complete.
Proof idea: We have already shown that PATH is in NL, so we only need to show that every language A in NL is log space reducible to PATH. The idea is to construct a graph that represents the computation of the nondeterministic log space TM for A. Nodes correspond to configurations of the NTM on input w. A directed edge goes from one configuration to another if the first can yield the second. Hence the machine accepts w whenever some path from the node corresponding to the start configuration leads to the node corresponding to the accepting configuration.

25 8.5 NL-Completeness
Let's see this in more detail. Suppose that NTM M decides A in O(log n) space. Given an input w, we construct <G, s, t> in log space, where G is a directed graph that contains a path from s to t if and only if M accepts w.

26 8.5 NL-Completeness
All nodes correspond to configurations of M on w.
An edge from i to j represents that configuration j is one of the possible next configurations of M starting from configuration i.
Node s is the start configuration of M on w.
Node t is the accepting configuration of M on w.

27 8.5 NL-Completeness
When the nondeterministic computation tree contains rejecting branches, we map each rejecting configuration to a node with no outgoing edges.

28 8.5 NL-Completeness
This mapping reduces A to PATH because whenever M accepts its input, some branch of its computation accepts, which corresponds to a path from the start configuration s to the accepting configuration t in G. Conversely, if some path exists from s to t in G, then some computation branch of M on w accepts.

29 8.5 NL-Completeness
It remains to show that the reduction operates in log space. Space is needed only while producing G's nodes and edges.
1. Nodes are configurations of M on w, and each can be represented in c·log n space for some constant c. A log space transducer sequentially goes through all possible strings of length c·log n, tests whether each is a legal configuration of M on w, and outputs those that are.
2. Log space is sufficient for verifying that a configuration c1 of M on w can yield configuration c2, because the transducer only needs to examine the actual tape contents under the head locations given in c1 to determine whether M's transition function would give configuration c2 as a result. The transducer tries all pairs (c1, c2) in turn to find which qualify as edges of G. Those that do are added to the output tape.

30 8.5 NL-Completeness
COROLLARY 8.26  NL ⊆ P.
Theorem 8.25 shows that any language in NL is log space reducible to PATH. Recall that a TM using space f(n) runs in time n·2^O(f(n)); substituting f(n) = O(log n) gives polynomial time, so a log space reduction runs in polynomial time. Therefore any language in NL is polynomial time reducible to PATH. By Theorem 7.14, PATH ∈ P, and every language that is polynomial time reducible to a language in P is also in P. Hence NL ⊆ P.

31 8.5 NL-Completeness
Log space reducibility may appear somewhat restrictive, but in fact it is adequate for most reductions in complexity theory. For example, Theorem 8.9 showed that every PSPACE problem is polynomial time reducible to TQBF. These reductions may in fact be computed using only log space, so TQBF is also complete for PSPACE under log space reducibility. Corollary 9.6 will show that NL ⊊ PSPACE, which means TQBF ∉ NL.

32 8.6 NL Equals coNL
DEFINITION OF coNL
coNL is the class of languages whose complements are in NL.
Example:
PATH = {<G, s, t> | G is a directed graph that has a directed path from s to t}
PATH̄ (the complement of PATH) = {<G, s, t> | G is a directed graph that does not contain a directed path from s to t}

33 8.6 NL Equals coNL
The classes NP and coNP are generally believed to be different: NP ≠ coNP (conjectured). Then how about NL and coNL? Surprisingly, NL = coNL.

34 8.6 NL Equals coNL
THEOREM 8.27  NL = coNL.
Proof idea: Because PATH is NL-complete, if PATH̄ is also in NL, then every problem in coNL is also in NL. First, assume that we are given c, the number of nodes in G that are reachable from s; we show how to use c to solve PATH̄. Then we show how to compute c.

35 8.6 NL Equals coNL
Given c, the NTM M goes through all the nodes of G and nondeterministically guesses whether each one is reachable from s. If a node u is guessed to be reachable, M attempts to verify this guess by guessing a path of length m or less from s to u. M rejects when a branch fails to verify such a guess, or when a branch guesses that t is reachable. M counts the number of nodes that have been verified to be reachable. When a branch has gone through all the nodes, it checks whether that count equals c; it rejects if not, and otherwise accepts.

36 8.6 NL Equals coNL
In other words, if M nondeterministically selects exactly c nodes reachable from s, not including t, and proves that each is reachable from s by guessing a path, M knows that the remaining nodes, including t, are not reachable, so it can accept.

37 8.6 NL Equals coNL
Now we show how to compute c. For each i from 0 to m, define A_i to be the collection of nodes that are at a distance of i or less from s.
A_0 = {s}
A_i ⊆ A_{i+1}
A_m contains all nodes that are reachable from s.
Let c_i be the number of nodes in A_i; then the desired value is c = c_m.

38 8.6 NL Equals coNL
We calculate c_{i+1} from c_i. The algorithm goes through all the nodes of G, determines whether each is a member of A_{i+1}, and counts the members.

39 8.6 NL Equals coNL
To determine whether a node v is in A_{i+1}, we use an inner loop to go through all the nodes of G and guess whether each node u is in A_i. (We guess rather than store A_i because we do not have enough space to store it.) Each positive guess is verified by guessing a path of length at most i from s. For each node u verified to be in A_i, the algorithm tests whether (u, v) is an edge of G. If it is an edge, v is in A_{i+1}. Additionally, the number of nodes verified to be in A_i is counted. At the completion of the inner loop, if the total number of nodes verified to be in A_i is not c_i, then not all of A_i has been found, so this computation branch rejects. If the count equals c_i and v has not yet been shown to be in A_{i+1}, we conclude that it isn't in A_{i+1}, and we go on to the next v in the outer loop.

40 8.6 NL Equals coNL
The algorithm for computing c = c_m:
1. Let c_0 = 1.  [A_0 = {s} has 1 node]
2. For i = 0 to m-1:  [compute c_{i+1} from c_i]
3.   Let c_{i+1} = 1.  [c_{i+1} counts the nodes in A_{i+1}; s itself always counts]
4.   For each node v ≠ s in G:  [check whether v is in A_{i+1}]
5.     Let d = 0.  [d recounts A_i]
6.     For each node u in G:  [check whether u is in A_i]
7.       Nondeterministically either perform or skip these steps:
8.         Nondeterministically follow a path of length at most i from s and reject if it doesn't end at u.  [verified that u is in A_i]
9.         Increment d.
10.        If (u, v) is an edge of G, increment c_{i+1} and go to stage 4 with the next v.  [verified that v is in A_{i+1}]
11.    If d ≠ c_i, then reject.  [check whether all of A_i was found]

41 8.6 NL Equals coNL
With c_m now known, decide whether t is reachable:
12. Let d = 0.  [d recounts A_m]
13. For each node u in G:  [check whether u is in A_m]
14.   Nondeterministically either perform or skip these steps:
15.     Nondeterministically follow a path of length at most m from s and reject if it doesn't end at u.  [verified that u is in A_m]
16.     If u = t, then reject.  [found a path from s to t]
17.     Increment d.
18. If d ≠ c_m, then reject; otherwise, accept.  [check that all of A_m was found]
This algorithm only needs to store u, v, c_i, c_{i+1}, d, i, and a pointer to the head of a path (node s) at any given time. Hence it runs in log space.
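For intuition only, the level sets A_i and their counts c_i can be computed deterministically with a sketch like this (hypothetical helper `reachable_counts`; a real NL machine cannot store the sets A_i, which is exactly why the algorithm above re-verifies membership nondeterministically instead):

```python
def reachable_counts(n, adj, s):
    """Deterministically compute c_i = |A_i|, where A_i is the set of
    nodes within distance i of s.  These are the counts the
    nondeterministic inductive-counting algorithm certifies."""
    A = {s}
    counts = [len(A)]                       # c_0 = 1
    for _ in range(n - 1):                  # i = 0 .. m-1, with m = n-1
        A = A | {v for u in A for v in adj.get(u, [])}
        counts.append(len(A))               # c_{i+1}
    return counts

adj = {0: [1], 1: [3], 2: [3]}              # node 2 is not reachable from s = 0
assert reachable_counts(4, adj, 0) == [1, 2, 3, 3]
# c_m = 3 < 4 nodes, so some node (here node 2) is unreachable from s.
```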

42 8.6 NL Equals coNL
L ⊆ NL = coNL ⊆ P ⊆ PSPACE
We don't know whether any of these containments are proper. However, in Corollary 9.6 we prove NL ⊊ PSPACE, so either coNL ⊊ P or P ⊊ PSPACE must hold.

