CS151 Complexity Theory
Lecture 5
April 13, 2015
Introduction
Power from an unexpected source?
– we know P ≠ EXP, which implies there is no poly-time algorithm for Succinct CVAL
– poly-size Boolean circuits for Succinct CVAL??
– does NP have linear-size, log-depth Boolean circuits??
Outline
– Boolean circuits and formulas
– uniformity and advice
– the NC hierarchy and parallel computation
– the quest for circuit lower bounds
– a lower bound for formulas
Boolean circuits
circuit C:
– directed acyclic graph
– nodes: AND (∧), OR (∨), NOT (¬), and the variables x_1, x_2, …, x_n
C computes a function f:{0,1}^n → {0,1} in the natural way
– identify C with the function f it computes
Boolean circuits
– size = # gates
– depth = longest path from input to output
– formula (or expression): the graph is a tree
every function f:{0,1}^n → {0,1} is computable by a circuit of size at most O(n·2^n)
– an AND of n literals for each x such that f(x) = 1
– an OR of up to 2^n such terms
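To make the O(n·2^n) upper bound concrete, here is a small Python sketch (mine, not from the lecture) that builds the DNF of an arbitrary truth table: one AND of n literals per accepting input, ORed together, checked here on 3-bit parity.

```python
from itertools import product

def dnf_from_truth_table(f, n):
    """Collect the minterms: the inputs on which f evaluates to 1."""
    return [x for x in product([0, 1], repeat=n) if f(x)]

def eval_dnf(minterms, x):
    """OR of up to 2^n terms; each term is the AND of n literals matching one minterm."""
    return int(any(all(xi == mi for xi, mi in zip(x, m)) for m in minterms))

# Example: 3-bit parity.
n = 3
parity = lambda x: sum(x) % 2
terms = dnf_from_truth_table(parity, n)
assert all(eval_dnf(terms, x) == parity(x) for x in product([0, 1], repeat=n))
print("leaf-size:", n * len(terms), "<= n * 2^n =", n * 2 ** n)
```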
Circuit families
circuit family: a circuit for each input length: C_1, C_2, C_3, … = "{C_n}"
"{C_n} computes f" iff for all x: C_{|x|}(x) = f(x)
Definition: a circuit family {C_n} is logspace uniform iff some TM M outputs C_n on input 1^n and runs in O(log n) space
TMs that take advice
a family {C_n} without the uniformity constraint is called "non-uniform"
regard "non-uniformity" as a limited resource, just like time and space, as follows:
– add a read-only "advice" tape to TM M
– M "decides L with advice A(n)" iff M(x, A(|x|)) accepts ⇔ x ∈ L
– note: A(n) depends only on |x|
TMs that take advice
Definition: TIME(t(n))/f(n) = the set of languages L for which:
– there exists A(n) with |A(n)| ≤ f(n)
– a TM M decides L with advice A(n) in time t(n)
most important such class: P/poly = ∪_k TIME(n^k)/n^k
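As a toy illustration of advice (my example, not the lecture's): for a unary language, one advice bit per input length already suffices, which is why even undecidable unary languages lie in P/poly. The advice table below is hypothetical.

```python
def decide_with_advice(x, advice_bit):
    """M(x, A(|x|)): for a unary language, accept 1^n iff the advice bit for length n is 1."""
    if any(c != "1" for c in x):
        return False            # strings outside {1}* are never in a unary language
    return advice_bit == 1

# Hypothetical advice table: A(n) is one bit, for input lengths n = 0, 1, 2, ...
A = [0, 1, 1, 0, 1, 0]
print(decide_with_advice("111", A[len("111")]))   # the machine sees only |x|, not x itself
```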
Uniformity
Theorem: P = the class of languages decidable by logspace-uniform, polynomial-size circuit families {C_n}.
Proof:
– (⊆) already saw this direction
– (⊇) on input x, generate C_{|x|}, evaluate it, and accept iff the output is 1
TMs that take advice
Theorem: L ∈ P/poly iff L is decided by a family of (non-uniform) polynomial-size circuits.
Proof:
– (⇒) build C_n from the CVAL construction; hardwire the advice A(n)
– (⇐) define A(n) = description of C_n; on input x, the TM simulates C_{|x|}(x)
Approach to P vs. NP
believe NP ⊄ P
– equivalent: "NP does not have uniform, polynomial-size circuits"
even believe NP ⊄ P/poly
– equivalent: "NP (or, e.g., SAT) does not have polynomial-size circuits"
– this implies P ≠ NP
– many believe this is the best hope for proving P ≠ NP
Parallelism
uniform circuits allow a refinement of polynomial time; for a circuit C:
– depth ≈ parallel time
– size ≈ parallel work
Parallelism
the NC ("Nick's Class") hierarchy (of logspace-uniform circuits):
– NC^k = O(log^k n) depth, poly(n) size
– NC = ∪_k NC^k
captures "efficiently parallelizable problems"
not realistic? the definition is overly generous, but that is fine for proving problems non-parallelizable
Matrix multiplication
n×n matrix A · n×n matrix B = n×n matrix AB
what is the parallel complexity of this problem?
– work = poly(n)
– time = log^k(n)? (for which k?)
Matrix multiplication
two details
– arithmetic matrix multiplication: A = (a_{i,k}), B = (b_{k,j}), (AB)_{i,j} = Σ_k a_{i,k} · b_{k,j}
  …vs. Boolean matrix multiplication: (AB)_{i,j} = ∨_k (a_{i,k} ∧ b_{k,j})
– single output bit: to make matrix multiplication a language, on input A, B, (i, j) output (AB)_{i,j}
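A short sketch (my notation, not from the slides) contrasting the two products on 0/1 matrices: the arithmetic product uses (+, ×), the Boolean product replaces them with (OR, AND).

```python
def mat_mult_arith(A, B):
    """Arithmetic product: (AB)[i][j] = sum_k A[i][k] * B[k][j]."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_mult_bool(A, B):
    """Boolean product: (AB)[i][j] = OR_k (A[i][k] AND B[k][j])."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n))) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(mat_mult_arith(A, B))  # [[2, 1], [1, 1]]
print(mat_mult_bool(A, B))   # [[1, 1], [1, 1]]
```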
Matrix multiplication
Boolean matrix multiplication is in NC^1
– level 1: compute the n ANDs a_{i,k} ∧ b_{k,j}
– next log n levels: a tree of ORs
– n^2 such subtrees, one for each pair (i, j)
– select the correct one and output it
Boolean formulas and NC^1
The previous circuit is actually a formula. This is no accident:
Theorem: L ∈ NC^1 iff L is decidable by a polynomial-size uniform family of Boolean formulas.
Note: we measure formula size by leaf-size.
Boolean formulas and NC^1
Proof:
– (⇒) convert an NC^1 circuit into a formula recursively (duplicating shared subcircuits)
  note: this is a logspace transformation (stack depth log n, one bit per stack record: "left" or "right")
Boolean formulas and NC^1
– (⇐) convert a formula of size n into a formula of depth O(log n)
  note: size ≤ 2^depth, so the new formula has poly(n) size
key transformation: pick a subformula D of the formula C and rewrite C as (D ∧ C_1) ∨ (¬D ∧ C_0), where C_b is C with the subtree D replaced by the constant b
Boolean formulas and NC^1
– choose D to be any minimal subtree with size at least n/3; minimality implies size(D) ≤ 2n/3
– define T(n) = the maximum depth required for any size-n formula
– C_1, C_0, D all have size ≤ 2n/3, so T(n) ≤ T(2n/3) + 3, which implies T(n) ≤ O(log n)
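Unrolling the recurrence makes the O(log n) bound explicit; a sketch of the calculation (the additive 3 is taken, as on the slide, to be the constant number of gate levels added by the (D ∧ C_1) ∨ (¬D ∧ C_0) step):

```latex
T(n) \;\le\; T\!\left(\tfrac{2n}{3}\right) + 3
     \;\le\; T\!\left(\left(\tfrac{2}{3}\right)^{2} n\right) + 2 \cdot 3
     \;\le\; \cdots
     \;\le\; T(O(1)) + 3\log_{3/2} n \;=\; O(\log n),
\qquad \text{so the rebalanced formula has size } \le 2^{O(\log n)} = \mathrm{poly}(n).
```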
Relation to other classes
Clearly NC ⊆ P
– recall: P = uniform poly-size circuits
NC^1 ⊆ L
– on input x, compose logspace algorithms for: generating C_{|x|}, converting it to a formula, and FVAL (formula evaluation)
Relation to other classes
NL ⊆ NC^2: S-T-CONN ∈ NC^2
– given G = (V, E) and vertices s, t
– A = adjacency matrix (with self-loops)
– (A^2)_{i,j} = 1 iff there is a path of length ≤ 2 from node i to node j
– (A^n)_{i,j} = 1 iff there is a path of length ≤ n from node i to node j
– compute A^n with a depth-log n tree of Boolean matrix multiplications (repeated squaring) and output entry (s, t)
– each multiplication has depth log n, so log^2 n depth in total
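A Python sketch (illustrative only; names are mine) of the repeated-squaring idea: ⌈log n⌉ Boolean squarings of the adjacency matrix with self-loops decide reachability.

```python
from math import ceil, log2

def bool_square(A):
    """Boolean matrix square: entry (i, j) = OR_k (A[i][k] AND A[k][j])."""
    n = len(A)
    return [[int(any(A[i][k] and A[k][j] for k in range(n))) for j in range(n)]
            for i in range(n)]

def reachable(adj, s, t):
    n = len(adj)
    # add self-loops so that A captures paths of length <= 1
    A = [[1 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    for _ in range(ceil(log2(n)) if n > 1 else 0):
        A = bool_square(A)          # after i squarings: paths of length <= 2^i
    return A[s][t] == 1

# Path 0 -> 1 -> 2 -> 3, and nothing back.
adj = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
print(reachable(adj, 0, 3), reachable(adj, 3, 0))  # True False
```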
NC vs. P
can every efficient algorithm be efficiently parallelized?
NC = P?
P-complete problems are the least likely to be parallelizable
– if a P-complete problem is in NC, then P = NC
– why? we use logspace reductions to show problems P-complete, and L ⊆ NC; so composing the reduction with an NC circuit for the P-complete problem puts all of P in NC
NC vs. P
can every uniform, poly-size Boolean circuit family be converted into a uniform, poly-size Boolean formula family?
NC^1 = P?
NC hierarchy collapse
NC^1 ⊆ NC^2 ⊆ NC^3 ⊆ NC^4 ⊆ … ⊆ NC
Exercise: if NC^i = NC^(i+1), then NC = NC^i
(prove it for the non-uniform versions of the classes)
Lower bounds
Recall: "NP does not have polynomial-size circuits" (NP ⊄ P/poly) implies P ≠ NP
major goal: prove lower bounds on (non-uniform) circuit size for problems in NP
– we believe the truth is exponential
– super-polynomial would be enough for P ≠ NP
– best bound known: 4.5n
– we don't even have super-polynomial bounds for problems in NEXP
Lower bounds
lots of work on lower bounds for restricted classes of circuits
we'll see two such lower bounds:
– formulas
– monotone circuits
Shannon's counting argument
frustrating fact: almost all functions require huge circuits
Theorem (Shannon): With probability at least 1 − o(1), a random function f:{0,1}^n → {0,1} requires a circuit of size Ω(2^n/n).
Shannon's counting argument
Proof (counting):
– B(n) = 2^(2^n) = # functions f:{0,1}^n → {0,1}
– the # of circuits with n inputs and size s is at most C(n, s) ≤ ((n+3)·s^2)^s
  (s gates; n+3 choices of gate type; 2 inputs per gate)
Shannon's counting argument
– C(n, c·2^n/n) < ((2n)·c^2·2^(2n)/n^2)^(c·2^n/n) < o(1)·2^(2c·2^n) ≤ o(1)·2^(2^n)  (if c ≤ ½)
– so the probability that a random function has a circuit of size s = (½)·2^n/n is at most C(n, s)/B(n) < o(1)
Shannon's counting argument
frustrating fact: almost all functions require huge formulas
Theorem (Shannon): With probability at least 1 − o(1), a random function f:{0,1}^n → {0,1} requires a formula of size Ω(2^n/log n).
Shannon's counting argument
Proof (counting):
– B(n) = 2^(2^n) = # functions f:{0,1}^n → {0,1}
– the # of formulas with n inputs and size s is at most F(n, s) ≤ 4^s · 2^s · (2n)^s
  (4^s binary trees with s internal nodes; 2 gate choices per internal node; 2n choices per leaf)
Shannon's counting argument
– F(n, c·2^n/log n) < (16n)^(c·2^n/log n) = 16^(c·2^n/log n) · 2^(c·2^n) = (1 + o(1))^(2^n) · 2^(c·2^n) < o(1)·2^(2^n)  (if c ≤ ½)
– so the probability that a random function has a formula of size s = (½)·2^n/log n is at most F(n, s)/B(n) < o(1)
The Andreev function
best formula lower bound known for a language in NP:
Theorem (Andreev, Håstad ’93): the Andreev function requires (∧, ∨, ¬)-formulas of size at least Ω(n^(3−o(1))).
The Andreev function A:{0,1}^(2n) → {0,1}
on input (x, y):
– split the n-bit string x into log n blocks of n/log n bits each and XOR each block
– the resulting log n bits form an index i
– the selector outputs the selected bit of the n-bit string y: A(x, y) = y_i
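A Python sketch of the Andreev function as reconstructed above (the block layout is my reading of the picture): XOR the log n blocks of x to form an index, and output that bit of y.

```python
def andreev(x, y):
    """x, y: lists of n bits each, n a power of two; returns the selected bit of y."""
    n = len(x)
    k = n.bit_length() - 1                 # log2 n blocks
    b = n // k                             # n / log n bits per block (assumes k divides n)
    bits = [sum(x[i * b:(i + 1) * b]) % 2 for i in range(k)]   # XOR of each block
    index = int("".join(map(str, bits)), 2)
    return y[index]

x = [1, 0, 1, 1,  0, 0, 1, 0,  1, 1, 0, 0,  0, 1, 0, 1]   # n = 16: 4 blocks of 4 bits
y = [0] * 16
y[0b1100] = 1            # the block XORs of x above are 1,1,0,0, i.e. index 0b1100
print(andreev(x, y))     # prints 1
```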
Random restrictions
key idea: given a function f:{0,1}^n → {0,1}, restrict it by ρ to get f_ρ
– ρ sets some variables to 0/1; the others remain free
R(n, εn) = the set of restrictions that leave εn variables free
Definition: L(f) = the size of the smallest (∧, ∨, ¬)-formula computing f (measured as leaf-size)
Random restrictions
observation: E_{ρ∈R(n,εn)}[L(f_ρ)] ≤ ε·L(f)
– each leaf survives with probability ε
the formula may shrink even more:
– propagate constants
Lemma (Håstad ’93): for all f, E_{ρ∈R(n,εn)}[L(f_ρ)] ≤ O(ε^(2−o(1)) · L(f))
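A toy sketch (my own representation, not the lecture's) of applying a restriction and propagating constants in a formula given as nested tuples; running it repeatedly gives a feel for how a restricted formula can shrink below the naive ε·L(f) estimate.

```python
import random

def restrict(f, rho):
    """Apply restriction rho (dict var -> 0/1; free variables absent) and simplify."""
    tag = f[0]
    if tag == 'const':
        return f
    if tag == 'var':
        return ('const', rho[f[1]]) if f[1] in rho else f
    if tag == 'not':
        g = restrict(f[1], rho)
        return ('const', 1 - g[1]) if g[0] == 'const' else ('not', g)
    g, h = restrict(f[1], rho), restrict(f[2], rho)
    if tag == 'and':
        if ('const', 0) in (g, h): return ('const', 0)
        if g == ('const', 1): return h
        if h == ('const', 1): return g
        return ('and', g, h)
    # tag == 'or'
    if ('const', 1) in (g, h): return ('const', 1)
    if g == ('const', 0): return h
    if h == ('const', 0): return g
    return ('or', g, h)

def leaf_size(f):
    if f[0] == 'var':
        return 1
    if f[0] == 'const':
        return 0
    return sum(leaf_size(g) for g in f[1:])

def random_restriction(n, free):
    """Sample from R(n, free): choose `free` variables to stay free, fix the rest at random."""
    free_set = set(random.sample(range(n), free))
    return {i: random.randint(0, 1) for i in range(n) if i not in free_set}

# Example: (x0 AND x1) OR ((NOT x2) AND x3); n = 4, leave one variable free.
f = ('or', ('and', ('var', 0), ('var', 1)),
           ('and', ('not', ('var', 2)), ('var', 3)))
rho = random_restriction(4, 1)
print(leaf_size(f), '->', leaf_size(restrict(f, rho)))
```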
Håstad's shrinkage result
Proof of the theorem:
– Recall: there exists a function h:{0,1}^(log n) → {0,1} for which L(h) > n/(2 log log n).
– hardwire the truth table of that function into y to get A*(x)
– apply a random restriction from R(n, m = 2(log n)(ln log n)) to A*(x)
The lower bound
Proof of the theorem (continued):
– the probability that a given XOR is killed by the restriction is the probability that we "miss" its block all m times:
  (1 − (n/log n)/n)^m ≤ (1 − 1/log n)^m ≤ (1/e)^(2 ln log n) ≤ 1/log^2 n
– so the probability that even one of the XORs is killed by the restriction is at most log n · (1/log^2 n) = 1/log n < ½
The lower bound
– (1): the probability that even one of the XORs is killed by the restriction is at most log n · (1/log^2 n) = 1/log n < ½
– (2): by Markov: Pr[ L(A*_ρ) > 2·E_{ρ∈R(n,m)}[L(A*_ρ)] ] < ½
– Conclude: for some restriction ρ, all XORs survive and L(A*_ρ) ≤ 2·E_{ρ∈R(n,m)}[L(A*_ρ)]
The lower bound
Proof of the theorem (continued):
– if all the XORs survive, we can restrict the formula further so that it computes the hard function h (we may need to add ¬'s)
– n/(2 log log n) < L(h) ≤ L(A*_ρ) ≤ 2·E_{ρ∈R(n,m)}[L(A*_ρ)] ≤ O((m/n)^(2−o(1)) · L(A*)) ≤ O(((log n)(ln log n)/n)^(2−o(1)) · L(A*))
– this implies Ω(n^(3−o(1))) ≤ L(A*) ≤ L(A).
Clique
CLIQUE = { (G, k) : G is a graph with a clique of size ≥ k }
(a clique is a set of vertices every pair of which is connected by an edge)
CLIQUE is NP-complete.
Circuit lower bounds
We think that NP requires exponential-size circuits.
Where should we look for a problem on which to attempt to prove this?
Intuition: the "hardest problems", i.e., NP-complete problems.
Circuit lower bounds
Formally:
– if any problem in NP requires super-polynomial-size circuits
– then every NP-complete problem requires super-polynomial-size circuits
– Proof idea: poly-time reductions can be performed by poly-size circuits, using a variant of the CVAL construction
Monotone problems
Definition: a monotone language is a language L ⊆ {0,1}* such that x ∈ L implies x′ ∈ L for every x′ with x ≤ x′ coordinatewise.
– flipping a bit of the input from 0 to 1 can only change the output from "no" to "yes" (or not at all)
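A small sketch (not from the slides) that checks this condition directly on a truth table: flipping any 0 to a 1 must never flip the output from 1 to 0.

```python
from itertools import product

def is_monotone(f, n):
    """Check: flipping any input bit 0 -> 1 never flips the output 1 -> 0."""
    for x in product([0, 1], repeat=n):
        for i in range(n):
            if x[i] == 0:
                x_up = x[:i] + (1,) + x[i + 1:]
                if f(x) == 1 and f(x_up) == 0:
                    return False
    return True

print(is_monotone(lambda x: int((x[0] and x[1]) or x[2]), 3))   # True: built from AND/OR only
print(is_monotone(lambda x: 1 - x[0], 1))                       # False: a NOT is not monotone
```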
Monotone problems
some NP-complete languages are monotone
– e.g. CLIQUE (with the graph given as an adjacency matrix)
– others: HAMILTON CYCLE, SET COVER, …
– but not SAT, KNAPSACK, …
Monotone circuits
A restricted class of circuits:
Definition: a monotone circuit is a circuit whose gates are ANDs (∧) and ORs (∨), but no NOTs
monotone circuits compute exactly the monotone functions
– monotone functions are closed under AND and OR
Monotone circuits
A question: do all poly-time computable monotone functions have poly-size monotone circuits?
– recall: the analogous statement is true in the non-monotone case
Monotone circuits
A monotone circuit for CLIQUE_{n,k}
Input: graph G = (V, E) as an adjacency matrix, |V| = n
– a variable x_{i,j} for each possible edge (i, j)
ISCLIQUE(S) = a monotone circuit that equals 1 iff S ⊆ V is a clique: ∧_{i,j∈S} x_{i,j}
CLIQUE_{n,k} is computed by the monotone circuit ∨_{S⊆V, |S|=k} ISCLIQUE(S)
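A sketch (helper names are mine) that evaluates exactly this monotone circuit, an OR over all size-k subsets S of the AND of the edge variables inside S, and prints its size parameters.

```python
from itertools import combinations
from math import comb

def isclique(S, x):
    """ISCLIQUE(S): AND of the edge variables x[i][j] over all pairs i, j in S."""
    return all(x[i][j] for i, j in combinations(S, 2))

def clique_nk(x, k):
    """The monotone circuit: OR over all size-k subsets S of ISCLIQUE(S)."""
    n = len(x)
    return int(any(isclique(S, x) for S in combinations(range(n), k)))

# Triangle on {0, 1, 2} plus an isolated vertex 3.
x = [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0]]
print(clique_nk(x, 3))                                        # 1
n, k = 16, 4
print(comb(n, k), "ANDs, each over", comb(k, 2), "edge variables")
```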
Monotone circuits
Size of this monotone circuit for CLIQUE_{n,k}: it has (n choose k) OR inputs, each an AND of (k choose 2) variables; when k = n^(1/4), the size is approximately (n choose n^(1/4)) ≈ 2^(Θ(n^(1/4) log n)).
Monotone circuits
Theorem (Razborov ’85): monotone circuits for CLIQUE_{n,k} with k = n^(1/4) must have size at least 2^(Ω(n^(1/8))).
Proof: next lecture.