CS151 Complexity Theory Lecture 5 April 13, 2004

Introduction
Power from an unexpected source?
– we know P ≠ EXP, which implies there is no poly-time algorithm for Succinct CVAL
– poly-size Boolean circuits for Succinct CVAL?

Introduction
…and the depths of our ignorance: does NP have linear-size, log-depth Boolean circuits?

Outline
– Boolean circuits and formulae
– uniformity and advice
– the NC hierarchy and parallel computation
– the quest for circuit lower bounds
– a lower bound for formulae

Boolean circuits
circuit C:
– directed acyclic graph
– nodes: AND (∧); OR (∨); NOT (¬); variables x_i
C computes a function f:{0,1}^n → {0,1} in the natural way
– identify C with the function f it computes
(figure: example circuit over inputs x_1, x_2, x_3, …, x_n)

Boolean circuits
size = # gates
depth = longest path from input to output
formula (or expression): circuit whose graph is a tree
every function f:{0,1}^n → {0,1} is computable by a circuit of size at most O(n2^n)
– AND of n literals for each x such that f(x) = 1
– OR of up to 2^n such terms
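To make the O(n2^n) construction concrete, here is a minimal Python sketch (mine, not from the slides) that builds this DNF from an arbitrary truth table and checks it against the original function; the function names are illustrative.

from itertools import product

def dnf_from_truth_table(f, n):
    # One term (an AND of n literals) per input x with f(x) = 1; the OR of all terms computes f.
    # A literal is a pair (i, b): "variable x_i equals b".
    return [[(i, b) for i, b in enumerate(x)]
            for x in product([0, 1], repeat=n) if f(x)]

def eval_dnf(terms, x):
    # OR over the terms; each term is the AND of its literals
    return int(any(all(x[i] == b for i, b in term) for term in terms))

# sanity check on 3-bit parity: at most 2^n terms, each with n literals
parity = lambda x: sum(x) % 2
terms = dnf_from_truth_table(parity, 3)
assert all(eval_dnf(terms, x) == parity(x) for x in product([0, 1], repeat=3))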

Circuit families
a circuit works for a specific input length, but we are used to f:Σ* → {0,1}
circuit family: a circuit for each input length: C_1, C_2, C_3, … = "{C_n}"
"{C_n} computes f" iff for all x, C_|x|(x) = f(x)
"{C_n} decides L", where L is the language associated with f

Connection to TMs
given a TM M running in time t(n) that decides a language L, we can build a circuit family {C_n} that decides L
– size of C_n = O(t(n)^2)
– proof: CVAL construction
Conclude: L ∈ P implies a family of polynomial-size circuits that decides L

Connection to TMs
other direction? consider the poly-size circuit family:
– C_n = (x_1 ∨ ¬x_1) if M_n halts
– C_n = (x_1 ∧ ¬x_1) if M_n loops
it decides (the unary version of) HALT! oops…

Uniformity
strange aspect of circuit families:
– can "encode" (potentially uncomputable) information in the family specification
solution: uniformity – require that the specification is simple to compute
Definition: circuit family {C_n} is logspace uniform iff some TM M outputs C_n on input 1^n and runs in O(log n) space

Uniformity
Theorem: P = the set of languages decidable by logspace-uniform, polynomial-size circuit families {C_n}.
Proof:
– already saw (⊆)
– (⊇): on input x, generate C_|x|, evaluate it, and accept iff the output is 1
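For the (⊇) direction, the evaluation step is just a bottom-up pass over the circuit; here is a hypothetical Python sketch (the gate encoding is my own, not the lecture's).

def eval_circuit(gates, x):
    # gates: list in topological order; each entry is ('VAR', i), ('NOT', j),
    # ('AND', j, k) or ('OR', j, k), where j, k index earlier gates.
    # The output is the value of the last gate.
    val = []
    for g in gates:
        if g[0] == 'VAR':
            val.append(x[g[1]])
        elif g[0] == 'NOT':
            val.append(1 - val[g[1]])
        elif g[0] == 'AND':
            val.append(val[g[1]] & val[g[2]])
        else:  # 'OR'
            val.append(val[g[1]] | val[g[2]])
    return val[-1]

# (x1 AND x2) OR (NOT x3)
C = [('VAR', 0), ('VAR', 1), ('VAR', 2), ('AND', 0, 1), ('NOT', 2), ('OR', 3, 4)]
assert eval_circuit(C, [1, 1, 0]) == 1 and eval_circuit(C, [0, 1, 1]) == 0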

TMs that take advice
a family {C_n} without the uniformity constraint is called "non-uniform"
regard "non-uniformity" as a limited resource just like time and space, as follows:
– add a read-only "advice" tape to TM M
– M "decides L with advice A(n)" iff M(x, A(|x|)) accepts ⇔ x ∈ L
– note: the advice A(|x|) depends only on |x|, not on x itself

TMs that take advice
Definition: TIME(t(n))/f(n) = the set of languages L for which:
– there exists advice A(n) with |A(n)| ≤ f(n)
– some TM M running in time t(n) decides L with advice A(n)
most important such class: P/poly = ∪_k TIME(n^k)/n^k

TMs that take advice
Theorem: L ∈ P/poly iff L is decided by a family of (non-uniform) polynomial-size circuits.
Proof:
– (⇒): build C_n from the CVAL construction; hardwire in the advice A(n)
– (⇐): define A(n) = description of C_n; on input x, the TM simulates C_n(x)

Approach to P/NP
believe NP ⊄ P
– equivalent: "NP does not have uniform, polynomial-size circuits"
even believe NP ⊄ P/poly
– equivalent: "NP (or, e.g., SAT) does not have polynomial-size circuits"
– implies P ≠ NP
– many believe this is the best hope for proving P ≠ NP

Parallelism
uniform circuits allow a refinement of polynomial time:
– circuit depth ↔ parallel time
– circuit size ↔ parallel work

Parallelism
the NC ("Nick's Class") hierarchy (of logspace-uniform circuits):
– NC^k = O(log^k n) depth, poly(n) size
– NC = ∪_k NC^k
captures "efficiently parallelizable problems"
not realistic? overly generous, but OK for proving problems non-parallelizable

Matrix Multiplication
(n x n matrix A) × (n x n matrix B) = n x n matrix AB
what is the parallel complexity of this problem?
– work = poly(n)
– time = log^k(n)? (which k?)

Matrix Multiplication
two details:
– arithmetic matrix multiplication: A = (a_{i,k}), B = (b_{k,j}), (AB)_{i,j} = Σ_k (a_{i,k} × b_{k,j})
  …vs. Boolean matrix multiplication: A = (a_{i,k}), B = (b_{k,j}), (AB)_{i,j} = ∨_k (a_{i,k} ∧ b_{k,j})
– single output bit: to make matrix multiplication a language, on input A, B, (i, j), output (AB)_{i,j}

Matrix Multiplication
Boolean matrix multiplication is in NC^1
– level 1: compute n ANDs: a_{i,k} ∧ b_{k,j}
– next log n levels: tree of ORs
– n^2 such subtrees, one for each pair (i, j)
– select the correct one and output
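As a concrete illustration (my own Python, not part of the slides), here is the computation the NC^1 circuit performs for one output bit: n ANDs in parallel, then a balanced tree of ORs of depth about log n.

def bool_matmul_entry(A, B, i, j):
    # (AB)_{i,j} = OR_k (a_{i,k} AND b_{k,j}); the OR is done as a balanced tree,
    # mirroring the O(log n)-depth circuit.
    layer = [A[i][k] & B[k][j] for k in range(len(A))]      # level 1: n ANDs
    while len(layer) > 1:                                   # log n levels of ORs
        layer = [layer[t] | layer[t + 1] for t in range(0, len(layer) - 1, 2)] + \
                ([layer[-1]] if len(layer) % 2 else [])
    return layer[0]

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]
assert [[bool_matmul_entry(A, B, i, j) for j in range(2)] for i in range(2)] == [[0, 1], [1, 0]]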

Boolean formulas and NC^1
the previous circuit is actually a formula; this is no accident:
Theorem: L ∈ NC^1 iff L is decidable by a polynomial-size uniform family of Boolean formulas.

Boolean formulas and NC^1
Proof:
– (⇒): convert the NC^1 circuit into a formula recursively, duplicating the subcircuit feeding any gate with fan-out greater than 1
  note: this is a logspace transformation (stack depth log n, each stack record 1 bit – "left" or "right")

Boolean formulas and NC^1
– (⇐): convert a formula of size n into a formula of depth O(log n)
  note: size ≤ 2^depth, so the new formula has poly(n) size
  key transformation: pick a subformula D of C and rewrite C as (D ∧ C_1) ∨ (¬D ∧ C_0), where C_1 and C_0 are C with D replaced by the constants 1 and 0

Boolean formulas and NC^1
– take D to be any minimal subtree with size at least n/3; minimality implies size(D) ≤ 2n/3
– define T(n) = maximum depth required for any size-n formula
– C_1, C_0, D all have size ≤ 2n/3, so T(n) ≤ T(2n/3) + 3, which implies T(n) ≤ O(log n)
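A rough Python sketch of this depth-reduction recursion, under my own encoding of formulas as nested tuples (an illustration of the idea, not a literal or optimized version of the construction):

from itertools import product

def size(F):
    # leaf-size: leaves are variable names (strings) or the constants 0/1
    return 1 if not isinstance(F, tuple) else sum(size(a) for a in F[1:])

def find_D(F, n):
    # return a minimal subtree of F with size >= n/3 (its children then have size < n/3)
    if isinstance(F, tuple):
        for child in F[1:]:
            if 3 * size(child) >= n:
                return find_D(child, n)
    return F

def substitute(F, D, c):
    # replace the occurrence of subtree D (by object identity) with the constant c
    if F is D:
        return c
    if not isinstance(F, tuple):
        return F
    return (F[0],) + tuple(substitute(a, D, c) for a in F[1:])

def balance(F):
    n = size(F)
    if n <= 3:
        return F
    D = find_D(F, n)
    C1, C0 = substitute(F, D, 1), substitute(F, D, 0)
    D, C1, C0 = balance(D), balance(C1), balance(C0)   # all three pieces have size <= 2n/3 + 1
    return ('or', ('and', D, C1), ('and', ('not', D), C0))

def ev(F, env):
    if F in (0, 1):
        return F
    if isinstance(F, str):
        return env[F]
    args = [ev(a, env) for a in F[1:]]
    return {'and': min, 'or': max}[F[0]](args) if F[0] != 'not' else 1 - args[0]

F = ('or', ('and', 'a', ('not', 'b')), ('and', ('or', 'a', 'c'), 'd'))
assert all(ev(F, dict(zip('abcd', v))) == ev(balance(F), dict(zip('abcd', v)))
           for v in product([0, 1], repeat=4))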

Relation to other classes
clearly NC ⊆ P
– recall: P = languages with logspace-uniform poly-size circuits
NC^1 ⊆ L
– on input x, compose logspace algorithms for: generating C_|x|, converting it to a formula, and FVAL (formula evaluation)

Relation to other classes
NL ⊆ NC^2: S-T-CONN ∈ NC^2
– given G = (V, E), vertices s, t
– A = adjacency matrix (with self-loops)
– (A^2)_{i,j} = 1 iff there is a path of length ≤ 2 from node i to node j
– (A^n)_{i,j} = 1 iff there is a path of length ≤ n from node i to node j
– compute A^n with a depth-(log n) tree of Boolean matrix multiplications; output entry (s, t)
– log^2 n depth total
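A small Python illustration (mine) of this repeated-squaring idea, done sequentially rather than as a circuit: after adding self-loops, ceil(log2 n) Boolean squarings capture all paths of length ≤ n.

import math

def bool_square(A):
    n = len(A)
    return [[int(any(A[i][k] and A[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def st_conn(adj, s, t):
    # adj: n x n 0/1 adjacency matrix; returns 1 iff there is a path from s to t
    n = len(adj)
    A = [[1 if i == j else adj[i][j] for j in range(n)] for i in range(n)]  # add self-loops
    for _ in range(max(1, math.ceil(math.log2(n)))):   # each squaring doubles the path length covered
        A = bool_square(A)
    return A[s][t]

# path 0 -> 1 -> 2 exists; there is no path from 2 back to 0
G = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
assert st_conn(G, 0, 2) == 1 and st_conn(G, 2, 0) == 0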

NC vs. P
can every efficient algorithm be efficiently parallelized?
  NC = P?
P-complete problems are the least likely to be parallelizable
– if a P-complete problem is in NC, then P = NC
– why? we use logspace reductions to show problems P-complete, and L ⊆ NC^2, so the reduction itself is parallelizable

NC vs. P
can every uniform, poly-size Boolean circuit family be converted into a uniform, poly-size Boolean formula family?
  NC^1 = P?

Lower bounds
recall: "NP does not have polynomial-size circuits" (NP ⊄ P/poly) implies P ≠ NP
major goal: prove lower bounds on (non-uniform) circuit size for problems in NP
– believe exponential
– super-polynomial would be enough for P ≠ NP
– best bound known: 4.5n
– we don't even have super-polynomial bounds for problems in NEXP

Lower bounds
lots of work on lower bounds for restricted classes of circuits
– we'll see two such lower bounds: formulas and monotone circuits

Shannon's counting argument
frustrating fact: almost all functions require huge circuits
Theorem (Shannon): with probability at least 1 – o(1), a random function f:{0,1}^n → {0,1} requires a circuit of size Ω(2^n/n).

Shannon's counting argument
Proof (counting):
– B(n) = 2^(2^n) = # functions f:{0,1}^n → {0,1}
– # circuits with n inputs and size s is at most C(n, s) ≤ ((n+3)s^2)^s
  (s gates; n+3 gate types; 2 inputs per gate, each chosen from the s gates)

Shannon's counting argument
– C(n, c2^n/n) < ((2n)·c^2·2^(2n)/n^2)^(c2^n/n) < o(1)·2^(2c·2^n) < o(1)·2^(2^n)  (if c ≤ ½)
– so the probability that a random function has a circuit of size s = (½)2^n/n is at most C(n, s)/B(n) < o(1)
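A quick numeric sanity check of this comparison (my own script, exact integer arithmetic): for small n it confirms that the circuit-count bound C(n, s) = ((n+3)s^2)^s with s = ½·2^n/n is far below B(n) = 2^(2^n).

def circuit_count_bound(n, s):
    return ((n + 3) * s * s) ** s          # upper bound on the number of size-s circuits

for n in range(8, 13):
    s = (2 ** n) // (2 * n)                # s = (1/2) * 2^n / n
    C, B = circuit_count_bound(n, s), 2 ** (2 ** n)
    # compare exponents: roughly log2 C vs log2 B = 2^n
    print(n, s, C.bit_length() - 1, 2 ** n, C < B)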

Shannon's counting argument
frustrating fact: almost all functions require huge formulas
Theorem (Shannon): with probability at least 1 – o(1), a random function f:{0,1}^n → {0,1} requires a formula of size Ω(2^n/log n).

Shannon's counting argument
Proof (counting):
– B(n) = 2^(2^n) = # functions f:{0,1}^n → {0,1}
– # formulas with n inputs and size s is at most F(n, s) ≤ 4^s · 2^s · (n+2)^s
  (4^s binary trees with s internal nodes; 2 gate choices per internal node; n+2 choices per leaf)

Shannon's counting argument
– F(n, c2^n/log n) < (16n)^(c2^n/log n) = 16^(c2^n/log n) · 2^(c2^n) = 2^((1+o(1))c·2^n) < o(1)·2^(2^n)  (if c ≤ ½)
– so the probability that a random function has a formula of size s = (½)2^n/log n is at most F(n, s)/B(n) < o(1)

Andreev function
best lower bound for formulas:
Theorem (Andreev, Håstad '93): the Andreev function requires (∧, ∨, ¬)-formulas of size at least Ω(n^(3-o(1))).

Andreev function
the Andreev function A(x, y), A:{0,1}^(2n) → {0,1}:
– split the n-bit string x into log n blocks of n/log n bits each and XOR the bits within each block
– the resulting log n bits select an index i into the n-bit string y
– output y_i
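A hypothetical Python rendering of this definition (the bit-ordering and block conventions are my own choices, not fixed by the slide); it assumes n is a power of 2 and, for simplicity, that log2 n divides n.

def andreev(x, y):
    # x, y: lists of n bits each. Output y[i], where i is the number whose binary
    # digits are the XORs of the log n blocks of x (n/log n bits per block).
    n = len(y)
    k = n.bit_length() - 1                              # k = log2 n
    block = n // k                                      # n/log n bits per block
    bits = [sum(x[j * block:(j + 1) * block]) % 2 for j in range(k)]
    i = int(''.join(map(str, bits)), 2)                 # selector index
    return y[i]

x = [1, 0, 1, 1,  0, 0, 1, 0,  1, 1, 1, 1,  0, 1, 0, 0]   # n = 16: 4 blocks of 4 bits
y = [0, 1, 1, 0,  1, 0, 0, 1,  1, 0, 1, 0,  0, 1, 1, 0]
print(andreev(x, y))   # block XORs are 1,1,0,1 -> i = 13 -> prints y[13] = 1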

Random restrictions
key idea: given a function f:{0,1}^n → {0,1}, restrict it by ρ to get f_ρ
– ρ sets some variables to 0/1; the others remain free
R(n, єn) = the set of restrictions that leave єn variables free
Definition: L(f) = size of the smallest (∧, ∨, ¬)-formula computing f (measured as leaf-size)

Random restrictions
observation: E_{ρ ∈ R(n, єn)}[L(f_ρ)] ≤ єL(f)
– each leaf survives with probability є
may shrink more…
– propagate constants
Lemma (Håstad '93): for all f, E_{ρ ∈ R(n, єn)}[L(f_ρ)] ≤ O(є^(2-o(1)) L(f))
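A rough Python sketch (my own encoding, not Håstad's machinery) of applying a restriction to a formula and propagating constants, which is where the extra shrinkage beyond the naive factor є comes from.

import random

def restrict(F, rho):
    # F: formula as nested tuples over 'and'/'or'/'not' with string leaves;
    # rho: dict mapping the *fixed* variables to 0/1. Constants are propagated.
    if F in (0, 1):
        return F
    if isinstance(F, str):
        return rho.get(F, F)                            # fixed variable, or still free
    op, args = F[0], [restrict(a, rho) for a in F[1:]]
    if op == 'not':
        return 1 - args[0] if args[0] in (0, 1) else ('not', args[0])
    absorb, identity = (0, 1) if op == 'and' else (1, 0)
    if absorb in args:
        return absorb                                   # 0 absorbs an AND, 1 absorbs an OR
    args = [a for a in args if a != identity]
    return identity if not args else (args[0] if len(args) == 1 else (op,) + tuple(args))

def random_restriction(variables, free):
    # leave `free` variables free; set each of the rest independently to 0 or 1
    fixed = random.sample(list(variables), len(variables) - free)
    return {v: random.randint(0, 1) for v in fixed}

def leaf_size(F):
    return 1 if isinstance(F, str) else (0 if F in (0, 1) else sum(leaf_size(a) for a in F[1:]))

F = ('or', ('and', 'a', ('not', 'b')), ('and', 'c', ('or', 'b', 'd')))
rho = random_restriction(['a', 'b', 'c', 'd'], free=2)
print(rho, restrict(F, rho), leaf_size(restrict(F, rho)))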

Håstad's shrinkage result
Proof of theorem:
– recall: there exists a function h:{0,1}^(log n) → {0,1} for which L(h) > n/(2 log log n)
– hardwire the truth table of that function into y to get A*(x)
– apply a random restriction from R(n, m = 2(log n)(ln log n)) to A*(x)

The lower bound
Proof of theorem (continued):
– the probability that a given XOR block is killed by the restriction (i.e., none of its variables stays free) is the probability that we "miss it" m times:
  (1 – (n/log n)/n)^m ≤ (1 – 1/log n)^m ≤ (1/e)^(2 ln log n) ≤ 1/log^2 n
– the probability that even one of the XOR blocks is killed by the restriction is at most:
  (log n)·(1/log^2 n) = 1/log n < ½

The lower bound
– (1): the probability that even one of the XOR blocks is killed by the restriction is at most (log n)·(1/log^2 n) = 1/log n < ½
– (2): by Markov: Pr[L(A*_ρ) > 2·E_{ρ ∈ R(n, m)}[L(A*_ρ)]] < ½
– conclude: for some restriction ρ, all the XOR blocks survive and L(A*_ρ) ≤ 2·E_{ρ ∈ R(n, m)}[L(A*_ρ)]

The lower bound
Proof of theorem (continued):
– if all the XOR blocks survive, we can restrict the formula further so that it computes the hard function h (may need to add ¬'s)
  n/(2 log log n) < L(h) ≤ L(A*_ρ) ≤ 2·E_{ρ ∈ R(n, m)}[L(A*_ρ)] ≤ O((m/n)^(2-o(1)) L(A*)) ≤ O(((log n)(ln log n)/n)^(2-o(1)) L(A*))
– this implies Ω(n^(3-o(1))) ≤ L(A*) ≤ L(A).