CS151 Complexity Theory Lecture 5 April 16, 2019

Introduction Power from an unexpected source? we know P ≠ EXP, which implies there is no poly-time algorithm for Succinct CVAL are there poly-size Boolean circuits for Succinct CVAL ?? Does NP have linear-size, log-depth Boolean circuits ?? April 16, 2019

Outline Boolean circuits and formulas; uniformity and advice; the NC hierarchy and parallel computation; the quest for circuit lower bounds; a lower bound for formulas April 16, 2019

Boolean circuits a circuit C is a directed acyclic graph nodes: AND (∧); OR (∨); NOT (¬); variables xi (figure: a circuit C built from such gates over inputs x1 x2 x3 … xn) C computes a function f:{0,1}^n → {0,1} in the natural way identify C with the function f it computes April 16, 2019

Boolean circuits size = # gates depth = longest path from input to output formula (or expression): graph is a tree every function f:{0,1}^n → {0,1} is computable by a circuit of size at most O(n·2^n): an AND of n literals for each x such that f(x) = 1, and an OR of up to 2^n such terms April 16, 2019
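
A minimal Python sketch of this construction (the helper name dnf_formula and the string encoding are illustrative, not from the lecture): one AND term of n literals per satisfying assignment, ORed together, giving size O(n·2^n).

    from itertools import product

    def dnf_formula(f, n):
        # One AND-of-n-literals term for each x with f(x) = 1, then OR the
        # terms together: at most 2^n terms of n literals each, so O(n*2^n).
        terms = []
        for bits in product([0, 1], repeat=n):
            if f(bits):
                lits = [("x%d" % i) if b else ("!x%d" % i) for i, b in enumerate(bits)]
                terms.append("(" + " & ".join(lits) + ")")
        return " | ".join(terms) if terms else "0"

    # example: parity of 3 bits
    print(dnf_formula(lambda bits: sum(bits) % 2, 3))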

Circuit families a circuit works for a specific input length; we’re used to f:Σ* → {0,1} circuit family: a circuit for each input length C1, C2, C3, … = “{Cn}” “{Cn} computes f” iff for all x, C|x|(x) = f(x) “{Cn} decides L”, where L is the language associated with f April 16, 2019

Connection to TMs given a TM M running in time t(n) that decides language L, can build a circuit family {Cn} that decides L size of Cn = O(t(n)^2) Proof: CVAL construction Conclude: L ∈ P implies a family of polynomial-size circuits that decides L April 16, 2019
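
The CVAL direction needs to evaluate a given circuit on a given input; here is a minimal sketch, assuming a circuit is encoded as a list of gates in topological order (this encoding is illustrative, not the lecture's formal one).

    def eval_circuit(gates, x):
        # gates: list of (op, args) in topological order.
        #   ('VAR', [i])            -> value of input bit x[i]
        #   ('NOT', [g])            -> negation of gate g's value
        #   ('AND', [g, h]) / ('OR', [g, h]) -> AND/OR of earlier gates
        # The last gate is the output.
        val = []
        for op, args in gates:
            if op == 'VAR':
                val.append(x[args[0]])
            elif op == 'NOT':
                val.append(1 - val[args[0]])
            elif op == 'AND':
                val.append(val[args[0]] & val[args[1]])
            else:  # 'OR'
                val.append(val[args[0]] | val[args[1]])
        return val[-1]

    # (x0 AND x1) OR (NOT x2)
    gates = [('VAR', [0]), ('VAR', [1]), ('VAR', [2]),
             ('AND', [0, 1]), ('NOT', [2]), ('OR', [3, 4])]
    print(eval_circuit(gates, [1, 1, 0]))   # prints 1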

Connection to TMs other direction? A poly-size circuit family: Cn = (x1 ∨ ¬ x1) if Mn halts Cn = (x1 ∧ ¬ x1) if Mn loops decides (unary version of) HALT! oops… April 16, 2019

Uniformity Strange aspect of circuit families: can “encode” (potentially uncomputable) information in the family specification solution: uniformity – require that the specification is simple to compute Definition: circuit family {Cn} is logspace uniform iff TM M outputs Cn on input 1^n and runs in O(log n) space April 16, 2019

Uniformity Theorem: P = languages decidable by logspace uniform, polynomial-size circuit families {Cn}. Proof: already saw (⇒) (⇐) on input x, generate C|x|, evaluate it and accept iff output = 1 April 16, 2019

TMs that take advice a family {Cn} without the uniformity constraint is called “non-uniform” regard “non-uniformity” as a limited resource just like time and space, as follows: add a read-only “advice” tape to TM M M “decides L with advice A(n)” iff M(x, A(|x|)) accepts ⇔ x ∈ L note: A(n) depends only on |x| April 16, 2019

TMs that take advice Definition: TIME(t(n))/f(n) = the set of those languages L for which: there exists A(n) s.t. |A(n)| ≤ f(n) and a TM M that decides L with advice A(n) in time t(n) most important such class: P/poly = ∪k TIME(n^k)/n^k April 16, 2019

TMs that take advice Theorem: L ∈ P/poly iff L decided by family of (non-uniform) polynomial size circuits. Proof: (⇒) Cn from CVAL construction; hardwire advice A(n) (⇐) define A(n) = description of Cn; on input x, TM simulates C|x|(x) April 16, 2019

Approach to P/NP Believe NP ≠ P equivalent: “NP does not have uniform, polynomial-size circuits” Even believe NP ⊄ P/poly equivalent: “NP (or, e.g. SAT) does not have polynomial-size circuits” implies P ≠ NP many believe: best hope for proving P ≠ NP April 16, 2019

Parallelism uniform circuits allow a refinement of polynomial time: depth ≡ parallel time size ≡ parallel work April 16, 2019

Parallelism the NC (“Nick’s Class”) Hierarchy (of logspace uniform circuits): NCk = O(log^k n) depth, poly(n) size NC = ∪k NCk captures “efficiently parallelizable problems” not realistic? overly generous; OK for proving problems non-parallelizable April 16, 2019

Matrix Multiplication (n × n matrix A) × (n × n matrix B) = (n × n matrix AB) what is the parallel complexity of this problem? work = poly(n) time = log^k(n)? (which k?) April 16, 2019

Matrix Multiplication two details arithmetic matrix multiplication… A = (ai,k), B = (bk,j), (AB)i,j = Σk (ai,k · bk,j) … vs. Boolean matrix multiplication: A = (ai,k), B = (bk,j), (AB)i,j = ∨k (ai,k ∧ bk,j) single output bit: to make matrix multiplication a language: on input A, B, (i, j) output (AB)i,j April 16, 2019
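
A sketch of the two products side by side (plain Python over n × n 0/1 matrices given as lists of lists; the function names are illustrative):

    def arith_matmul(A, B):
        # (AB)[i][j] = sum over k of A[i][k] * B[k][j]
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def bool_matmul(A, B):
        # (AB)[i][j] = OR over k of (A[i][k] AND B[k][j])
        n = len(A)
        return [[int(any(A[i][k] and B[k][j] for k in range(n))) for j in range(n)]
                for i in range(n)]

    A = [[1, 1], [0, 1]]
    B = [[1, 0], [1, 1]]
    print(arith_matmul(A, B))   # [[2, 1], [1, 1]]
    print(bool_matmul(A, B))    # [[1, 1], [1, 1]]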

Matrix Multiplication Boolean Matrix Multiplication is in NC1 level 1: compute the n ANDs ai,k ∧ bk,j next log n levels: tree of ORs n^2 subtrees, one for each pair (i, j); select the correct one and output April 16, 2019
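
A sketch of the circuit structure for a single output bit: one level of ANDs followed by a balanced OR tree of depth ⌈log2 n⌉ (recursive Python, illustrative only).

    def or_tree(wires):
        # balanced binary OR tree over the given wires: depth ceil(log2 n)
        if len(wires) == 1:
            return wires[0]
        mid = len(wires) // 2
        return or_tree(wires[:mid]) | or_tree(wires[mid:])

    def bmm_entry(A, B, i, j):
        # level 1: the n ANDs A[i][k] & B[k][j]; then the OR tree above them
        n = len(A)
        return or_tree([A[i][k] & B[k][j] for k in range(n)])

    A = [[1, 1], [0, 1]]
    B = [[1, 0], [1, 1]]
    print(bmm_entry(A, B, 1, 0))   # prints 1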

Boolean formulas and NC1 Previous circuit is actually a formula. This is no accident: Theorem: L ∈ NC1 iff L is decidable by a polynomial-size uniform* family of Boolean formulas. Note: we measure formula size by leaf-size. * DSPACE(log^2 n)-uniform April 16, 2019

Boolean formulas and NC1 Proof: (⇒) convert an NC1 circuit into a formula recursively, duplicating shared subcircuits note: logspace transformation (stack depth log n, stack record 1 bit – “left” or “right”) April 16, 2019

Boolean formulas and NC1 (⇐) convert a formula of size n into a formula of depth O(log n) note: size ≤ 2^depth, so the new formula has poly(n) size key transformation: pick a subformula D of C; then C ≡ (D ∧ C1) ∨ (¬D ∧ C0), where C1 (resp. C0) is C with the subtree D replaced by the constant 1 (resp. 0) April 16, 2019

Boolean formulas and NC1 choose D to be any minimal subtree with size at least n/3; minimality implies size(D) ≤ 2n/3 define T(n) = maximum depth required for any size-n formula C1, C0, D all have size ≤ 2n/3, so T(n) ≤ T(2n/3) + 3, which implies T(n) ≤ O(log n) April 16, 2019
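
A numerical check of the depth recurrence (this only unrolls T(n) ≤ T(2n/3) + 3 with T(1) = 0; it is not the actual rebalancing procedure):

    def depth_bound(n):
        # T(n) <= T(2n/3) + 3, T(1) = 0  =>  T(n) = O(log n)
        if n <= 1:
            return 0
        return depth_bound((2 * n) // 3) + 3

    for n in [10, 100, 10**4, 10**6]:
        print(n, depth_bound(n))   # grows like 3 * log_{3/2}(n)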

Relation to other classes Clearly NC ⊆ P recall P ≡ uniform poly-size circuits NC1 ⊆ L on input x, compose logspace algorithms for: generating C|x| converting to formula FVAL April 16, 2019

Relation to other classes NL ⊆ NC2: S-T-CONN ∈ NC2 given G = (V, E), vertices s, t A = adjacency matrix (with self-loops) (A^2)i,j = 1 iff there is a path of length ≤ 2 from node i to node j (A^n)i,j = 1 iff there is a path of length ≤ n from node i to node j compute with a depth-log n tree of Boolean matrix multiplications, output entry (s, t) log^2 n depth total April 16, 2019
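
A sequential sketch of the same computation (the NC2 circuit performs these ⌈log2 n⌉ Boolean squarings one above the other, each squaring itself having depth O(log n)):

    import math

    def st_conn(adj, s, t):
        # adj: n x n adjacency matrix of G with self-loops (adj[i][i] = 1).
        # Boolean-square it ceil(log2 n) times: the result has a 1 in entry
        # (s, t) iff there is a path of length <= n from s to t.
        n = len(adj)
        A = adj
        for _ in range(max(1, math.ceil(math.log2(n)))):
            A = [[int(any(A[i][k] and A[k][j] for k in range(n)))
                  for j in range(n)] for i in range(n)]
        return A[s][t] == 1

    # path 0 -> 1 -> 2, but no path back from 2 to 0
    adj = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
    print(st_conn(adj, 0, 2), st_conn(adj, 2, 0))   # True False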

NC vs. P can every efficient algorithm be efficiently parallelized? NC = P? P-complete problems are the least likely to be parallelizable: if a P-complete problem is in NC, then P = NC Why? we use logspace reductions to show a problem P-complete, and L ⊆ NC2, so the reduction itself can be carried out in NC April 16, 2019

NC vs. P can every uniform, poly-size Boolean circuit family be converted into a uniform, poly-size Boolean formula family? NC1 = P ? April 16, 2019

NC Hierarchy Collapse NC1 ⊆ NC2 ⊆ NC3 ⊆ NC4 ⊆ … ⊆ NC Exercise if NCi = NCi+1, then NC = NCi (prove for non-uniform versions of classes) April 16, 2019

Lower bounds Recall: “NP does not have polynomial-size circuits” (NP ⊄ P/poly) implies P ≠ NP major goal: prove lower bounds on (non-uniform) circuit size for problems in NP believe the truth is exponential; super-polynomial would already be enough for P ≠ NP best bound known: (5-o(1))⋅n don’t even have super-polynomial bounds for problems in NEXP April 16, 2019

Lower bounds lots of work on lower bounds for restricted classes of circuits we’ll see two such lower bounds: formulas monotone circuits April 16, 2019

Shannon’s counting argument frustrating fact: almost all functions require huge circuits Theorem (Shannon): With probability at least 1 – o(1), a random function f:{0,1}^n → {0,1} requires a circuit of size Ω(2^n/n). April 16, 2019

Shannon’s counting argument Proof (counting): B(n) = 2^{2^n} = # functions f:{0,1}^n → {0,1} # circuits with n inputs and size s is at most C(n, s) ≤ ((n+3)·s^2)^s: s gates, n+3 gate types, 2 inputs per gate April 16, 2019

Shannon’s counting argument C(n, s) ≤ ((n+3)·s^2)^s C(n, c·2^n/n) < ((2n)·c^2·2^{2n}/n^2)^{(c·2^n/n)} < o(1)·2^{2c·2^n} < o(1)·2^{2^n} (if c ≤ ½) probability a random function has a circuit of size s = (½)·2^n/n is at most C(n, s)/B(n) < o(1) April 16, 2019
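
A quick numerical sanity check of this comparison, working with base-2 logarithms so the doubly exponential quantities stay manageable (a sketch, not part of the proof):

    import math

    def log2_C(n, s):
        # log2 of the circuit-count bound ((n+3) * s^2)^s
        return s * math.log2((n + 3) * s * s)

    def log2_B(n):
        # log2 of B(n) = 2^(2^n), the number of functions on n bits
        return 2.0 ** n

    for n in range(8, 21, 4):
        s = (2 ** n) / (2 * n)               # s = (1/2) * 2^n / n
        print(n, log2_C(n, s) - log2_B(n))   # large negative => C(n,s)/B(n) -> 0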

Shannon’s counting argument frustrating fact: almost all functions require huge formulas Theorem (Shannon): With probability at least 1 – o(1), a random function f:{0,1}^n → {0,1} requires a formula of size Ω(2^n/log n). April 16, 2019

Shannon’s counting argument Proof (counting): B(n) = 2^{2^n} = # functions f:{0,1}^n → {0,1} # formulas with n inputs and size s is at most F(n, s) ≤ 4^s · 2^s · (2n)^s: 4^s binary trees with s internal nodes, 2 gate choices per internal node, 2n choices per leaf April 16, 2019

Shannon’s counting argument F(n, s) ≤ 4^s · 2^s · (2n)^s F(n, c·2^n/log n) ≤ (16n)^{(c·2^n/log n)} = 16^{(c·2^n/log n)} · 2^{c·2^n} = 2^{(1 + o(1))·c·2^n} < o(1)·2^{2^n} (if c ≤ ½) probability a random function has a formula of size s = (½)·2^n/log n is at most F(n, s)/B(n) < o(1) April 16, 2019

Andreev function best formula lower bound for a language in NP: Theorem (Andreev, Håstad ’93): the Andreev function requires (∧,∨,¬)-formulas of size at least Ω(n^{3-o(1)}). April 16, 2019

the Andreev function A(x,y) A:{0,1}^{2n} → {0,1} (figure) the n-bit string x is the truth table of a “selector” function on log n bits; the n-bit string y is split into log n blocks of n/log n bits each; each block is XORed down to a single bit yi, and the log n resulting bits are fed to the selector, whose output is A(x,y) April 16, 2019
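
A direct Python sketch of A(x,y), assuming n is a power of two and log2 n divides n (the helper name andreev and the example values are illustrative):

    import math

    def andreev(x, y):
        # x, y: lists of n bits each. x is read as the truth table of a
        # function on log2(n) bits; y is split into log2(n) blocks of
        # n/log2(n) bits, each block XORed down to one selector bit.
        n = len(x)
        k = int(math.log2(n))        # number of blocks = number of selector bits
        block = n // k               # bits per block (assume k divides n)
        sel = 0
        for i in range(k):
            parity = sum(y[i * block:(i + 1) * block]) % 2
            sel = (sel << 1) | parity
        return x[sel]

    # n = 16: x is a 16-bit truth table of a function on 4 bits
    x = [0] * 16
    x[5] = 1                         # the one entry mapped to 1
    # block parities 0, 1, 0, 1 -> selector index 0b0101 = 5
    y = [0, 0, 0, 0,  1, 0, 0, 0,  0, 0, 0, 0,  1, 1, 1, 0]
    print(andreev(x, y))             # prints 1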

Random restrictions key idea: given a function f:{0,1}^n → {0,1}, restrict by ρ to get fρ ρ sets some variables to 0/1; the others remain free R(n, εn) = set of restrictions that leave εn variables free Definition: L(f) = size of the smallest (∧,∨,¬) formula computing f (measured as leaf-size) April 16, 2019

Random restrictions observation: Eρ←R(n, εn)[L(fρ)] ≤ ε·L(f) each leaf survives with probability ε may shrink more… propagate constants Lemma (Håstad ’93): for all f, Eρ←R(n, εn)[L(fρ)] ≤ O(ε^{2-o(1)}·L(f)) April 16, 2019
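
A crude Monte Carlo illustration of the easy observation only (each leaf survives with probability ε); it does not implement the constant propagation behind Håstad's stronger ε^{2-o(1)} shrinkage. The "formula" here is just a random multiset of leaf variables, an illustrative stand-in.

    import random

    def surviving_leaves(leaf_vars, n, eps):
        # Sample a restriction from R(n, eps*n): choose eps*n variables to
        # stay free, set the rest to constants. A leaf survives (before
        # propagating constants) iff its variable stays free.
        free = set(random.sample(range(n), int(eps * n)))
        return sum(1 for v in leaf_vars if v in free)

    n, eps, trials = 100, 0.1, 500
    leaf_vars = [random.randrange(n) for _ in range(1000)]   # a 1000-leaf "formula"
    avg = sum(surviving_leaves(leaf_vars, n, eps) for _ in range(trials)) / trials
    print(avg, "vs", eps * len(leaf_vars))   # empirical mean vs eps * L(f)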