Average-case Complexity Luca Trevisan UC Berkeley.


1 Average-case Complexity Luca Trevisan UC Berkeley

2 Distributional Problem ⟨P,D⟩: P computational problem – e.g. SAT; D distribution over inputs – e.g. n variables, 10n random clauses

3 Positive Results: algorithm that solves P efficiently on most inputs – Interesting when P is a useful problem and D a distribution arising “in practice” Negative Results: if …, then no such algorithm – P useful, D natural: guides algorithm design – Manufactured P,D: still interesting for crypto, derandomization


5 Holy Grail If there is an algorithm A that solves P efficiently on most inputs from D, then there is an efficient worst-case algorithm for [the complexity class that] P [belongs to]

6 Part (1) In which the Holy Grail proves elusive

7 The Permanent Perm(M) := Σ_σ Π_i M(i, σ(i)), where σ ranges over the permutations of {1,…,n}. Computing Perm(·) is #P-complete. Lipton (1990): If there is an algorithm that solves Perm(·) efficiently on most random matrices, then there is an algorithm that solves it efficiently on all matrices (so #P ⊆ BPP)

8 Lipton’s Reduction Suppose operations are over a finite field of size > n+1. A is a good-on-average algorithm (wrong on a < 1/(10(n+1)) fraction of matrices). Given M, pick a random matrix X and compute A(M+X), A(M+2X), …, A(M+(n+1)X). Each M+tX is uniformly distributed, so by a union bound, with probability ≥ 9/10 these all equal Perm(M+X), Perm(M+2X), …, Perm(M+(n+1)X)

9 Lipton’s Reduction Given Perm(M+X), Perm(M+2X), …, Perm(M+(n+1)X), find the unique univariate degree-n polynomial p such that p(t) = Perm(M+tX) for t = 1, …, n+1. Since Perm(M+tX) is itself a degree-n polynomial in t, p(0) = Perm(M): output p(0)
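The two slides above describe a complete worst-to-average reduction. Below is a runnable sketch in Python; the function names are my own, the brute-force permanent is feasible only for tiny n, and for simplicity the oracle passed in is assumed correct on the queried matrices (Lipton's point is that a good-on-average oracle is correct on all n+1 uniformly distributed queries with high probability).

```python
import random
from itertools import permutations

def perm_mod(M, p):
    # Brute-force permanent of M over GF(p); only feasible for tiny n.
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod = (prod * M[i][sigma[i]]) % p
        total = (total + prod) % p
    return total

def interpolate_at_zero(points, p):
    # Lagrange interpolation over GF(p): evaluate at 0 the unique
    # polynomial of degree < len(points) through the given (t, value) points.
    result = 0
    for j, (tj, vj) in enumerate(points):
        num, den = 1, 1
        for k, (tk, _) in enumerate(points):
            if k != j:
                num = (num * (-tk)) % p
                den = (den * (tj - tk)) % p
        result = (result + vj * num * pow(den, p - 2, p)) % p
    return result

def perm_via_reduction(M, p, oracle):
    # Lipton's reduction: query the oracle on the random matrices
    # M + t*X for t = 1..n+1, interpolate the degree-n polynomial
    # q(t) = Perm(M + t*X), and output q(0) = Perm(M).
    n = len(M)
    X = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    points = []
    for t in range(1, n + 2):
        Mt = [[(M[i][j] + t * X[i][j]) % p for j in range(n)] for i in range(n)]
        points.append((t, oracle(Mt, p)))
    return interpolate_at_zero(points, p)
```

With the exact `perm_mod` as oracle, the reduction recovers Perm(M) for every choice of X; the average-case version only changes the success guarantee, not the structure.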

10 Improvements / Generalizations Can handle a constant fraction of errors [Gemmell-Sudan] Works for PSPACE-complete, EXP-complete, … problems [Feigenbaum-Fortnow, Babai-Fortnow-Nisan-Wigderson] Common idea: encode the problem as a polynomial

11 Strong Average-Case Hardness [Impagliazzo, Impagliazzo-Wigderson] Manufacture problems in E, EXP, such that – a size-t circuit correct on a ½ + 1/t fraction of inputs implies – a size-poly(t) circuit correct on all inputs Motivation: [Nisan-Wigderson] P=BPP if there is a problem in E of exponential average-case complexity

12 Strong Average-Case Hardness [Impagliazzo, Impagliazzo-Wigderson] Manufacture problems in E, EXP, such that – a size-t circuit correct on a ½ + 1/t fraction of inputs implies – a size-poly(t) circuit correct on all inputs Motivation: [Impagliazzo-Wigderson] P=BPP if there is a problem in E of exponential worst-case complexity

13 Open Question 1 Suppose there are worst-case intractable problems in NP. Are there average-case intractable problems?

14 Strong Average-Case Hardness [Impagliazzo, Impagliazzo-Wigderson] Manufacture problems in E, EXP, such that – a size-t circuit correct on a ½ + 1/t fraction of inputs implies – a size-poly(t) circuit correct on all inputs [Sudan-T-Vadhan] – the IW result can be seen as coding-theoretic – simpler proof via explicitly coding-theoretic ideas

15 Encoding Approach Viola proves that error-correcting codes cannot be computed in AC0, and that computing the exponential-size error-correcting encoding is not possible in PH

16 Problem-specific Approaches? [Ajtai] proves that there is a lattice problem such that: – if there is an efficient average-case algorithm – then there is an efficient worst-case approximation algorithm

17 Ajtai’s Reduction Lattice problem: – if there is an efficient average-case algorithm – then there is an efficient worst-case approximation algorithm The approximation problem is in NP ∩ coNP, hence presumably not NP-hard

18 Holy Grail Distributional problem such that: – if there is an efficient average-case algorithm – then P=NP (or NP in BPP, or NP has poly-size circuits, …) Already seen: no “encoding” approach works. Can extensions of Ajtai’s approach work?

19 A Class of Approaches L problem in NP, D distribution of inputs. R reduction of SAT to ⟨L,D⟩: given an instance f of SAT, – R produces instances x_1, …, x_k of L, each distributed according to D – given L(x_1), …, L(x_k), R is able to decide f If there is a good-on-average algorithm for ⟨L,D⟩, we solve SAT in polynomial time [cf. Lipton’s work on the Permanent]

20 A Class of Approaches L, W problems in NP, D (samplable) distribution of inputs. R reduction of W to ⟨L,D⟩: given an instance w of W, – R produces instances x_1, …, x_k of L, each distributed according to D – given L(x_1), …, L(x_k), R is able to decide w If there is a good-on-average algorithm for ⟨L,D⟩, we solve W in polynomial time. Can W be NP-complete?

21 A Class of Approaches Given an instance w of W, – R produces instances x_1, …, x_k of L, each distributed according to D – given L(x_1), …, L(x_k), R is able to decide w Given a good-on-average algorithm for ⟨L,D⟩, we solve W in polynomial time. If we have such a reduction and W is NP-complete, we have the Holy Grail! Feigenbaum-Fortnow: W is in “coNP”

22 Feigenbaum-Fortnow Given an instance w of W, – R produces instances x_1, …, x_k of L, each distributed according to D – given L(x_1), …, L(x_k), R is able to decide w Using R, Feigenbaum-Fortnow design a 2-round interactive proof with advice for coW: given w, the Prover convinces the Verifier that R rejects w after seeing L(x_1), …, L(x_k)

23 Feigenbaum-Fortnow (simplified case) Given an instance w of W, – R produces a single instance x of L distributed as in D – w in W iff x in L Suppose we know Pr_D[ x in L ] = ½. Verifier: run R(w) with independent coins m times, obtaining x_1, x_2, …, x_m, and send them to the Prover. Prover: for each x_i, answer (Yes, certificate w_i) or No. Verifier accepts iff all m simulations of R reject and m/2 ± sqrt(m) of the answers are certified Yes

24 Feigenbaum-Fortnow (general case) Given an instance w of W, let p := Pr[ x_i in L ] – R produces instances x_1, …, x_k of L, each distributed according to D – given L(x_1), …, L(x_k), R is able to decide w Verifier: run R(w) m times, obtaining instances x_1^1, …, x_k^1, …, x_1^m, …, x_k^m, and send all km of them to the Prover. Prover: for each instance, answer (Yes, certificate) or No. Verifier accepts iff – pkm ± sqrt(pkm) of the answers are certified Yes – R rejects in each of the m simulations
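The heart of the protocol is the counting test that stops the prover from hiding Yes instances. A minimal sketch of just that test (the `(claim, certificate_ok)` encoding and the function name are my own; the check that R rejects in every simulation is reduction-specific and omitted):

```python
import math

def ff_counting_test(answers, p):
    # answers: one (claim, certificate_ok) pair per sampled instance.
    # A "yes" claim must come with a valid certificate; "no" claims cannot
    # be certified, so the verifier instead checks that the number of
    # certified Yes answers concentrates around its expectation p * m,
    # within a sqrt(p * m) deviation.
    m = len(answers)
    if any(claim == "yes" and not ok for claim, ok in answers):
        return False  # an uncertified Yes claim is an immediate reject
    certified_yes = sum(1 for claim, ok in answers if claim == "yes" and ok)
    return abs(certified_yes - p * m) <= math.sqrt(p * m)
```

A prover who answers No on too many Yes instances fails the deviation bound with high probability, because the verifier knows (as advice) the probability p that a sample lands in L.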

25 Generalizations Bogdanov-Trevisan: handles arbitrary non-adaptive reductions. Main open question: what happens with adaptive reductions?

26 Open Question 1 Prove the following. Suppose: W, L are in NP, D is a samplable distribution, R is a poly-time reduction such that – if A solves ⟨L,D⟩ on a 1−1/poly(n) fraction of inputs – then R with oracle A solves W on all inputs. Then W is in “coNP”

27 By the Way Probably impossible by current techniques: if NP is not contained in BPP, there is a samplable distribution D and an NP problem L such that ⟨L,D⟩ is hard on average

28 By the Way Probably impossible by current techniques: if NP is not contained in BPP, there is a samplable distribution D and an NP problem L such that for every efficient A, A makes many mistakes solving L on D

29 By the Way Probably impossible by current techniques: if NP is not contained in BPP, there is a samplable distribution D and an NP problem L such that for every efficient A, A makes many mistakes solving L on D [Gutfreund-Shaltiel-Ta-Shma] prove: if NP is not contained in BPP, then for every efficient A there is a samplable distribution D such that A makes many mistakes solving SAT on D

30 Part (2) In which we amplify average-case complexity and we discuss a short paper

31 Revised Goal Proving “if NP contains worst-case intractable problems, then NP contains average-case intractable problems” might be impossible. But average-case intractability comes in different quantitative degrees: are they all equivalent?

32 Average-Case Hardness What does it mean for ⟨L,D⟩ to be hard on average? Suppose A is an efficient algorithm and sample x ~ D. Then A(x) is noticeably likely to be wrong. How noticeably?

33 Average-Case Hardness Amplification Ideally: if there is ⟨L,D⟩, L in NP, such that every poly-time algorithm (poly-size circuit) makes mistakes on a > 1/poly(n) fraction of inputs, then there is ⟨L’,D’⟩, L’ in NP, such that every poly-time algorithm (poly-size circuit) makes mistakes on a > ½ − 1/poly(n) fraction of inputs

34 Amplification “Classical” approach: Yao’s XOR Lemma. Suppose: for every efficient A, Pr_D[ A(x) = L(x) ] < 1 − δ. Then: for every efficient A’, Pr[ A’(x_1,…,x_k) = L(x_1) xor … xor L(x_k) ] < ½ + (1 − 2δ)^k + negligible

35 Yao’s XOR Lemma Suppose: for every efficient A, Pr_D[ A(x) = L(x) ] < 1 − δ. Then: for every efficient A’, Pr[ A’(x_1,…,x_k) = L(x_1) xor … xor L(x_k) ] < ½ + (1 − 2δ)^k + negligible Note: computing L(x_1) xor … xor L(x_k) need not be in NP, even if L is in NP
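The shape of the ½ + (1 − 2δ)^k bound can be seen from a toy model (this is only the heuristic behind the lemma, not its proof, which must handle adversarial rather than independent errors): if a predictor errs independently with probability δ on each coordinate, its guess for the XOR is right exactly when it makes an even number of errors.

```python
from itertools import product

def xor_agreement(delta, k):
    # Closed form: P(even number of errors) = 1/2 + (1 - 2*delta)**k / 2.
    return 0.5 + 0.5 * (1 - 2 * delta) ** k

def xor_agreement_bruteforce(delta, k):
    # Sanity check by enumerating all 2^k error patterns.
    total = 0.0
    for errors in product([0, 1], repeat=k):
        prob = 1.0
        for e in errors:
            prob *= delta if e else 1 - delta
        if sum(errors) % 2 == 0:  # even number of errors => XOR is correct
            total += prob
    return total
```

For δ = 0.1 the agreement drops from 0.9 at k = 1 to below 0.554 at k = 10, illustrating how XOR drives the advantage toward 0 exponentially in k.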

36 O’Donnell Approach Suppose: for every efficient A, Pr_D[ A(x) = L(x) ] < 1 − δ. Then: for every efficient A’, Pr[ A’(x_1,…,x_k) = g(L(x_1), …, L(x_k)) ] < ½ + small(k,δ), for a carefully chosen monotone function g. Now computing g(L(x_1),…,L(x_k)) is in NP if L is in NP

37 Amplification (Circuits) Ideally: if there is ⟨L,D⟩, L in NP, such that every poly-time algorithm (poly-size circuit) makes mistakes on a > 1/poly(n) fraction of inputs, then there is ⟨L’,D’⟩, L’ in NP, such that every poly-time algorithm (poly-size circuit) makes mistakes on a > ½ − 1/poly(n) fraction of inputs. Achieved by [O’Donnell, Healy-Vadhan-Viola] for poly-size circuits

38 Amplification (Algorithms) If there is ⟨L,D⟩, L in NP, such that every poly-time algorithm makes mistakes on a > 1/poly(n) fraction of inputs, then there is ⟨L’,D’⟩, L’ in NP, such that every poly-time algorithm makes mistakes on a > ½ − 1/polylog(n) fraction of inputs [T] [Impagliazzo-Jaiswal-Kabanets-Wigderson] achieve ½ − 1/poly(n), but for P^NP_|| (poly time with parallel access to an NP oracle)

39 Open Question 2 Prove: if there is ⟨L,D⟩, L in NP, such that every poly-time algorithm makes mistakes on a > 1/poly(n) fraction of inputs, then there is ⟨L’,D’⟩, L’ in NP, such that every poly-time algorithm makes mistakes on a > ½ − 1/poly(n) fraction of inputs

40 Completeness Suppose we believe there is L in NP and a distribution D such that ⟨L,D⟩ is hard. Can we point to a specific problem C, with a specific distribution, that is also hard?

41 Completeness Suppose we believe there is L in NP and a distribution D such that ⟨L,D⟩ is hard. Can we point to a specific problem C, with a specific distribution, that is also hard? Must put a restriction on D, otherwise the assumption is the same as P ≠ NP

42 Side Note Let K be the distribution in which x has probability proportional to 2^(−K(x)), where K(x) is the Kolmogorov complexity of x. Suppose A solves ⟨L,K⟩ on a 1 − 1/poly(n) fraction of inputs of length n. Then A solves L on all but finitely many inputs. Exercise: prove it

43 Completeness Suppose we believe there is L in NP and a samplable distribution D such that ⟨L,D⟩ is hard. Can we point to a specific problem C, with a specific distribution, that is also hard?

44 Completeness Suppose we believe there is L in NP and a samplable distribution D such that ⟨L,D⟩ is hard. Can we point to a specific problem C, with a specific distribution, that is also hard? Yes we can! [Levin, Impagliazzo-Levin]

45 Levin’s Completeness Result There is an NP problem C such that: if there is L in NP and a computable distribution D such that ⟨L,D⟩ is hard, then ⟨C, Uniform⟩ is also hard

46

47 Reduction Need to define a reduction that preserves efficiency on average. (Note: we haven’t yet defined efficiency on average.) R is a (Karp) average-case reduction from ⟨A,D_A⟩ to ⟨B,D_B⟩ if 1. x in A iff R(x) in B 2. R(D_A) is “dominated” by D_B: Pr[ R(D_A) = y ] < poly(n) · Pr[ D_B = y ]

48 Reduction R is an average-case reduction from ⟨A,D_A⟩ to ⟨B,D_B⟩ if: x in A iff R(x) in B; R(D_A) is “dominated” by D_B: Pr[ R(D_A) = y ] < poly(n) · Pr[ D_B = y ]. Suppose we have a good algorithm for ⟨B,D_B⟩. Then composing it with R gives a good algorithm for ⟨A,D_A⟩: solving ⟨A,D_A⟩ reduces to solving ⟨B,D_B⟩

49 Reduction If Pr[ Y = y ] < poly(n) · Pr[ D_B = y ] and we have a good algorithm for ⟨B,D_B⟩, then the algorithm is also good for ⟨B,Y⟩. The reduction works for any notion of average-case tractability for which the above is true
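Why domination suffices is a one-line calculation (writing err(y) for the event that the algorithm for B is wrong on y):

```latex
\Pr_{y \sim R(D_A)}[\mathrm{err}(y)]
  = \sum_{y} \Pr[R(D_A) = y]\,\mathbf{1}[\mathrm{err}(y)]
  \le \mathrm{poly}(n) \sum_{y} \Pr[D_B = y]\,\mathbf{1}[\mathrm{err}(y)]
  = \mathrm{poly}(n)\cdot \Pr_{y \sim D_B}[\mathrm{err}(y)] .
```

So an algorithm that errs on a 1/poly fraction under D_B still errs on only a (larger) 1/poly fraction of the instances R produces.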

50 Levin’s Completeness Result Follow the presentation of [Goldreich]. If ⟨BH, Uniform⟩ is easy on average, then for every L in NP and every computable distribution D, ⟨L,D⟩ is easy on average. BH is non-deterministic Bounded Halting: given ⟨M, x, t⟩, does M(x) accept within t steps?

51 Levin’s Completeness Result BH, non-deterministic Bounded Halting: given ⟨M, x, t⟩, does M(x) accept within t steps? Suppose we have a good-on-average algorithm A for ⟨BH, Uniform⟩. Want to solve ⟨L,D⟩, where L is solvable by a NDTM M. First try: x → ⟨M, x, poly(|x|)⟩

52 Levin’s Completeness Result First try: x → ⟨M, x, poly(|x|)⟩. Doesn’t work: x may have an arbitrary distribution, and we need the target string to be nearly uniform (high entropy). Second try: x → ⟨M’, C(x), poly(|x|)⟩, where C(·) is a near-optimal compression algorithm and M’ recovers x from C(x), then runs M

53 Levin’s Completeness Result Second try: x → ⟨M’, C(x), poly(|x|)⟩, where C(·) is a near-optimal compression algorithm and M’ recovers x from C(x), then runs M. Works! Provided C(x) has length at most O(log n) + log 1/Pr_D[x]. Possible if the cumulative distribution function of D is computable
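The compression the slide asks for can be realized by Shannon-Fano-Elias-style coding from the computable CDF. A toy sketch for a distribution given as an explicit list of probabilities (the function name and interface are my own; an actual instantiation would evaluate the CDF of D on the fly):

```python
import math
from fractions import Fraction

def cdf_code(probs, x):
    # Encode outcome x (an index into probs) by the first
    # ceil(log2(1/p)) + 1 bits of the midpoint of the CDF interval
    # [F(x-1), F(x)).  The resulting prefix-free codeword has length
    # log2(1/Pr[x]) + O(1), which is what Levin's reduction needs.
    p = Fraction(probs[x])
    low = sum(Fraction(q) for q in probs[:x])  # F(x-1), exact arithmetic
    point = low + p / 2
    length = math.ceil(math.log2(1 / p)) + 1
    bits = []
    for _ in range(length):  # emit the binary expansion of `point`
        point *= 2
        bit = int(point >= 1)
        bits.append(bit)
        point -= bit
    return "".join(map(str, bits))
```

For the distribution (1/2, 1/4, 1/4) this produces the codewords 01, 101, 111: each of length about log 1/Pr[x] plus a constant, as required.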

54 Impagliazzo-Levin Do the same, but for all samplable distributions. A samplable distribution is not necessarily efficiently compressible in the coding-theory sense (e.g. the output of a PRG). Hashing provides “non-constructive” compression

55 Complete Problems BH with uniform distribution; Tiling problem with uniform distribution [Levin]; Generalized edge-coloring [Venkatesan-Levin]; Matrix representability [Venkatesan-Rajagopalan]; Matrix transformation [Gurevich]; …

56 Open Question 3 L in NP, with the NDTM M for L specified by k bits. Levin’s reduction incurs a 2^k factor in the fraction of “problematic” inputs (comparable to a 2^k slowdown). So it is limited to problems having a non-deterministic algorithm describable in about 5 bytes. Inherent?

57 More Reductions? Still relatively few complete problems. The situation is similar to the study of inapproximability before Papadimitriou-Yannakakis and the PCP theorem. It would be good, as in Papadimitriou-Yannakakis, to find reductions between problems that are not known to be complete but are plausibly hard

58 Open Question 4 (Heard from Russell Impagliazzo) Prove that if 3SAT is hard on instances with n variables and 10n clauses, then it is also hard on instances with n variables and 12n clauses

59 See http://www.cs.berkeley.edu/~luca/average [slides, references, addendum to Bogdanov-T., coming soon] http://www.cs.uml.edu/~wang/acc-forum/ [average-case complexity forum] Impagliazzo, A personal view of average-case complexity, Structures ’95. Goldreich, Notes on Levin’s theory of average-case complexity, ECCC TR-97-56. Bogdanov-T., Average-case complexity, Foundations and Trends in TCS 2(1), 2006

