
1 On Uniform Amplification of Hardness in NP, Luca Trevisan, STOC 05. Paper Review, presented by Hai Xu

2 Uniform Algorithm In the uniform setting, the success probability of an algorithm is averaged over random inputs and over the internal coin tosses of the algorithm.
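Formally (in my notation, not the paper's), the success probability of a uniform algorithm A for a language L on inputs of length n is

    \[ \Pr_{x \in \{0,1\}^n,\ r}\bigl[ A(x; r) = L(x) \bigr], \]

where x is a uniformly random input of length n and r denotes the internal coin tosses of A.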

3 Amplification of Hardness Starting from a problem that is known (or assumed) to be hard on average in a weak sense, we define a related new problem that is hard on average in the strongest possible sense.

4 Yao's XOR Lemma From a Boolean function f: {0,1}^n → {0,1}, we define a new function f^{⊕k}(x_1, …, x_k) = f(x_1) ⊕ … ⊕ f(x_k). Yao's XOR Lemma says that if every circuit of size ≤ S makes at least a δ fraction of errors in computing f(x) for a random x, then every circuit of size ≤ S·poly(δε/k) makes at least a 1/2 − ε fraction of errors in computing f^{⊕k}(x_1, …, x_k) for a random (x_1, …, x_k), where ε is roughly Ω((1 − δ)^k).
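A minimal Python sketch of the XOR construction above (my own illustration, not code from the paper; the predicate in the toy example is hypothetical):

    from functools import reduce

    def xor_amplify(f, k):
        # Given f: {0,1}^n -> {0,1}, return f_xor_k((x_1, ..., x_k)),
        # the XOR of f evaluated on k independent n-bit input blocks.
        def f_xor_k(xs):                  # xs is a tuple of k n-bit inputs
            assert len(xs) == k
            return reduce(lambda a, b: a ^ b, (f(x) for x in xs))
        return f_xor_k

    # Toy usage: f is the parity of an 8-bit string, amplified with k = 3.
    f = lambda x: sum(x) % 2              # x is a tuple of bits
    F = xor_amplify(f, k=3)
    print(F(((1, 0, 1, 1, 0, 0, 1, 0),) * 3))   # XOR of three (equal) values

The construction itself is trivial to compute; all of the content of the lemma lies in the hardness analysis.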

5 Amplification of Hardness in NP Want to prove: if L is a language in NP such that every efficient algorithm (or small family of circuits) errs on at least a 1/poly(n) fraction of inputs of length n, then there is a language L' also in NP such that every efficient algorithm (or small circuit) errs on a 1/2 − 1/n^{Ω(1)} fraction of inputs. Yao's XOR Lemma cannot prove this directly, because the XOR of NP predicates need not itself be an NP predicate.

6 Previous Work O'Donnell proved that for every balanced Boolean function f: {0,1}^n → {0,1} and parameters ε, δ > 0, there is an integer k = poly(1/ε, 1/δ) and a monotone function g: {0,1}^k → {0,1} such that if there is a circuit of size S that makes at most a 1/2 − ε fraction of errors in computing f_{g,k}(x_1, …, x_k) = g(f(x_1), …, f(x_k)) given (x_1, …, x_k), then there is a circuit of size poly(1/ε, 1/δ)·S that makes at most a δ fraction of errors in computing f(x) given x.
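In the notation of this slide, the construction being analyzed is (my restatement):

    \[ f_{g,k}(x_1,\dots,x_k) \;=\; g\bigl(f(x_1),\dots,f(x_k)\bigr), \qquad x_1,\dots,x_k \in \{0,1\}^n. \]

Monotonicity of g is what keeps f_{g,k} inside NP when f is the characteristic function of an NP language: to certify f_{g,k}(x_1, …, x_k) = 1 it suffices to exhibit NP witnesses for enough coordinates with f(x_i) = 1 to force g to output 1.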

7 Balanced Problems This proof only works for balanced decision problems, i.e., problems for which a random instance of a given length n has probability 1/2 of being a YES instance and probability 1/2 of being a NO instance.
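Formally, "balanced" means (my notation): \Pr_{x \in \{0,1\}^n}[x \in L] = 1/2 for every input length n.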

8 Some Improvement For balanced problems, Dr. O'Donnell proved amplification of hardness from 1 − 1/poly(n) to 1/2 + 1/n^{1/2−ε}. For general problems, he proved amplification of hardness from 1 − 1/poly(n) to 1/2 + 1/n^{1/3−ε}. For balanced problems, Healy et al. improved the amplification range from 1 − 1/poly(n) to 1/2 + 1/poly(n).

9 Limitation of Previous Work All of the above proofs are based on balanced problems, and they are non-uniform: they are stated for circuits rather than for algorithms.

10 Dr. Trevisan's Previous Contribution In FOCS 03, Dr. Trevisan proved: if every language L in NP has a probabilistic poly-time algorithm that is correct with probability ≥ 3/4 + 1/(log n)^α on inputs of length n, then this range can be extended to 1 − 1/p(n) for every polynomial p. In other words, he improved the amplification range from 1 − 1/(log n)^α to 3/4 + 1/(log n)^α, where α > 0 is a constant.

11 Major Contribution of This Paper A uniform analysis of amplification of hardness using the majority function. The main lemma: if every language L in NP has a probabilistic poly-time algorithm that is correct with probability ≥ 1/2 + 1/(log n)^α on inputs of length n, where α > 0 is a constant, then for every language in NP and every polynomial p there is a probabilistic poly-time algorithm that succeeds with probability 1 − 1/p(n) on inputs of length n.
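Written out with the uniform success measure of slide 2 (my notation), the statement is the implication

    \[
      \Bigl(\forall L \in \mathrm{NP}\ \exists\ \text{ppt } A:\ \Pr_{x,\,r}[A(x;r)=L(x)] \ge \tfrac12 + \tfrac{1}{(\log n)^{\alpha}}\Bigr)
      \;\Longrightarrow\;
      \Bigl(\forall L' \in \mathrm{NP}\ \forall\ \text{polynomial } p\ \exists\ \text{ppt } A':\ \Pr_{x,\,r}[A'(x;r)=L'(x)] \ge 1 - \tfrac{1}{p(n)}\Bigr),
    \]

where x ranges over inputs of length n and r over the algorithm's coin tosses.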

12 Majority Function Let L ∈ NP with inputs of length n, and let f: {0,1}^n → {0,1} be its characteristic function. For an odd integer k, we define f^k(x_1, …, x_k) = Maj(f(x_1), …, f(x_k)).
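A minimal Python sketch of this majority construction (my own illustration, not code from the paper):

    def majority_amplify(f, k):
        # Given f: {0,1}^n -> {0,1} and an odd k, return
        # f_maj_k((x_1, ..., x_k)) = Maj(f(x_1), ..., f(x_k)).
        assert k % 2 == 1, "k must be odd so the majority is always defined"
        def f_maj_k(xs):                  # xs is a tuple of k n-bit inputs
            assert len(xs) == k
            ones = sum(f(x) for x in xs)  # number of copies evaluating to 1
            return 1 if ones > k // 2 else 0
        return f_maj_k

Unlike XOR, the majority function is monotone, which is what keeps the combined problem in NP (see the remark after slide 6).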

13 Proof Main Idea I If for every problem in NP there is an efficient algorithm that solves the problem on a 1 − ε fraction of inputs of length n, then for every problem in NP there is an efficient algorithm, using a small amount of non-uniformity, that solves the problem on a 1 − 1/p(n) fraction of inputs of length n.

14 Proof Main Idea II Based on the balanced function f, a new function F is introduced. If an efficient algorithm can solve F on a 1 − ε fraction of inputs, then there is an efficient algorithm that can solve f on a correspondingly large fraction of inputs (ε is a positive constant). To simplify the proof, t is set to 1/5 in this paper.

15 Proof Main Idea III For every search problem in NP and every polynomial p, there is an efficient algorithm, using a small amount of non-uniformity, that solves the search problem on a 1 − 1/p(n) fraction of inputs of length n. The final step eliminates the non-uniformity.

16 Detailed Proof of II We want to prove: – L is in NP and L' has inputs of length l(n), where l is a polynomially bounded computable function – If a circuit C' solves L' on an α ≥ 1 − ε fraction of inputs of length l(n) – Then another circuit C solves L on a correspondingly large fraction of inputs of length n

17 Parameter Settings Set t = 1/5 and α > δ. We then further define the sequences δ_i and k_i used below.

18 Proof Solving the recurrences for δ_i and k_i, let r be the largest index such that δ_r < α. From this we obtain the required bounds, and we can also compute the input length of f_r.

19 Proof cont'd Using the majority function g, and recursively applying a lemma, we obtain a circuit C_0 that agrees with f on at least the required fraction of inputs.

20 Proof cont'd Now we construct another circuit C_r with input length nK; C_r agrees with f_r on a sufficiently large fraction of inputs. From it we construct a circuit C that agrees with f on a large enough fraction of inputs, and we conclude that C agrees with L on at least the claimed fraction of inputs.

21 Conclusion The proof converts a small fraction of errors into a much larger fraction of errors. It generalizes the amplification of hardness problem in NP. The input length is an important factor in determining the success probability.

22 Acknowledgement Thanks to Fenghui Zhang and Jie Meng for their great help.

23 Homework Mathematically describe one of the major contributions of this paper. What is the improvement over previous work? Due on May 3, 2006.

