On Uniform Amplification of Hardness in NP (Luca Trevisan, STOC '05) Paper review presented by Hai Xu

Uniform Algorithm A uniform algorithm is a single probabilistic algorithm rather than a family of circuits; its success probability is averaged both over random inputs and over the internal coin tosses of the algorithm.
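A minimal formalization of this averaging (the notation is mine, not the slide's): the success probability of a probabilistic algorithm A for a language L is measured jointly over a random input and over A's coins.

```latex
% Success probability of a uniform probabilistic algorithm A for L,
% averaged over a uniformly random input x of length n and over A's coin tosses r.
\Pr_{x \sim \{0,1\}^n,\; r}\bigl[\, A(x;r) = L(x) \,\bigr] \;\ge\; 1 - \delta(n)
```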

Amplification of Hardness Starting from a problem that is known (or assumed) to be hard on average in a weak sense, we define a related new problem that is hard on average in the strongest possible sense.
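As an illustration only (not from the paper), the following Python sketch shows the generic pattern behind such amplification: a new function is built by evaluating the original function f on k independent inputs and combining the answers, e.g. with XOR (Yao's lemma, next slide) or with majority (this paper). All names here are illustrative.

```python
from functools import reduce
from operator import xor

def xor_amplify(f, k):
    """Return the XOR-amplified function: XOR of f on k independent inputs."""
    def f_xor(xs):                       # xs is a tuple of k inputs to f
        assert len(xs) == k
        return reduce(xor, (f(x) for x in xs))
    return f_xor

def majority_amplify(f, k):
    """Return the majority-amplified function: majority vote of f on k inputs (k odd)."""
    assert k % 2 == 1
    def f_maj(xs):
        assert len(xs) == k
        return int(sum(f(x) for x in xs) > k // 2)
    return f_maj

# Toy usage: f is the parity of a bit-string; the amplified function takes k inputs.
if __name__ == "__main__":
    f = lambda x: sum(x) % 2
    g = majority_amplify(f, 3)
    print(g(((0, 1), (1, 1), (1, 0))))   # majority of 1, 0, 1 -> prints 1
```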

Yao's XOR Lemma From a Boolean function f: {0,1}^n → {0,1}, we define a new function f^{⊕k}(x_1,…,x_k) = f(x_1) ⊕ … ⊕ f(x_k). Yao's XOR Lemma says that if every circuit of size ≤ S makes at least a δ fraction of errors in computing f(x) for a random x, then every circuit of size ≤ S·poly(δε/k) makes at least a 1/2 − ε fraction of errors in computing f^{⊕k}, where ε can be taken roughly as small as (1 − δ)^k.
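In LaTeX, the lemma as stated on this slide reads roughly as follows (exact constants and the size bound vary across formulations):

```latex
f^{\oplus k}(x_1,\dots,x_k) \;=\; f(x_1)\oplus\cdots\oplus f(x_k), \qquad x_i\in\{0,1\}^n .

\text{If every circuit of size } \le S \text{ errs on } \ge \delta \text{ of inputs of } f,
\text{ then every circuit of size } \le S\cdot\mathrm{poly}(\delta\varepsilon/k)
\text{ errs on } \ge \tfrac12-\varepsilon \text{ of inputs of } f^{\oplus k},
\text{ where } \varepsilon \approx (1-\delta)^k .
```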

Amplification of Hardness in NP We want to prove: if L is a language in NP such that every efficient algorithm (or small family of circuits) errs on at least a 1/poly(n) fraction of inputs of length n, then there is a language L' also in NP such that every efficient algorithm (or small circuit) errs on a 1/2 − 1/n^{Ω(1)} fraction of inputs. Yao's XOR Lemma cannot prove this directly, because the XOR of NP predicates is not known to stay in NP (and the lemma's proof is non-uniform).
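The target statement in LaTeX (err_n(A, L) denotes the fraction of length-n inputs on which A errs; this notation is mine, not the slide's):

```latex
\forall\, \text{efficient } A:\ \mathrm{err}_n(A,L) \ge \frac{1}{\mathrm{poly}(n)}
\quad\Longrightarrow\quad
\exists\, L'\in\mathrm{NP}\ \ \forall\, \text{efficient } A':\ \mathrm{err}_n(A',L') \ge \frac12 - \frac{1}{n^{\Omega(1)}}
```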

Previous Work O'Donnell proved that for every balanced Boolean function f: {0,1}^n → {0,1} and parameters ε, δ > 0, there is an integer k = poly(1/ε, 1/δ) and a monotone function g: {0,1}^k → {0,1} such that if every circuit of size S makes at least a δ fraction of errors in computing f(x) given x, then every circuit of size S·poly(ε, δ) makes at least a 1/2 − ε fraction of errors in computing f_{g,k}(x_1,…,x_k) = g(f(x_1),…,f(x_k)) given (x_1,…,x_k). Equivalently, in contrapositive form: if some circuit of size S makes at most a 1/2 − ε fraction of errors in computing f_{g,k}, then there is a circuit of size poly(1/ε, 1/δ)·S that makes at most a δ fraction of errors in computing f(x).
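The combined function in LaTeX; the monotone combiner g is what keeps the new problem inside NP (a monotone combination of NP predicates is still an NP predicate):

```latex
f_{g,k}(x_1,\dots,x_k) \;=\; g\bigl(f(x_1),\dots,f(x_k)\bigr),
\qquad g:\{0,1\}^k\to\{0,1\}\ \text{monotone},\quad k=\mathrm{poly}(1/\varepsilon,1/\delta).
```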

Balanced Problems This proof works only for balanced decision problems, i.e., problems for which a uniformly random instance of length n is a YES instance with probability 1/2 and a NO instance with probability 1/2.
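In symbols (a standard formalization, not spelled out on the slide):

```latex
\Pr_{x \sim \{0,1\}^n}\bigl[\, x \in L \,\bigr] \;=\; \tfrac12 \qquad \text{for every input length } n .
```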

Some Improvement For balanced problems, O'Donnell proved amplification of hardness from 1 − 1/poly(n) to 1/2 + 1/n^{1/2 − ε}. For general problems, he proved amplification from 1 − 1/poly(n) to 1/2 + 1/n^{1/3 − ε}. For balanced problems, Healy, Vadhan, and Viola improved the amplification range from 1 − 1/poly(n) to 1/2 + 1/poly(n).

Limitation of Previous Work All of the above proofs rely on the problem being balanced.

Trevisan's Previous Contribution In FOCS '03, Trevisan proved: if for every language L in NP there is a probabilistic polynomial-time algorithm that succeeds with probability ≥ 3/4 + 1/(log n)^α on inputs of length n, then for every language in NP and every polynomial p there is a probabilistic polynomial-time algorithm that succeeds with probability ≥ 1 − 1/p(n) on inputs of length n, where α > 0 is a constant. In other words, he amplified average-case hardness starting from algorithms that succeed on only a 3/4 + 1/(log n)^α fraction of inputs.

Major Contribution of This Paper A uniform analysis of amplification of hardness, using the majority function. Main theorem: if for every language L in NP there is a probabilistic polynomial-time algorithm that succeeds with probability ≥ 1/2 + 1/(log n)^α on inputs of length n, then for every language in NP and every polynomial p there is a probabilistic polynomial-time algorithm that succeeds with probability ≥ 1 − 1/p(n) on inputs of length n, where α > 0 is a constant.
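The FOCS '03 and STOC '05 theorems differ only in the hypothesis; in LaTeX (success probability taken over a random length-n input and the algorithm's coins, "p.p.t." = probabilistic polynomial time):

```latex
% FOCS '03:
\Bigl(\forall L\in\mathrm{NP}\ \exists\ \text{p.p.t. } A:\ \Pr_{x,r}[A(x)=L(x)] \ge \tfrac34+\tfrac{1}{(\log n)^{\alpha}}\Bigr)
\ \Longrightarrow\
\Bigl(\forall L\in\mathrm{NP}\ \forall\ \text{poly } p\ \exists\ \text{p.p.t. } A':\ \Pr_{x,r}[A'(x)=L(x)] \ge 1-\tfrac{1}{p(n)}\Bigr)

% This paper (STOC '05): the same conclusion from the weaker hypothesis
\Pr_{x,r}[A(x)=L(x)] \ \ge\ \tfrac12+\tfrac{1}{(\log n)^{\alpha}}
```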

Majority Function Let L ∈ NP and let f: {0,1}^n → {0,1} be its characteristic function on inputs of length n. For an odd integer k, we define:
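The slide's formula is not reproduced in this transcript; based on its description (majority of f over k independent inputs, k odd), the intended definition is presumably:

```latex
f^{(k)}(x_1,\dots,x_k) \;=\; \mathrm{Maj}\bigl(f(x_1),\dots,f(x_k)\bigr),
\qquad x_1,\dots,x_k\in\{0,1\}^n,\ \ k\ \text{odd}.
```

Since Maj is monotone, this combined function defines a problem that is still in NP.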

Proof Main Idea I If for every problem in NP there is an efficient algorithm that solves the problem on a 1 − ε fraction of inputs of length n, then for every problem in NP there is an efficient algorithm, using a small amount of non-uniformity (advice), that solves the problem on a 1 − 1/p(n) fraction of inputs of length n.

Proof Main Idea II Based on the balanced function f, a new function F is introduced. If an efficient algorithm can solve F on a 1 − ε fraction of inputs, then there is an efficient algorithm that solves f on a correspondingly large fraction of inputs (ε is a positive constant). To simplify the proof, the parameter t is set to 1/5 in this paper.

Proof Main Idea III For every search problem in NP and every polynomial p, there is an efficient algorithm, using a small amount of non-uniformity, that solves the search problem on a 1 − 1/p(n) fraction of inputs of length n. The final step eliminates the non-uniformity.

Detailed Proof of II We want to prove: – L is in NP, and the input length of L' is a polynomially bounded, computable function l(n) – If a circuit C' solves L' on an α ≥ 1 − ε fraction of inputs of length l(n) – Then another circuit C solves L on a corresponding fraction of inputs of length n

Parameter Settings t = 1/5 and α > δ. Then we further define:

Proof Solving for δ_i and k_i recursively, we obtain: Let r be the largest index such that δ_r < α. Then we can bound δ_r, and we also know that the input length of f_r is:

Proof cont'd Based on the majority function, we can let: where g() is the majority function. By recursively applying a lemma, we obtain a circuit C_0 that agrees with f on at least the stated fraction of inputs.

Proof cont'd Now we create another circuit C_r with input length nK. Then C_r agrees with the target function on the required fraction of inputs. Furthermore, we can construct a circuit C that agrees with f on at least the required fraction of inputs. Finally, we conclude that C agrees with L on at least the required fraction of inputs.

Conclusion The proof converts hardness on a small fraction of inputs into hardness on a fraction of inputs close to 1/2. It generalizes hardness amplification in NP beyond balanced problems, with a uniform analysis. The input length is an important factor in determining the success probability.

Acknowledgement Thanks to Fenghui Zhang and Jie Meng for their great help.

Homework Mathematically describe one of the major contributions of this paper. What is the improvement over previous work? Due on May 3, 2006.