Hardness Amplification within NP against Deterministic Algorithms Parikshit Gopalan U Washington & MSR-SVC Venkatesan Guruswami U Washington & IAS

Why Hardness Amplification? Goal: show that there are hard problems in NP. Lower bounds are out of reach, and cryptography and derandomization require average-case hardness. Revised goal: relate various kinds of hardness assumptions. Hardness amplification: start with mild hardness and amplify it.

Hardness Amplification Generic amplification theorem: if there are problems in a class A that are mildly hard for algorithms in Z, then there are problems in A that are very hard for Z. Here A is a class such as NP, EXP, or PSPACE, and Z is a class such as P/poly, BPP, or P.

PSPACE versus P/poly, BPP A long line of work. Theorem: if there are problems in PSPACE that are worst-case hard for P/poly (resp. BPP), then there are problems that are (½ + ε)-hard for P/poly (resp. BPP). [Yao, Nisan-Wigderson, Babai-Fortnow-Nisan-Wigderson, Impagliazzo, Impagliazzo-Wigderson 1, Impagliazzo-Wigderson 2, Sudan-Trevisan-Vadhan, Trevisan-Vadhan, Impagliazzo-Jaiswal-Kabanets, Impagliazzo-Jaiswal-Kabanets-Wigderson]

NP versus P/poly Theorem [O'Donnell]: if there are problems in NP that are (1 - δ)-hard for P/poly, then there are problems that are (½ + ε)-hard. Starts from an average-case assumption. Further improvements by Healy-Vadhan-Viola.

NP versus BPP Theorem [Trevisan '03]: if there are problems in NP that are (1 - δ)-hard for BPP, then there are problems that are (¾ + ε)-hard.

NP versus BPP Theorem [Trevisan '05]: if there are problems in NP that are (1 - δ)-hard for BPP, then there are problems that are (½ + ε)-hard. Buresh-Oppenheim-Kabanets-Santhanam give an alternate proof via monotone codes. Optimal up to ε.

Our Results Amplification against P. Theorem 1: if there is a problem in NP that is (1 - δ)-hard for P, then there is a problem that is (¾ + ε)-hard. Theorem 2: if there is a problem in PSPACE that is (1 - δ)-hard for P, then there is a problem that is (¾ + ε)-hard. Compare Trevisan: from (1 - δ)-hardness to (7/8 + ε) for PSPACE; and Goldreich-Wigderson: unconditional hardness for EXP against P. Here δ = 1/n^100 and ε = 1/(log n)^100.

Outline of This Talk: 1. Amplification via Decoding. 2. Deterministic Local Decoding. 3. Amplification within NP.

Amplification via Decoding [Trevisan, Sudan-Trevisan-Vadhan] Encode a mildly hard f into a wildly hard g; decoding an approximation of g recovers f.

Amplification via Decoding. Case Study: PSPACE versus BPP Encoding: f's truth table has size 2^n, g's truth table has size 2^{n^2}, and the encoding is computable in space n^100, so g stays in PSPACE.

Amplification via Decoding. Case Study: PSPACE versus BPP Decoding is done in BPP by a randomized local decoder, using list-decoding beyond ¼ error, to recover f from an approximation to g.

Amplification via Decoding. Case Study: NP versus BPP Encoding: g is a monotone function M of f, where M is computable in NTIME(n^100), so g stays in NP; M needs to be noise-sensitive.

Amplification via Decoding. Case Study: NP versus BPP Decoding is done in BPP by a randomized local decoder. Monotone codes are bad codes, so from an approximation to g we can only recover an approximation to f.

Outline of This Talk: 1. Amplification via Decoding. 2. Deterministic Local Decoding. 3. Amplification within NP.

Deterministic Amplification Decoding must now be done in P: is deterministic local decoding possible?

Deterministic Amplification For deterministic local decoding in P: an adversary can force an error on any fixed bit, we need a near-linear-length encoding (from truth tables of size 2^n to size 2^n · n^100), and we need monotone codes for NP.

Deterministic Local Decoding … up to the unique decoding radius. Deterministic local decoding to agreement 1 - δ with f from agreement ¾ + ε with g, and a monotone code construction with similar parameters. Main tool: ABNNR codes + GMD decoding [Guruswami-Indyk, Akavia-Venkatesan]. Open problem: go beyond unique decoding.

The ABNNR Construction. Expander graph with 2^n vertices and degree n^100.

The ABNNR Construction Start with a binary code of small distance; aggregating its bits along the expander (2^n vertices, degree n^100) gives a code of large distance over a large alphabet.
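A minimal Python sketch of this aggregation step, assuming the expander is given as an adjacency list (the names abnnr_encode and neighbors are illustrative, not from the talk). The symbol at each right vertex is the tuple of bits held by its left neighbors, so a binary code of small distance becomes a code of large distance over the alphabet {0,1}^d.

# Sketch of ABNNR aggregation (illustrative names): bits[i] is the i-th bit
# of a binary codeword of small distance; neighbors[j] lists the left
# vertices adjacent to right vertex j in a degree-d bipartite expander.
def abnnr_encode(bits, neighbors):
    # The symbol at right vertex j is the tuple of its neighbors' bits.
    return [tuple(bits[i] for i in nbrs) for nbrs in neighbors]

# Toy example: a 3-regular bipartite graph with 4 left and 4 right vertices.
neighbors = [(0, 1, 2), (1, 2, 3), (2, 3, 0), (3, 0, 1)]
print(abnnr_encode([1, 0, 1, 1], neighbors))  # [(1, 0, 1), (0, 1, 1), (1, 1, 1), (1, 1, 0)]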

Concatenated ABNNR Codes Concatenating with an inner code of distance ½ gives a binary code of distance close to ½. [Guruswami-Indyk]: decodes from ¼ error, but not locally. [Trevisan]: decodes from 1/8 error, locally.
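A sketch of the concatenation step under the same illustrative conventions; inner_encode stands for an arbitrary binary inner code of distance ½ and is an assumption of this sketch, not a construction from the talk.

# Sketch of concatenation (illustrative names): each large-alphabet ABNNR
# symbol is re-encoded with a binary inner code of distance 1/2, giving a
# binary code whose overall distance is close to 1/2.
def concatenate(abnnr_symbols, inner_encode):
    out = []
    for sym in abnnr_symbols:
        out.extend(inner_encode(sym))  # inner_encode maps one symbol to a bit string
    return out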

Decoding ABNNR Codes

Decoding ABNNR Codes First decode the inner codes: this works when the error rate is below ¼ and fails when it exceeds ¼.

Decoding ABNNR Codes Then take a majority vote on the left-hand side. [Trevisan]: this corrects a 1/8 fraction of errors.
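A Python sketch of this two-step decoder (illustrative names): decoded_symbols[j] is the inner decoder's guess at right vertex j, indexed consistently with neighbors[j].

# Sketch of the majority-vote decoder for ABNNR codes (illustrative names).
# Each left bit collects one vote from every right neighbor and takes the
# majority of those votes.
from collections import defaultdict

def majority_decode(decoded_symbols, neighbors, num_left):
    votes = defaultdict(list)
    for j, nbrs in enumerate(neighbors):
        for pos, i in enumerate(nbrs):
            votes[i].append(decoded_symbols[j][pos])
    return [int(2 * sum(votes[i]) >= len(votes[i])) for i in range(num_left)]

To decode a single left bit locally, one only runs the inner decoder on the right vertices adjacent to that bit, which is what makes the procedure local.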

GMD decoding [Forney '67] Each inner decoding comes with a confidence c ∈ [0,1]. If the inner decoding succeeds, its error is in [0, ¼]: with 0 error the confidence is 1, with ¼ error the confidence is 0, i.e. c = 1 - 4·(error fraction). The inner decoder could return a wrong answer with high confidence, but that requires error close to ½.

GMD Decoding for ABNNR Codes Classical GMD decoding: pick a confidence threshold, erase the low-confidence symbols, and decode; this is not local. Our approach: weighted majority. Theorem: corrects a ¼ fraction of errors locally.

GMD Decoding for ABNNR Codes Theorem: GMD decoding corrects a ¼ fraction of errors. Proof sketch: (1) globally, good nodes carry more confidence than bad nodes; (2) locally, this holds for most neighborhoods of left-hand-side vertices. The proof is similar to the expander mixing lemma.
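A sketch of the weighted-majority rule for a single left bit (illustrative names): confidences[j] is the confidence c = 1 - 4·(error fraction) reported for right vertex j, and incident[i] lists the (right vertex, position) pairs at which bit i appears.

# Sketch of local GMD-style decoding of one left bit (illustrative names).
# Votes from high-confidence inner decodings count for more, so a few badly
# corrupted right vertices cannot outvote many lightly corrupted ones.
def gmd_decode_bit(i, incident, decoded_symbols, confidences):
    weight = 0.0
    for j, pos in incident[i]:
        vote = 1 if decoded_symbols[j][pos] == 1 else -1
        weight += confidences[j] * vote
    return 1 if weight >= 0 else 0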

Outline of This Talk: 1. Amplification via Decoding. 2. Deterministic Local Decoding. 3. Amplification within NP. Finding an inner monotone code [BOKS]. Implementing GMD decoding.

The BOKS construction T(x): sample an r-tuple of coordinates from x and apply the Tribes function, mapping a string x of length k to a string T(x) of length k^r. If x, y are balanced and δ(x, y) > δ, then δ(T(x), T(y)) ≈ ½. If x and y are very close, so are T(x) and T(y). Decoding: brute force.
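A sketch of one output bit of such a Tribes-based map; the tuple sampling and the block size below are placeholders, not the explicit choices made in [BOKS]. Tribes is an OR of ANDs over blocks, so the map is monotone, and it is the noise-sensitive ingredient the amplification needs.

# Sketch of one output bit of a Tribes-based monotone map (illustrative;
# the random tuple and block size are placeholders, not the [BOKS] choices).
import random

def tribes(bits, block_size):
    # OR of ANDs over consecutive blocks.
    blocks = [bits[i:i + block_size] for i in range(0, len(bits), block_size)]
    return int(any(all(block) for block in blocks))

def output_bit(x, r, block_size, rng=random):
    sample = [x[rng.randrange(len(x))] for _ in range(r)]  # an r-tuple of coordinates of x
    return tribes(sample, block_size)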

GMD Decoding for Monotone Codes Start with a balanced f and apply the concatenated ABNNR construction. The inner decoder returns the closest balanced message; then apply GMD decoding. Theorem: the decoder corrects a ¼ fraction of errors approximately. The analysis becomes harder.

GMD Decoding for Monotone Codes The inner decoder finds the closest balanced message, so even with 0 errors it need not return the original message. Good nodes have few errors, bad nodes have many. Theorem: the decoder corrects a ¼ fraction of errors approximately.
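A sketch of the brute-force inner decoder (illustrative names; encode stands for the inner monotone code on messages of small length k, assumed given).

# Sketch of the brute-force inner decoder (illustrative names): try every
# balanced message of length k and return the one whose encoding is closest
# in Hamming distance to the received word.  Feasible only because k is small.
from itertools import combinations

def closest_balanced_message(received, encode, k):
    best, best_dist = None, None
    for ones in combinations(range(k), k // 2):
        msg = [1 if i in ones else 0 for i in range(k)]
        dist = sum(a != b for a, b in zip(encode(msg), received))
        if best_dist is None or dist < best_dist:
            best, best_dist = msg, dist
    return best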

Beyond Unique Decoding… A deterministic local list-decoder would be a set L of machines such that, for any received word, every nearby codeword is computed by some M ∈ L. Is this possible?