

1
Hardness Amplification within NP against Deterministic Algorithms
Parikshit Gopalan (U Washington & MSR-SVC)
Venkatesan Guruswami (U Washington & IAS)

2
Why Hardness Amplification
Goal: Show there are hard problems in NP. Lower bounds are out of reach, and cryptography and derandomization require average-case hardness.
Revised goal: Relate various kinds of hardness assumptions.
Hardness amplification: Start with mild hardness and amplify it.

3
Hardness Amplification
Generic amplification theorem: If there are problems in class A that are mildly hard for algorithms in Z, then there are problems in A that are very hard for Z.
Typical choices: A ∈ {NP, EXP, PSPACE}; Z ∈ {P/poly, BPP, P}.

4
PSPACE versus P/poly, BPP
Long line of work.
Theorem: If there are problems in PSPACE that are worst-case hard for P/poly (resp. BPP), then there are problems that are ½ + ε hard for P/poly (resp. BPP).
Yao, Nisan-Wigderson, Babai-Fortnow-Nisan-Wigderson, Impagliazzo, Impagliazzo-Wigderson 1, Impagliazzo-Wigderson 2, Sudan-Trevisan-Vadhan, Trevisan-Vadhan, Impagliazzo-Jaiswal-Kabanets, Impagliazzo-Jaiswal-Kabanets-Wigderson.

5
NP versus P/poly
O'Donnell. Theorem: If there are problems in NP that are 1 - δ hard for P/poly, then there are problems that are ½ + ε hard.
Starts from an average-case assumption. See also Healy-Vadhan-Viola.

6
NP versus BPP
Trevisan '03. Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ¾ + ε hard.

7
NP versus BPP
Trevisan '05. Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ½ + ε hard.
Buresh-Oppenheim-Kabanets-Santhanam: alternate proof via monotone codes.
Optimal up to ε.

8
Our Results
Amplification against P.
Theorem 1: If there is a problem in NP that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard.
Theorem 2: If there is a problem in PSPACE that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard.
Here δ = 1/n^100 and ε = 1/(log n)^100.
Trevisan: 1 - δ hardness to 7/8 + ε for PSPACE.
Goldreich-Wigderson: Unconditional hardness for EXP against P.

9
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.

10
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.

11
Amplification via Decoding
Trevisan, Sudan-Trevisan-Vadhan.
Diagram: the truth table of f (mildly hard) is encoded into the truth table of g (wildly hard); from a corrupted table (an approximation to g), decoding recovers f.

12
Amplification via Decoding. Case Study: PSPACE versus BPP.
Diagram: f (mildly hard) is encoded into g (wildly hard). f's table has size 2^n; g's table has size 2^(n^2). The encoding is computable in space n^100, so g stays in PSPACE.

13
Amplification via Decoding. Case Study: PSPACE versus BPP.
Diagram: a corrupted table (an approximation to g) is decoded back to f by a randomized local decoder (BPP), list-decoding beyond ¼ error.

14
Amplification via Decoding. Case Study: NP versus BPP.
Diagram: f (mildly hard) is encoded into g (wildly hard), where g is a monotone function M of f. M is computable in NTIME(n^100), so g stays in NP. M needs to be noise-sensitive.

15
Amplification via Decoding. Case Study: NP versus BPP.
Diagram: a randomized local decoder (BPP) takes an approximation to g to an approximation to f. Monotone codes are bad codes, so we can only approximate f.

16
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.

17
Deterministic Amplification.
Diagram: an approximation to g is decoded back to f, now by a decoder in P. Is deterministic local decoding possible?

18
Deterministic Amplification.
Deterministic local decoding? An adversary can force an error on any fixed bit, so exact recovery is impossible. We need a near-linear length encoding, taking a table of size 2^n to one of size 2^n · n^100, and we need monotone codes for NP.

19
Deterministic Local Decoding … up to the unique decoding radius.
Deterministic local decoding of f to accuracy 1 - δ from ¾ + ε agreement with g, and a monotone code construction with similar parameters.
Main tool: ABNNR codes + GMD decoding [Guruswami-Indyk, Akavia-Venkatesan].
Open problem: go beyond unique decoding.

20
The ABNNR Construction.
Expander graph with 2^n vertices, degree n^100.

21
The ABNNR Construction.
Diagram: the message bits are placed on the left vertices of the expander (2^n vertices, degree n^100).

22
The ABNNR Construction.
Diagram: each right vertex collects the bits of its neighbors. Start with a binary code of small distance; the construction yields a code of large distance over a large alphabet. Expander graph: 2^n vertices, degree n^100.
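As a toy illustration of this step (not the construction from the talk, which uses an explicit expander of degree n^100 on 2^n vertices), the ABNNR encoding can be sketched in Python; the random bipartite graph below is a hypothetical stand-in for the expander:

```python
import random

def abnnr_encode(msg, neighbors):
    """ABNNR encoding: each right vertex of the bipartite graph outputs
    the tuple of bits sitting on its left neighbors, turning a binary
    string into a word over a larger alphabet (tuples of d bits)."""
    return [tuple(msg[u] for u in nbrs) for nbrs in neighbors]

# Toy stand-in for the expander: a random bipartite graph where every
# right vertex has degree d (a real instantiation needs expansion).
rng = random.Random(0)
n, d = 16, 3
neighbors = [tuple(rng.randrange(n) for _ in range(d)) for _ in range(n)]

msg = [rng.randrange(2) for _ in range(n)]
codeword = abnnr_encode(msg, neighbors)  # n symbols, each a d-bit tuple
```

Because a bit of the message is copied to n^100 right vertices in the real construction, distant messages disagree on almost all right symbols, which is the distance amplification the slide describes.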

23
Concatenated ABNNR Codes.
Diagram: each large-alphabet symbol is re-encoded by an inner binary code of distance ½, giving a binary code of distance ½.
[Guruswami-Indyk]: corrects ¼ errors, not locally. [Trevisan]: corrects 1/8 errors, locally.

24
Decoding ABNNR Codes.
Diagram: a received word with errors at some right vertices.

25
Decoding ABNNR Codes.
First decode the inner codes at each right vertex. This works if the local error rate is < ¼ and may fail if it is > ¼.

26
Decoding ABNNR Codes.
Then take a majority vote on the LHS. [Trevisan]: this corrects a 1/8 fraction of errors.
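The left-side majority vote can be sketched as follows, assuming the inner decoding has already produced one claimed d-bit symbol per right vertex (the cyclic graph and all parameters here are illustrative, not the talk's expander):

```python
def majority_vote_decode(received, neighbors, n):
    """Left-side majority vote: each right vertex claims values for the
    bits of its left neighbors; each left bit is set by majority vote
    over all the claims it receives."""
    votes = [[] for _ in range(n)]
    for symbol, nbrs in zip(received, neighbors):
        for bit, u in zip(symbol, nbrs):
            votes[u].append(bit)
    return [1 if sum(v) * 2 > len(v) else 0 for v in votes]

# Toy setup: a cyclic degree-3 graph, so every left vertex gets 3 votes.
n = 12
neighbors = [(j % n, (j + 1) % n, (j + 2) % n) for j in range(n)]
msg = [j % 2 for j in range(n)]
codeword = [tuple(msg[u] for u in nbrs) for nbrs in neighbors]

# Corrupt one right symbol completely: each affected left bit still has
# 2 of 3 correct votes, so the majority vote recovers the message.
corrupted = list(codeword)
corrupted[5] = tuple(1 - b for b in codeword[5])
assert majority_vote_decode(corrupted, neighbors, n) == msg
```

The decoder is local in the sense the slides need: recovering one left bit only touches the right neighbors of that vertex.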

27
GMD decoding [Forney67]
Each inner decoding also returns a confidence c ∈ [0,1]. If decoding succeeds, the error rate η is in [0, ¼]; at zero error the confidence is 1, at ¼ error it is 0: c = 1 – 4η.
The inner decoder could return a wrong answer with high confidence… but this requires an error rate close to ½.
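Under the linear rule pinned down by those two endpoints (confidence 1 at zero error, confidence 0 at the ¼ unique-decoding radius), the confidence can be sketched as:

```python
def gmd_confidence(err_frac):
    """Forney-style confidence c = 1 - 4*err_frac for an inner code with
    unique-decoding radius 1/4, clipped to the interval [0, 1]."""
    return max(0.0, min(1.0, 1.0 - 4.0 * err_frac))

assert gmd_confidence(0.0) == 1.0    # no errors seen: fully confident
assert gmd_confidence(0.25) == 0.0   # at the radius: no confidence
assert gmd_confidence(0.125) == 0.5  # halfway in between
```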

28
GMD Decoding for ABNNR Codes.
Diagram: each inner decoding returns a value together with a confidence c_i.
Classical GMD decoding: pick a threshold, erase low-confidence symbols, decode. This is non-local.
Our approach: weighted majority. Theorem: corrects a ¼ fraction of errors locally.

29
GMD Decoding for ABNNR Codes.
Theorem: GMD decoding corrects a ¼ fraction of errors.
Proof sketch: (1) Globally, good right vertices carry more total confidence than bad ones. (2) Locally, this holds for most neighborhoods of vertices on the LHS. The proof is similar to the Expander Mixing Lemma.
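The local weighted-majority rule can be sketched as follows: each left bit is decided by summing ±1 votes from its right neighbors, weighted by the inner decoder's confidences (a sketch of the voting rule only, not of the analysis):

```python
def weighted_majority_bit(claims):
    """claims: (bit, confidence) pairs from the right neighbors of one
    left vertex. Each claim votes +confidence for 1, -confidence for 0."""
    score = sum((1 if bit else -1) * conf for bit, conf in claims)
    return 1 if score > 0 else 0

# One low-confidence wrong claim is outvoted by two confident right ones.
assert weighted_majority_bit([(1, 0.9), (1, 0.8), (0, 0.2)]) == 1
# A single very confident claim can override two shaky disagreements.
assert weighted_majority_bit([(0, 1.0), (1, 0.1), (1, 0.1)]) == 0
```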

30
Outline of This Talk:
1. Amplification via Decoding.
2. Deterministic Local Decoding.
3. Amplification within NP.
   - Finding an inner monotone code [BOKS].
   - Implementing GMD decoding.

31
The BOKS construction.
Each output bit T(x) is computed by sampling an r-tuple of coordinates of x and applying the Tribes function, taking a length-k message to length k·r.
If x, y are balanced and Δ(x, y) > δ, then Δ(T(x), T(y)) ≈ ½. If x, y are very close, so are T(x), T(y). Decoding: brute force.
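The Tribes function itself is an OR of ANDs over disjoint blocks; the sketch below, with a hypothetical tribe width w and illustrative sampling parameters, also shows one output bit T(x):

```python
import random

def tribes(bits, w):
    """Tribes: OR of ANDs over consecutive disjoint blocks of width w.
    It is monotone and, for a suitable w, balanced and noise-sensitive."""
    return int(any(all(bits[i:i + w]) for i in range(0, len(bits), w)))

def sampled_tribes_bit(x, r, w, rng):
    """One output bit T(x): sample an r-tuple of coordinates of x and
    apply Tribes to it (r and w here are illustrative, not the talk's)."""
    return tribes([x[rng.randrange(len(x))] for _ in range(r)], w)

assert tribes([1, 1, 0, 0], 2) == 1   # first tribe is all 1s
assert tribes([1, 0, 0, 1], 2) == 0   # no tribe is all 1s
```

Since Tribes is monotone, flipping a 0 of x to a 1 can only move T(x) from 0 to 1, which is what makes the resulting code a monotone code.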

32
GMD Decoding for Monotone Codes.
Start with a balanced f and apply the concatenated ABNNR construction. The inner decoder returns the closest balanced message; then apply GMD decoding.
Theorem: the decoder corrects a ¼ fraction of errors approximately. The analysis becomes harder.

33
GMD Decoding for Monotone Codes.
The inner decoder finds the closest balanced message. Even with 0 error, the decoder need not return the original message. Still, good vertices have few errors and bad vertices have many.
Theorem: the decoder corrects a ¼ fraction of errors approximately.

34
Beyond Unique Decoding…
A deterministic local list-decoder would be a set L of machines such that, for any received word, every nearby codeword is computed by some M ∈ L. Is this possible?
