
1 Dana Moshkovitz, MIT. Joint work with Subhash Khot, NYU.

2 Best Approximations Known For 2CSP vs. 3CSP

   Predicate   2 variables         3 variables
   OR          0.931.. [FG95]      7/8
   XOR         0.878... [GW92]     1/2
   AND         0.859.. [FG95]      1/2

For the three-variable versions, any better approximation is NP-hard [Håstad97].
MAX-CUT: if there exists a cut containing a (1−δ) fraction of the edges, one can efficiently find a cut containing at least a (1−c√δ) fraction of the edges.

3 The Complexity of Approximating 3XOR [= 3LIN(GF(2))] [AroraSafra92, AroraLundMotwaniSudanSzegedy92, Raz94, BellareGoldreichSudan95, Håstad97, MRaz08]
[Figure: running time, from polynomial n^O(1) to exponential 2^Ω(n), plotted against the approximation factor, from 1 down to 1/2.]
Exponential hardness: time 2^{n^{1−o(1)}}. Sharp threshold: the drop occurs at approximation factor 1/2 + o(1). (Assuming it takes 2^{Ω(n)} time to solve 3SAT exactly on size-n inputs.)

4 Best Approximations Known For 2CSP vs. 3CSP

   Predicate   2 variables         3 variables
   OR          0.931.. [FG95]      7/8
   XOR         0.878... [GW92]     1/2
   AND         0.859.. [FG95]      1/2

For three variables, any better approximation is NP-hard [Håstad97]. For two variables, any better approximation is NP-hard assuming the Unique Games Conjecture [K02, KKMO04, R08].
We have a hard time showing that easy problems are hard!

5 Proving Hardness of Constraint Satisfaction Problems
The Bellare-Goldreich-Sudan-Håstad paradigm: start from 2CSP(GF(q)) and, via composition with a long-code/dictator code, obtain the desired problem.
[Khot02]: start with 2LIN(GF(q)). The Unique Games Conjecture: 2LIN(GF(q)) is extremely hard to approximate.

6 The Unique Games Conjecture (UGC) - The Hardness of 2LIN Over Large Alphabets
Unique Games Conjecture [Khot02, formulation by KKMO04]: For all ε, δ > 0 and sufficiently large q, given a system of linear equations of the form x_i − x_j = c (mod q) in which a ≥ 1−δ fraction of the equations can be satisfied, it is NP-hard to find an assignment that satisfies a ≥ ε fraction of the equations.
[Raghavendra08]: Assuming the Unique Games Conjecture, the best approximation for every CSP is obtained by rounding a natural SDP.
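To make the conjecture's objects concrete, here is a small illustrative sketch (not from the talk) of a 2LIN(mod q) instance of the kind the conjecture refers to, together with a routine that measures the fraction of satisfied equations; the instance and assignment below are made up.

```python
# Hypothetical 2LIN(mod q) instance, for illustration only: equations are
# (i, j, c) meaning x_i - x_j = c (mod q).  We measure the fraction satisfied.
def satisfied_fraction(equations, assignment, q):
    good = sum(1 for (i, j, c) in equations
               if (assignment[i] - assignment[j]) % q == c % q)
    return good / len(equations)

q = 7
equations = [(0, 1, 3), (1, 2, 2), (2, 0, 2), (0, 2, 4)]   # made-up instance
assignment = {0: 5, 1: 2, 2: 0}                            # made-up assignment
print(satisfied_fraction(equations, assignment, q))        # 0.75
```

The conjecture says that for such systems, distinguishing "a 1−δ fraction is satisfiable" from "no assignment satisfies an ε fraction" is NP-hard once q is large enough.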

7 Alphabet/Hardness Rules of Thumb: As the alphabet gets larger, the problem gets harder.
Important exception: the problem can (but does not necessarily) become easy if the alphabet is R, since linear/semidefinite programming can be used. Typically it is easy if fractional solutions are allowed, and hard if the real values encode large discrete alphabets (e.g., exact 3LIN(R) [GR07]).

8 Work with Subhash Khot, 2010. A natural optimization problem: find a balanced assignment for a system of real homogeneous linear equations, e.g.
   x_15 − x_231 + x_37 = 0
   x_1 − 2x_3 + x_89 = 0
   ...
The margin of an equation ax + by + cz = 0 under an assignment is |ax + by + cz|.
SDP-based algorithm: the best approximation known is obtained by rounding a natural SDP (when a 1−δ fraction of the equations is exactly satisfiable, it achieves margin O(√δ) for an Ω(1) fraction of the equations).
  – Unlike over GF(q), the same holds whether there are 2 or 3 variables per equation!
We show NP-hardness for 3 variables per equation: an approach to proving the Unique Games Conjecture.
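As a minimal sketch (mine, not the paper's code) of the objects on this slide: the margins of three-variable real homogeneous equations under an assignment, with "balanced" checked only crudely as mean ≈ 0 and variance ≈ 1; the paper's exact balance condition may be stated differently.

```python
# Margins of real homogeneous 3-variable equations under an assignment.
# "Balanced" is only sanity-checked here (mean ~ 0, variance ~ 1) as a
# stand-in for whatever exact balance condition the paper uses.
import numpy as np

def margins(equations, x):
    """equations: list of (coeffs, indices) encoding a*x_i + b*x_j + c*x_k = 0."""
    return np.array([abs(sum(a * x[i] for a, i in zip(coeffs, idx)))
                     for coeffs, idx in equations])

x = np.random.default_rng(0).standard_normal(300)        # some assignment
eqs = [((1.0, -1.0, 1.0), (15, 231, 37)),                # x_15 - x_231 + x_37 = 0
       ((1.0, -2.0, 1.0), (1, 3, 89))]                   # x_1 - 2*x_3 + x_89 = 0
print("roughly balanced:", abs(x.mean()) < 0.2 and abs(x.var() - 1) < 0.3)
print("margins:", margins(eqs, x))
```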

9 Main Theorem
Theorem [KM10]: For any δ > 0, it is NP-hard, given a system of linear equations over the reals in which a (1−δ) fraction can be exactly satisfied by a balanced assignment, to find an assignment in which a 0.99 fraction of the equations have margin at most 0.0001·√δ.
Tight: an efficient algorithm gives margin O(√δ) for an Ω(1) fraction of the equations.
Blow-up: the reduction from SAT has blow-up n → n^{poly(1/δ)}, matching the recent result of Arora, Barak, and Steurer.

10 Approach to Proving the Unique Games Conjecture
1. Show that approximate 3LIN over the reals is NP-hard.
2. Show that approximate 2LIN over the reals is NP-hard, possibly by reduction from 3LIN.
3. Deduce the Unique Games Conjecture using parallel repetition.

11 Approximation Algorithm
SDP: minimize E[|ax + by + cz|^2] subject to the assignment being balanced.
Analysis & rounding: Assume a (1−δ) fraction of the equations is exactly satisfiable. Then the SDP minimum is at most O(δ). Solve the SDP to get a vector assignment v_1, ..., v_n. Pick a random Gaussian ζ and set x_i = ⟨v_i, ζ⟩; this real assignment has a similar objective value. The margin is O(√δ) for a constant fraction of the equations.
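Below is a sketch of just the rounding step described above, assuming we are already handed the SDP vectors v_1, ..., v_n (solving the SDP itself is omitted); the function name and interface are mine.

```python
# Gaussian rounding of an SDP vector solution (the SDP solve is not shown).
# V is an (n, d) array whose i-th row is the SDP vector v_i.
import numpy as np

def gaussian_round(V, equations, rng=None):
    rng = rng or np.random.default_rng()
    zeta = rng.standard_normal(V.shape[1])      # random Gaussian direction
    x = V @ zeta                                # real assignment x_i = <v_i, zeta>
    margins = [abs(sum(a * x[i] for a, i in zip(coeffs, idx)))
               for coeffs, idx in equations]
    return x, margins
```

Since each x_i is the projection of v_i onto a random Gaussian direction, the expected squared margin of an equation equals its contribution to the SDP objective, which is how the O(√δ) margin for a constant fraction of equations falls out.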

12 Hardness of Approximately Solving Real Linear Equations with Three Variables Per Equation

13 Proving Hardness of Approximate Real Equations
1. Dictatorship test over the reals.
2. Adapting the paradigm.
The Bellare-Goldreich-Sudan-Håstad paradigm: 2CSP(GF(q)), composed with a long-code/dictator code (a dictatorship test for the desired problem), yields the desired problem.

14 Real Functions
Variables correspond to points in R^n. Assignments correspond to functions F: R^n → R. We consider the Gaussian distribution over R^n, where each coordinate has mean 0 and variance 1.
Fact: If x, y are independent standard Gaussians and p^2 + q^2 = 1, then px + qy is a standard Gaussian.
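A quick empirical check of the fact just quoted (my own, not part of the talk):

```python
# Empirical check: if x, y ~ N(0,1) independently and p^2 + q^2 = 1,
# then p*x + q*y is again a standard Gaussian.
import numpy as np

rng = np.random.default_rng(1)
p = 0.6
q = np.sqrt(1 - p ** 2)
z = p * rng.standard_normal(10 ** 6) + q * rng.standard_normal(10 ** 6)
print(z.mean(), z.var())    # both close to 0 and 1
```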

15 Dictator Testing
A tester is given a function F: R^n → R, with two possibilities for F in mind. The tester queries three positions x, y, z ∈ R^n and tests whether aF(x) + bF(y) + cF(z) = 0.
If F is a dictator, the equation is satisfied exactly with probability ≥ 1−δ.
If F is not approximated by a linear junta, there is a margin ≥ 0.001·√δ with probability ≥ 0.01.

16 Noise-Sensitivity Approach to Dictator Testing
Pick a Gaussian x ∈ R^n. Perturb x by re-sampling each coordinate independently with probability δ to obtain x'. Check F(x) = F(x').
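A sketch of this test in code (illustrative only; the function names and parameters are mine):

```python
# Noise-sensitivity test: perturb x by re-sampling each coordinate
# independently with probability delta, then compare F(x) and F(x').
import numpy as np

def resample_perturb(x, delta, rng):
    mask = rng.random(x.shape) < delta          # coordinates to re-sample
    return np.where(mask, rng.standard_normal(x.shape), x)

def noise_test_margin(F, n, delta, rng=None):
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(n)
    x_prime = resample_perturb(x, delta, rng)
    return abs(F(x) - F(x_prime))               # margin of the check F(x) = F(x')

# A dictator passes (margin exactly 0) unless its coordinate was re-sampled,
# which happens with probability delta.
print(noise_test_margin(lambda v: v[0], n=1000, delta=0.05))
```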

17 Problem with the Noise-Sensitivity Approach
For F a half-space:
  – With probability 1 − cδ, equality holds.
  – With probability cδ, the margin is constant.
We want: with probability ≥ 0.01, margin ≥ 0.001·√δ.

18 Linear Non-Juntas Are Noise-Sensitive
Observation: If F = Σ_{i∈I} a_i x_i with |I| ≫ 1/δ^2, then F(x) − F(x') is Gaussian with variance at least on the order of δ. Hence |F(x) − F(x')| ≥ 0.001·√δ with probability ≥ 0.01.
Basic idea: test linearity and noise sensitivity.
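A quick numeric illustration of the observation (mine, not the talk's): under coordinate re-sampling with probability δ, a dictator's difference is usually exactly zero, while a unit linear form spread over many coordinates has an approximately Gaussian difference with variance about 2δ.

```python
# Dictator vs. spread-out linear form under coordinate re-sampling.
import numpy as np

rng = np.random.default_rng(2)
n, delta, trials = 10_000, 0.01, 500
a = np.ones(n) / np.sqrt(n)                  # spread-out unit linear form

diffs_dict, diffs_spread = [], []
for _ in range(trials):
    x = rng.standard_normal(n)
    mask = rng.random(n) < delta
    x2 = np.where(mask, rng.standard_normal(n), x)
    diffs_dict.append(x[0] - x2[0])          # dictator F(v) = v_0
    diffs_spread.append(a @ (x - x2))        # spread linear F(v) = <a, v>

print("dictator: fraction of trials with zero difference =",
      np.mean(np.array(diffs_dict) == 0.0))
print("spread: std of difference =", np.std(diffs_spread),
      " (compare sqrt(2*delta) =", np.sqrt(2 * delta), ")")
```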

19 Hermite Analysis – Fourier Analysis for Real Functions
Fact: Any F: R^n → R with bounded |F|_2 can be written as F = Σ c_{i_1,...,i_n} · H_{i_1}(x_1) · ... · H_{i_n}(x_n), where H_d is a polynomial of degree d (the "Hermite polynomial").
Notation: F^{≤1} is the linear part of F, and F^{>1} is its non-linear part.
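A Monte-Carlo sketch (my own conventions, which the slide does not fix) of extracting the degree-≤1 part: with H_0 = 1, H_1(t) = t and the Hermite polynomials orthonormal under the Gaussian measure, F^{≤1}(x) = E[F] + Σ_i E[F(x)·x_i]·x_i.

```python
# Monte-Carlo estimate of the degree-<=1 Hermite part of F (the normalization
# convention here is an assumption on my part).
import numpy as np

def linear_part_coeffs(F, n, samples=100_000, rng=None):
    rng = rng or np.random.default_rng(3)
    X = rng.standard_normal((samples, n))
    vals = np.apply_along_axis(F, 1, X)
    c0 = vals.mean()                          # constant (degree-0) coefficient
    c = (X * vals[:, None]).mean(axis=0)      # c_i ~ E[F(x) * x_i]
    return c0, c

# Example: F(x) = 3*x_0 + x_1^2 has degree-<=1 part approximately 1 + 3*x_0.
c0, c = linear_part_coeffs(lambda v: 3 * v[0] + v[1] ** 2, n=3)
print(round(c0, 2), np.round(c, 2))
```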

20 Linearity Testing
Pick Gaussians x, y ∈ R^n and p ∈ [0,1]; set q = √(1 − p^2). Check F(px + qy) = pF(x) + qF(y). (The equation depends on three variables!)
Lemma: |F − F^{≤1}|_2^2 ≤ E[Margin^2].
Proof: via Hermite analysis.
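A sketch of one run of this three-query check (illustrative; the margin it returns is the quantity the lemma relates to |F − F^{≤1}|_2^2):

```python
# One run of the linearity test over Gaussian space: query x, y and px+qy
# with p^2 + q^2 = 1, and return the margin |F(px+qy) - p*F(x) - q*F(y)|.
import numpy as np

def linearity_test_margin(F, n, rng=None):
    rng = rng or np.random.default_rng()
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    p = rng.uniform(0.0, 1.0)
    q = np.sqrt(1 - p ** 2)
    return abs(F(p * x + q * y) - p * F(x) - q * F(y))

print(linearity_test_margin(lambda v: v[0] - 2 * v[2], n=10))  # linear F: ~0
print(linearity_test_margin(lambda v: v[0] ** 2, n=10))        # non-linear: typically > 0
```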

21 Decompose into linear and non-linear parts:
E_x |F(x) − F(x')|^2 = E_x |F^{≤1}(x) − F^{≤1}(x')|^2 + E_x |F^{>1}(x) − F^{>1}(x')|^2.
Non-junta ⇒ with probability 0.1, the margin is ≥ 0.1·√δ.
Linearity test ⇒ |F^{>1}(x)|_2 ≤ 0.001·√δ.
Cancellation problem: on average |F^{>1}(x) − F^{>1}(x')| ≤ 0.001·√δ, but it can be much larger precisely when |F^{≤1}(x) − F^{≤1}(x')| is large, and cancel |F^{≤1}(x) − F^{≤1}(x')|!

22 Cancellations Don't Arise!
Coordinate-wise perturbation x ~_c x': x' is obtained from x by re-sampling each coordinate with probability δ.
Random perturbation x ~_r x': x' is obtained from x as px + qy for p = 1 − δ, p^2 + q^2 = 1.
Cancellation: P_{x ~_c x'}[|F^{>1}(x) − F^{>1}(x')|^2 ≥ 0.1·δ] ≥ 0.1.
By linearity testing: P_{x ~_r x'}[|F^{>1}(x) − F^{>1}(x')|^2 ≤ 0.01·δ] ≥ 0.99.
We will show that these cannot co-exist!

23 Cancellations Don't Arise! (continued)
With the same coordinate-wise perturbation x ~_c x' and random perturbation x ~_r x' as above, and the same two probability bounds, we will show that cancellation and the linearity-test guarantee cannot co-exist.
Claim: For any G: R^n → R, E_{x ~_r x'} |G(x) − G(x')|^2 ≥ E_{x ~_c x'} |G(x) − G(x')|^2.
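A quick numeric sanity check of the claim (not from the talk) for one arbitrary non-linear G, with the parameter choices made up:

```python
# Compare E|G(x)-G(x')|^2 under the two perturbations for a sample G.
# The claim says the random perturbation should give the larger value.
import numpy as np

rng = np.random.default_rng(4)
n, delta, trials = 50, 0.1, 50_000
G = lambda X: X[:, 0] * X[:, 1] + X[:, 2] ** 2      # some non-linear test function

X = rng.standard_normal((trials, n))

# coordinate-wise perturbation: re-sample each coordinate with probability delta
mask = rng.random((trials, n)) < delta
Xc = np.where(mask, rng.standard_normal((trials, n)), X)

# random perturbation: x' = p*x + q*y with p = 1 - delta, p^2 + q^2 = 1
p = 1 - delta
q = np.sqrt(1 - p ** 2)
Xr = p * X + q * rng.standard_normal((trials, n))

print("coordinate-wise:", np.mean((G(X) - G(Xc)) ** 2))
print("random:         ", np.mean((G(X) - G(Xr)) ** 2))
```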

24 Main Lemma: From Small Difference to Constant Difference
Main Lemma: If for H: R^n → R,
  P_{x ~_c x'}[|H(x) − H(x')|^2 ≥ A·T] ≥ 10α, and
  P_{x ~_r x'}[|H(x) − H(x')|^2 ≤ T] ≥ 1 − α,
then there is a Boolean B: R^n → {0,1} with
  P_{x ~_c x'}[|B(x) − B(x')|^2 = 1] ≥ 0.01, and
  P_{x ~_r x'}[|B(x) − B(x')|^2 = 0] ≥ 1 − 1/A.
Proof: combinatorial, by considering "perturbation graphs" and constructing an appropriate cut.

25 Thank you!

26 Proof of Main Lemma
[Figure: the two perturbation graphs on R^n, with E_c the coordinate-wise perturbation edges and E_r the random perturbation edges.]
Our task: find a cut C ⊆ R^n that cuts E_c edges of weight ≥ 0.09 and E_r edges of weight ≤ 0.02.

27 Cutting the Gaussian Space
Claim: There is a distribution D over cuts such that every edge e ∈ E_c is cut with probability ≥ 0.1·ε, and every edge e ∈ E_r is cut with probability ≤ 0.01·ε.
[Figure: H(x) and H(x') on the real line, with the interval from −1/2 to 1/2 marked.]

28 From Small Cuts to Large Cuts
In the previous lemma, the cut has only ≈ ε weight. We want to cut constant weight.

29 Larger Cut
Let m = 11/ε. Sample cuts C_1, ..., C_m from D. Pick a random I ⊆ [m] and let C = ⊕_{i∈I} C_i (symmetric difference).
Claim: an edge e ∈ E_c is cut with probability ≥ 0.5·(1 − 1/e^2); an edge e ∈ E_r is cut with probability ≤ 0.01ε · 10/ε ≈ 0.1.
Corollary: a cut with ≥ 0.2·0.1 of the weight of E_c and with ≥ 0.8·0.99 of the weight of E_r left uncut.

