
2 Dictator tests and Hardness of approximating Max-Cut-Gain
Ryan O’Donnell, Carnegie Mellon
(includes joint work with Subhash Khot of Georgia Tech)

3 Talk outline
1. Constraint satisfaction problems and hardness of approximation
2. Dictator Tests & “Slightly Dictator” Tests
3. A new Slightly Dictator Test and a hardness-of-approximation result for the Max-Cut-Gain problem


5 Constraint Satisfaction Problems
Let Φ be a class of predicates (“constraints”) on a few bits; e.g., “X ⊕ Y ⊕ Z = b” (Max-3Lin), “X ≠ Y” (Max-Cut).
The “Max-Φ” constraint satisfaction problem: given m predicates/constraints over n variables, find an assignment satisfying as many as possible.
Examples: Max-3Lin, Max-Cut, Max-2SAT.
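As a minimal illustration of the Max-Φ framework, here is a toy Max-Cut instance (the graph is made up for the example) solved by brute force. Every constraint is the 2-bit predicate “X ≠ Y”:

```python
from itertools import product

# Toy Max-Cut instance viewed as a Max-Phi CSP: one "X != Y" constraint
# per edge. The graph (a 4-cycle plus the chord (0, 2)) is a made-up example.
constraints = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def num_satisfied(assignment, constraints):
    """Count how many "X != Y" constraints a 0/1 assignment satisfies."""
    return sum(assignment[u] != assignment[v] for u, v in constraints)

# Brute force over all 2^n assignments (only feasible for tiny n).
best = max(num_satisfied(a, constraints) for a in product([0, 1], repeat=n))
print(best)  # the chord creates an odd cycle, so at most 4 of 5 edges cut
```

The chord (0, 2) closes a triangle, so the graph is not bipartite and no assignment satisfies all 5 constraints.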

6 Approximating CSPs
“A is a (c, s)-approximation algorithm for Max-Φ”: given an instance where one can satisfy a ≥ c fraction of the constraints, A outputs a solution satisfying a ≥ s fraction of the constraints. A should run in polynomial time.

7 Approximating CSPs
Gaussian Elimination is a (1, 1)-approximation algorithm for Max-3Lin.
The best known (1 − ε, s)-approximation for Max-3Lin is a trivial algorithm with s = ½: output all 0’s or all 1’s. (This is also a (½, ½)-approximation.)
Goemans and Williamson ’95 gave a very famous approximation algorithm for Max-Cut, which is a .878-approximation, and also a (c, s)-approximation for every s < .878c.
G&W is a (.51, .45)-approximation for Max-Cut, worse than trivial (the Greedy algorithm is a (½, ½)-approximation algorithm).
Charikar and Wirth ’04 gave a (½ + ε, ½ + Ω(ε/log(1/ε)))-approximation for Max-Cut. (A “Max-Cut-Gain” algorithm.)
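The trivial (½, ½)-algorithm for Max-3Lin mentioned above can be sketched as follows (the random instance is made up for the demo): a constraint x_i ⊕ x_j ⊕ x_k = b is satisfied by the all-zeros assignment iff b = 0, and by the all-ones assignment iff b = 1, so the better of the two constant assignments satisfies at least half the constraints.

```python
import random

# Trivial (1/2, 1/2)-algorithm sketch for Max-3Lin: return whichever of
# the two constant assignments (all zeros / all ones) satisfies more
# constraints; this is always at least half of them.
def trivial_max3lin(constraints, n):
    """constraints: list of (i, j, k, b) tuples. Returns a 0/1 assignment."""
    zeros = sum(1 for (_, _, _, b) in constraints if b == 0)
    return [0] * n if zeros >= len(constraints) - zeros else [1] * n

# Random instance, made up for the demo.
random.seed(0)
n, m = 10, 100
cons = [(random.randrange(n), random.randrange(n), random.randrange(n),
         random.randrange(2)) for _ in range(m)]
assignment = trivial_max3lin(cons, n)
score = sum(1 for (i, j, k, b) in cons
            if assignment[i] ^ assignment[j] ^ assignment[k] == b)
print(score / m)  # always >= 0.5
```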

8 Hardness of approximation
PCP (“Probabilistically Checkable Proofs”) technology is used to prove NP-hardness of (c, s)-approximation.
Håstad ’97: (1 − ε, ½ + ε)-approximating Max-3Lin is NP-hard.
Håstad ’97: (1, 7/8 + ε)-approximating Max-3SAT is NP-hard.
KKMO ’04 + MOO ’05: doing any better than the Goemans-Williamson approximation algorithm is NP-hard*.
* Assuming the “Unique Games Conjecture”.

9 Hardness of approximation
PCP hardness-of-approximation rule of thumb: “To prove hardness of (c, s)-approximating Max-Φ, it suffices to give a (c, s)-Slightly-Dictator-Test where the test is from Φ.”

10 Talk outline
1. Constraint satisfaction problems and hardness of approximation
2. Dictator Tests & “Slightly Dictator” Tests
3. A new Slightly Dictator Test and a hardness-of-approximation result for the Max-Cut-Gain problem

11 Dictators
We will be considering m-bit boolean functions f : {0,1}^m → {0,1}.
A function f is called a “Dictator” if it is projection to one coordinate: f(x) = x_i for some 1 ≤ i ≤ m. (AKA “Singleton”, AKA “Long Code”.)

12 Dictator Testing
In the field of “Property Testing”, an unknown f is given as a black box. We want to determine if f belongs to some class of functions C, querying f on as few strings as possible (constantly many). Clearly, the test must use randomization and must admit some chance of error.
For hardness of approximation, the relevant C is the class of all m Dictator functions.

13 Testing Dictators
A (non-adaptive) Dictator Test:
– Picks x₁, …, x_q ∈ {0,1}^m in some random fashion.
– Picks a ‘predicate’ ψ on q bits.
– Queries f(x₁), …, f(x_q).
– Says “YES” or “NO” according to ψ(f(x₁), …, f(x_q)).
Each f : {0,1}^m → {0,1} has some probability of “passing” the test. Hope: this probability is large for Dictators, and small for non-Dictators.

14 Correlation
The correlation of f and g is a number between −1 and 1. If f and g are “highly correlated” (i.e., they agree on almost all inputs), then the probabilities that they pass the test will be essentially the same. So if g is highly correlated with a Dictator, we can’t help but let it pass with high probability.

15 Basic Dictator Testing
– If f is a Dictator, it passes with probability 1.
– If f has correlation < 1 − δ with every Dictator, it passes with probability at most 1 − Ω(δ).
The number of queries q should be an absolute constant. (Like 6 or something.)
(Remark 1: Given such a test, you can get a “standard” Dictator Test by repeating O(1/δ) times and saying “YES” iff all tests pass.
Remark 2: ⇒ “Assignment tester” (of exponential length) [Din06].)

16 Examples
Bellare-Goldreich-Sudan ’95: O(1) queries. Håstad ’97 probably gave a 3-query one (he at least could’ve).
A 3-query one; if you know Fourier analysis, the proof is an easy homework exercise:
– With probability ½, do the BLR test: pick x, y uniformly, set z = x ⊕ y, and test that f(x) ⊕ f(y) ⊕ f(z) = 0.
– With probability ½, do the NAE test: for each i = 1…m, choose (x_i, y_i, z_i) uniformly from {0,1}³ ∖ {(0,0,0), (1,1,1)}, and test that f(x), f(y), f(z) are not all equal.
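The two-part test on this slide can be simulated directly; a minimal sketch (m and the trial counts are arbitrary choices). Any dictator passes both sub-tests with probability 1, while a 2-bit parity passes BLR always but fails the NAE test a third of the time:

```python
import random

# Sketch of the slide's 3-query test: with probability 1/2 run the BLR
# linearity test, otherwise run the not-all-equal (NAE) test.
# f takes a list in {0,1}^m to {0,1}.
NAE_TRIPLES = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)
               if (a, b, c) not in ((0, 0, 0), (1, 1, 1))]

def run_test(f, m, rng):
    if rng.random() < 0.5:
        # BLR: z = x XOR y; accept iff f(x) XOR f(y) XOR f(z) = 0.
        x = [rng.randrange(2) for _ in range(m)]
        y = [rng.randrange(2) for _ in range(m)]
        z = [xi ^ yi for xi, yi in zip(x, y)]
        return f(x) ^ f(y) ^ f(z) == 0
    # NAE: each coordinate triple is uniform on {0,1}^3 minus 000 and 111;
    # accept iff f(x), f(y), f(z) are not all equal.
    cols = [rng.choice(NAE_TRIPLES) for _ in range(m)]
    x, y, z = ([col[j] for col in cols] for j in range(3))
    return len({f(x), f(y), f(z)}) > 1

rng = random.Random(0)
m, trials = 8, 4000
dictator = lambda v: v[3]        # any dictator passes both sub-tests always
parity = lambda v: v[0] ^ v[1]   # passes BLR always, fails NAE 1/3 of the time
dict_rate = sum(run_test(dictator, m, rng) for _ in range(trials)) / trials
par_rate = sum(run_test(parity, m, rng) for _ in range(trials)) / trials
print(dict_rate, par_rate)  # 1.0 vs roughly 5/6
```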

17 Hardness of approximation
PCP hardness-of-approximation rule of thumb: “To prove hardness of (c, s)-approximating Max-Φ, it suffices to give a (c, s)-Slightly-Dictator-Test where the test is from Φ.”

18 (c, s)-Slightly-Dictator-Tests
– If f is a Dictator, it passes with probability ≥ c.
– If f has correlation < δ with every Dictator (and Dictator-negation), then f passes with probability < s + δ′, where δ′ → 0 as δ → 0.
(“If f passes with high enough probability, it’s slightly Dictatorial.”)
(For PCP purposes, you can sometimes even get away with “Very-Slightly-Dictator-Tests”…)

19 Talk outline
1. Constraint satisfaction problems and hardness of approximation
2. Dictator Tests & Slightly Dictator Tests
3. A new Slightly Dictator Test and a hardness-of-approximation result for the Max-Cut-Gain problem

20 Max-Cut Slightly-Dictator-Tests
For Max-Cut, you need a 2-query Slightly-Dictator-Test where the tests are of the form “f(x) ≠ f(y)”.
KKMO ’04 proposed the Noise Sensitivity test: pick x ∈ {0,1}^m uniformly, form y ∈ {0,1}^m by flipping each bit independently with probability ρ. Test f(x) ≠ f(y).
Theorem (conjectured by KKMO, proved in MOO ’05): This is a (ρ, arccos(1 − 2ρ)/π)-Very-Slightly-Dictator-Test.
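The Noise Sensitivity test is easy to simulate; a minimal sketch (m, ρ, the trial count, and the choice of majority as a contrast function are arbitrary). A dictator passes exactly when its own bit flips, i.e. with probability ρ, while a function far from every dictator, such as majority, passes noticeably less often:

```python
import random

# Sketch of the KKMO Noise Sensitivity test: pick x uniform in {0,1}^m,
# flip each bit independently with probability rho to get y, accept iff
# f(x) != f(y).
def noise_test(f, m, rho, rng):
    x = [rng.randrange(2) for _ in range(m)]
    y = [xi ^ (rng.random() < rho) for xi in x]
    return f(x) != f(y)

rng = random.Random(1)
m, rho, trials = 15, 0.9, 20000
dictator = lambda v: v[0]                 # passes exactly when bit 0 flips
majority = lambda v: int(sum(v) > m / 2)  # a very non-dictatorial function
dict_rate = sum(noise_test(dictator, m, rho, rng) for _ in range(trials)) / trials
maj_rate = sum(noise_test(majority, m, rho, rng) for _ in range(trials)) / trials
print(dict_rate, maj_rate)  # dictator passes w.p. ~rho; majority noticeably less
```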

21 Corollaries
– ρ = 1 − ε: gives (1 − ε, 1 − (2/π)√ε)-hardness* for Max-Cut.
– ρ ≈ .845: gives (≈.845, ≈.74)-hardness* for Max-Cut (the .878 gap).
– ρ = ½ + ε: gives (½ + ε, ½ + (2/π)ε)-hardness* for Max-Cut.
The first two are best possible, as Goemans and Williamson gave matching algorithms. The last doesn’t match the (½ + ε, ½ + Ω(ε/log(1/ε)))-approximation algorithm of Charikar and Wirth. Our goal: give matching hardness.
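The numbers in these corollaries can be sanity-checked numerically from the soundness expression arccos(1 − 2ρ)/π on the previous slide (the value ρ ≈ .845 is an approximation):

```python
import math

# Numeric sanity check of the corollaries: near rho ~ 0.845 the ratio
# soundness / completeness of the Noise Sensitivity test is about the
# Goemans-Williamson constant ~ 0.878.
def soundness(rho):
    return math.acos(1 - 2 * rho) / math.pi

rho_star = 0.845
print(soundness(rho_star))             # ~ 0.742
print(soundness(rho_star) / rho_star)  # ~ 0.878
```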

22 A new result
Subhash Khot and I improved the hardness result to match Charikar and Wirth, by analyzing a new Dictator Test: do the Noise Sensitivity test some fraction of the time with noise rate ρ₁, and some fraction of the time with ρ₂, balanced so that Dictators pass w.p. ½ + ε.
This gives a (½ + ε, ½ + O(ε/log(1/ε)))-Slightly-Dictator-Test using “≠” tests.
Bonuses: It’s a Slightly-Dictator-Test (not Very-Slightly-). Unlike MOO ’05, after doing the usual Fourier analysis stuff, the proof is about 10 lines rather than 10 pages.

23 Main technical analysis
First, rename the bits to −1 and 1, rather than 0 and 1. Next, do the usual Fourier analysis stuff…
Let f : {−1,1}^m → {−1,1} be any function, and say it has correlation c_i with the i-th Dictator function, i = 1…m. Let L : {−1,1}^m → ℝ be the function
L(x₁, …, x_m) = c₁·x₁ + c₂·x₂ + ··· + c_m·x_m.
This is the linear polynomial over ℝ that f “looks most like”.

24 Main technical analysis
L(x₁, …, x_m) = c₁·x₁ + c₂·x₂ + ··· + c_m·x_m, σ² := Σ c_i².
(σ² roughly measures how Dictatorial f is.)
The probability that f : {−1,1}^m → {−1,1} passes the test is (essentially) equal to:

(Slides 25–29: figure-only slides; the formula and plots were not transcribed.)

30 Main technical analysis
L(x₁, …, x_m) = c₁·x₁ + c₂·x₂ + ··· + c_m·x_m, σ² := Σ c_i².
Conclusion: If all the correlations c_i are small, the distribution of L looks like a Gaussian, with variance σ².
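This conclusion is a central-limit-theorem statement, and a quick simulation illustrates it (the choice of m and of equal coefficients is arbitrary):

```python
import math
import random

# Empirical sketch: for uniform x in {-1,1}^m, the sum L(x) = sum_i c_i x_i
# is approximately Gaussian with variance sigma^2 = sum_i c_i^2 once every
# c_i is small.
random.seed(2)
m = 400
c = [1 / math.sqrt(m)] * m          # every correlation is small
sigma2 = sum(ci * ci for ci in c)   # variance = 1 by construction

samples = [sum(ci * random.choice((-1, 1)) for ci in c) for _ in range(20000)]
frac_above_1 = sum(s > 1 for s in samples) / len(samples)
print(frac_above_1)  # close to the Gaussian tail Pr[N(0,1) > 1] ~ 0.16
```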

31 Gaussian facts
The probability that a Gaussian random variable with variance 1 goes above t is about exp(−t²/2). By scaling, the probability that a Gaussian with variance σ² goes above t is about exp(−t²/2σ²). So the probability that a Gaussian with variance σ² goes above 2 is about exp(−2/σ²).
If σ² ≥ 10/ln(1/ε), we have Pr_x[L(x) > 2] ≥ ε^{1/5}.
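The last step is pure arithmetic, which a few lines verify: plugging σ² = 10/ln(1/ε) into the tail estimate exp(−2/σ²) gives exactly ε^{1/5}, and a larger σ² only increases it.

```python
import math

# Arithmetic check of the slide's final step:
# exp(-2 / sigma^2) = exp(-2 * ln(1/eps) / 10) = eps^(1/5)
# when sigma^2 = 10 / ln(1/eps).
for eps in (0.1, 0.01, 1e-6):
    sigma2 = 10 / math.log(1 / eps)
    tail = math.exp(-2 / sigma2)
    print(eps, tail, eps ** 0.2)  # tail matches eps^(1/5)
```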

32 Main technical analysis
L(x₁, …, x_m) = c₁·x₁ + c₂·x₂ + ··· + c_m·x_m, σ² := Σ c_i².
If all the correlations c_i are small, then: if σ² ≥ 10/ln(1/ε), we have Pr_x[L(x) > 2] ≥ ε^{1/5}.
⇒ a (½ + ε, ½ + O(ε/log(1/ε)))-Slightly-Dictator-Test.

33 Open problem
Suppose you want a 3-query (1, s)-(Very)-Slightly-Dictator-Test. Until recently, the best s was Håstad’s 3/4. Khot & Saket ’06 got s down to 20/27. Conjectured (by Zwick) best s: 5/8 (!). I’m pretty sure I know the test, but I can’t analyze it…

34 The End

