1 Max-Cut Property Testing. Based on the work of Oded Goldreich, Shafi Goldwasser, and Dana Ron (February 13, 1998). Presented by Ori Rosen.

2 Table of Contents: 1. Introduction 2. Definitions 3. Graph Partitioning Algorithm 4. Max-Cut Approximation Algorithm 5. Property Testing MC_ρ

3 Introduction. Definition of the max-cut problem: for a graph G, a maximum cut is a cut whose size is at least the size of every other cut. Cut size definition: for a subset S of V(G), the size of the cut (S, V(G) \ S) is the number of edges between S and its complementary subset.
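Before approximating, it may help to pin down the exact object being approximated. A minimal brute-force sketch (exponential time, for intuition only; the function name is mine, not from the slides):

```python
from itertools import combinations

def max_cut_size(n, edges):
    """Exact max cut by enumerating every vertex subset S: the size of the
    cut (S, V \\ S) is the number of edges with exactly one endpoint in S."""
    best = 0
    for r in range(n + 1):
        for S in map(set, combinations(range(n), r)):
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            best = max(best, cut)
    return best

# A 4-cycle is bipartite, so one cut can contain all 4 of its edges.
print(max_cut_size(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # prints 4
```

The property tester discussed below avoids this exponential enumeration by sampling; the brute force is only the reference definition.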

4 Introduction. We will see a property tester for Max-Cut. First, we present an algorithm that "ideally" finds the max-cut by "cheating" with additional information. This algorithm is then modified to approximate the size of the max-cut without the extra information. Last, we use the Max-Cut approximation algorithm to test for MC_ρ. * We consider dense graphs (with at least εN² edges); the input graph is given in terms of its adjacency matrix, and a query asks whether the edge (u,v) belongs to E(G).

5 Definitions. Edge density: for a partition (V_1,V_2) of V(G), we define μ(V_1,V_2) to be the edge density of the cut defined by (V_1,V_2), i.e. the number of ordered vertex pairs crossing the cut divided by N². Let μ(G) denote the edge density of a largest cut in G, i.e. the largest μ(V_1,V_2) taken over all partitions (V_1,V_2) of V(G).
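With the adjacency-matrix representation the slides assume, μ(V_1,V_2) is a direct computation. The sketch below normalizes by N² counting ordered pairs, which is my reading of the slides' convention (it makes every cut density at most 1/2, matching the bound used later for MC_ρ):

```python
def edge_density(adj, V1, V2):
    """mu(V1, V2): ordered vertex pairs (u, v) on opposite sides with
    adj[u][v] = 1, divided by N^2.  Equals 2 * (crossing edges) / N^2."""
    n = len(adj)
    crossing = sum(adj[u][v] for u in V1 for v in V2)  # each edge counted once
    return 2 * crossing / n ** 2

# Complete bipartite graph K_{2,2}: its bipartition cut has density 1/2.
adj = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
print(edge_density(adj, [0, 1], [2, 3]))  # prints 0.5
```
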

6 Definitions. Example: for the graph shown, μ(G) = 6/4² = 6/16. [Figure: a second example graph, μ(G) = ?]

7 Graph Partitioning Algorithm. We will now see a quadratic-time graph partitioning algorithm: given a graph G, it returns a cut with edge density at least μ(G) − (3/4)·ε, with probability at least 1 − δ.

8 Graph Partitioning Algorithm. Let l = ⌈4/ε⌉, and let (V_1,…,V_l) be a fixed partition of V(G) into l sets of (roughly) equal size. The algorithm constructs a partition (V_1,V_2) of V(G) in l iterations; in the i'th iteration we construct a partition (V_1^i, V_2^i) of V_i.

9 Graph Partitioning Algorithm. Observation: let (H_1,H_2) be a partition of V(G), let v ∈ H_1, and assume that v has at least as many neighbors in H_1 as in H_2, i.e. |Γ(v) ∩ H_1| ≥ |Γ(v) ∩ H_2|. Then, if we move v from H_1 to H_2, we cannot decrease the edge density, but we might increase it: the edges from v into H_1 are brought into the cut, replacing the (at most as many) edges from v into H_2 that leave it.
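The observation suggests a simple local search: repeatedly move any vertex that sits with the strict majority of its neighbors. This is not the paper's algorithm (local search alone does not yield the stated guarantee), just a sketch of the single-move argument; names are mine:

```python
def local_improve(adj, side):
    """Greedy local search: while some vertex has strictly more neighbors on
    its own side than on the other, move it across.  Each move strictly
    increases the number of crossing edges, so the loop terminates."""
    n = len(adj)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = sum(adj[v][u] for u in range(n) if u != v and side[u] == side[v])
            other = sum(adj[v][u] for u in range(n) if side[u] != side[v])
            if same > other:      # v sits with the majority of its neighbors
                side[v] ^= 1      # move it opposite: the cut strictly grows
                improved = True
    return side

# Triangle with all vertices on one side: initially no edge is cut.
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
side = local_improve(adj, [0, 0, 0])
cut = sum(adj[u][v] for u in range(3) for v in range(u + 1, 3) if side[u] != side[v])
print(cut)  # prints 2, the max cut of a triangle
```
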

10 Graph Partitioning Algorithm. Expanding on the previous observation, we will see what happens when we move Θ(εN) vertices. In contrast to moving a single vertex, the cut may decrease by O(ε²N²) per stage. The algorithm is oracle-aided and works in O(1/ε) stages. It can be viewed as starting from a partition that matches a max-cut, with every stage moving O(εN) vertices. The total decrease in cut size is bounded by O((1/ε)·ε²N²) = O(εN²).

11 Graph Partitioning Algorithm. Let X be a subset of V(G) of size N/l (assume this is an integer). Let W = V(G) \ X, and let (W_1,W_2) be the partition of W induced by (H_1,H_2). => Remember, think of (H_1,H_2) as a max-cut. Assume we know, for every vertex x ∈ X, its number of neighbors on each side of (W_1,W_2). Define X_UB (UnBalanced) to be the set of vertices that have "many more" (say, (ε/8)N more) neighbors on one side of the partition (W_1,W_2) than on the other. In contrast, define X_B = X \ X_UB to be the set of balanced vertices.

12 [Figure: a 16-vertex example graph. SIMPLE!]

13 Graph Partitioning Algorithm. Assume we partition X into (X_1,X_2) in the following way: vertices in X_UB that have more neighbors in W_1 are put in X_2, and those with more neighbors in W_2 are put in X_1; vertices in X_B are placed arbitrarily in X_1 and X_2. Next we create a new partition (H_1',H_2') = (W_1 ∪ X_1, W_2 ∪ X_2). This partition differs from (H_1,H_2) only in the placement of the vertices of X.

14 Graph Partitioning Algorithm. [Figure: the 16-vertex example.]

15 Graph Partitioning Algorithm. Reminder: the difference between μ(H_1',H_2') and μ(H_1,H_2) comes only from the change in the number of edges between vertices in X and vertices in W, and between pairs of vertices in X. By the construction of X_UB, the number of edges crossing the cut between vertices in X_UB and W cannot decrease.

16 Graph Partitioning Algorithm. Reminder: by the construction of X_B, the number of cut edges between X_B and W can decrease by at most |X_B|·2·(ε/8)N ≤ εN²/(4l). The number of cut edges between pairs of vertices in X can decrease by at most |X|² = (N/l)² ≤ εN²/(4l).

17 Graph Partitioning Algorithm. [Figure: in the 16-vertex example, μ(H_1,H_2) = 34/16² while μ(H_1',H_2') = 32/16².]

18 Graph Partitioning Algorithm. Let X be V_1, let (H_1,H_2) define a max-cut, and let the resulting partition from the process we just saw be (H_1^1,H_2^1). Assume we continue this process iteratively: in the i'th iteration we process V_i, given the partition (H_1^{i-1},H_2^{i-1}) calculated in iteration i−1. The result of this process is that μ(H_1^l,H_2^l) is smaller than μ(H_1,H_2) = μ(G) by no more than (3/4)·ε.

19 Graph Partitioning Algorithm. [Figure: the 16-vertex example.] And so on…

20 Graph Partitioning Algorithm. 1. Choose l = ⌈4/ε⌉ sets U_1,…,U_l, each of size t = Θ(ε⁻²·log(1/(εδ))), where U_i is chosen uniformly in V \ V_i. Let Û = (U_1,…,U_l). 2. For each sequence of partitions π(Û) = ((U_1^1,U_2^1),…,(U_1^l,U_2^l)) (where for each i, (U_1^i,U_2^i) is a partition of U_i) do: 2.1. For i = 1…l, partition V_i into two disjoint sets V_1^i and V_2^i as follows: for each v in V_i, if |Γ(v) ∩ U_1^i| ≥ |Γ(v) ∩ U_2^i| then put v in V_2^i, else put v in V_1^i. 2.2. Let V_1^π(Û) = ∪_i V_1^i, and let V_2^π(Û) = ∪_i V_2^i. 3. Among all partitions (V_1^π(Û), V_2^π(Û)) created in step 2, let the output be the one which defines the largest cut.
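A toy transcription of these steps may make the control flow concrete. The sample size t is shrunk to a small constant here so the 2^(l·t) enumeration stays tiny; the analysis requires t = Θ(ε⁻²·log(1/(εδ))), and all names are mine:

```python
import itertools
import math
import random

def graph_partition(adj, eps, rng=random.Random(0)):
    """Sketch of the partitioning algorithm: split V into l chunks, sample a
    set U_i outside each chunk, try every two-coloring of the samples, place
    each vertex opposite the heavier sampled side, and keep the best cut."""
    n = len(adj)
    l = max(1, math.ceil(4 / eps))               # number of chunks V_1..V_l
    t = 3                                        # toy sample size
    chunks = [list(range(i, n, l)) for i in range(l)]
    U = []
    for i in range(l):
        outside = [v for v in range(n) if v not in chunks[i]]
        U.append(rng.sample(outside, min(t, len(outside))))
    best, best_part = -1.0, ([], [])
    # every sequence of partitions of U_1..U_l, one bitmask per chunk
    for masks in itertools.product(*(range(2 ** len(Ui)) for Ui in U)):
        V1, V2 = [], []
        for i, mask in enumerate(masks):
            U1 = [u for k, u in enumerate(U[i]) if mask >> k & 1]
            U2 = [u for k, u in enumerate(U[i]) if not mask >> k & 1]
            for v in chunks[i]:
                d1 = sum(adj[v][u] for u in U1)
                d2 = sum(adj[v][u] for u in U2)
                (V2 if d1 >= d2 else V1).append(v)  # opposite the majority
        mu = 2 * sum(adj[u][v] for u in V1 for v in V2) / n ** 2
        if mu > best:
            best, best_part = mu, (V1, V2)
    return best_part

# K_{2,2}: with eps = 1.0 the exhaustive search recovers the optimal cut.
adj = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
V1, V2 = graph_partition(adj, eps=1.0)
print(2 * sum(adj[u][v] for u in V1 for v in V2) / 16)  # prints 0.5
```
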

21 Graph Partitioning Algorithm. The algorithm we just saw has one little problem: we don't know the max-cut (H_1^0,H_2^0) we are supposed to start from. Because of this, we don't know whether the vertices of V_1 are balanced or not. What we can do is approximate the number of neighbors v has on each side of (W_1^0,W_2^0) by sampling.

22 Graph Partitioning Algorithm. We will see that if we uniformly choose a set of vertices U_1 of size t = poly(log(1/δ)/ε) in W^0, then with high probability over the choice of U_1 there exists a partition (U_1^1,U_2^1) of U_1 which is representative with respect to (W_1^0,W_2^0) and V_1: for all but a small fraction of the vertices v in V_1, the number of neighbors v has in U_1^1, relative to the size of U_1, is approximately the same as the number of neighbors v has in W_1^0, relative to the size of V(G). This approximation is good enough, since when placing the vertices of V_1, the most important factor is the location of the unbalanced vertices.
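The sampling claim above can be checked numerically: the fraction of a uniform sample of W that are neighbors of v on a given side concentrates around the true fraction, by a Chernoff bound. A toy check, with a hypothetical vertex and names of my own:

```python
import random

def sampled_side_fraction(is_neighbor_on_side, W, t, rng):
    """Draw t vertices of W uniformly (with replacement) and return the
    fraction that are neighbors of v lying on the chosen side of the cut."""
    return sum(is_neighbor_on_side(rng.choice(W)) for _ in range(t)) / t

rng = random.Random(7)
N = 1000
W = list(range(N))
# Hypothetical vertex v whose neighbors on side W_1 are exactly 0..299,
# so the true fraction is 0.3.
estimate = sampled_side_fraction(lambda u: u < 300, W, t=4000, rng=rng)
print(abs(estimate - 0.3) < 0.05)  # prints True: deviation is Chernoff-small
```
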

23 Graph Partitioning Algorithm. If U_1 has a representative partition, then we say that U_1 is good. How do we know which of the 2^t partitions of U_1 is the representative one (if one exists)? Easy: we try them all.

24 Graph Partitioning Algorithm. [Figure: the 16-vertex example.]

25 Graph Partitioning Algorithm. [Figure: the 16-vertex example.] And so on…

26 Out of all the partitions (U_1^1,U_2^1) of U_1, we only need the one induced by (W_1^0,W_2^0), i.e. the partition with U_j^1 = U_1 ∩ W_j^0 for j = 1,2. Denote this (hopefully representative) partition by (U_1^1,U_2^1). Let (V_1^1,V_2^1) be the partition of V_1 determined by this partition of U_1, and let (H_1^1,H_2^1) be the resulting partition of V(G). => (H_1^1,H_2^1) is the same as (H_1^0,H_2^0) except for the placement of the vertices of V_1, which is as in (V_1^1,V_2^1).

27 Graph Partitioning Algorithm. If (U_1^1,U_2^1) is the representative one (with respect to (W_1^0,W_2^0) and V_1), then μ(H_1^1,H_2^1) is not much smaller than μ(H_1^0,H_2^0) = μ(G). Continuing like this, in the i'th stage we randomly pick a set U_i, and determine a partition of V_i for each of its partitions. => We are actually constructing (2^t)^l = 2^(l·t) possible partitions of V(G), one for each sequence of partitions of all the U_i's.

28 Graph Partitioning Algorithm. To show that at least one of these partitions defines a cut "close" to the max-cut, we only need to make sure that for each i, with high probability, U_i is good with respect to (W_1^{i-1},W_2^{i-1}), where the latter partition is determined by the choice of U_1,…,U_{i-1} and their representative partitions (U_1^1,U_2^1),…,(U_1^{i-1},U_2^{i-1}). We'll see a lemma that formalizes the intuition we saw before on why the algorithm works.

29 Graph Partitioning Algorithm. Lemma 1: let (H_1,H_2) be a fixed partition of V(G). Then with probability at least 1 − δ/2 over the choice of Û = (U_1,…,U_l), there exists a sequence of partitions π(Û) such that: μ(V_1^π(Û), V_2^π(Û)) ≥ μ(H_1,H_2) − (3/4)·ε. Proof follows.

30 Graph Partitioning Algorithm. Lemma proof: for a given sequence of partitions π(Û), we consider the following l+1 hybrid partitions. The hybrid (H_1^0,H_2^0) is simply (H_1,H_2). The i'th hybrid partition (H_1^i,H_2^i) has the vertices in V_{i+1},…,V_l partitioned as in (H_1,H_2), and the vertices in V_1,…,V_i placed as by the algorithm.

31 Graph Partitioning Algorithm. More precisely, the hybrid partition (H_1^i,H_2^i) is defined by: for j in {1,2}, H_j^i = V_j^1 ∪ … ∪ V_j^i ∪ (H_j ∩ (V_{i+1} ∪ … ∪ V_l)). Note that in particular (H_1^l,H_2^l) is the partition (V_1^π(Û),V_2^π(Û)). Since the partition of each V_i is determined by the choice of U_i and its partition, the i'th hybrid partition is determined by the choice of U_1,…,U_i and their partitions, but not by the choice or the partitions of U_{i+1},…,U_l.

32 We shall show that for every 1 ≤ i ≤ l, for any fixed choice and partitions of U_1,…,U_{i-1}, with probability at least 1 − δ/(2l) over the choice of U_i, there exists a partition (U_1^i,U_2^i) of U_i such that: μ(H_1^i,H_2^i) ≥ μ(H_1^{i-1},H_2^{i-1}) − 3ε/(4l).

33 Graph Partitioning Algorithm. For the (i−1)'th hybrid partition (H_1^{i-1},H_2^{i-1}), or more precisely for the partition (W_1^{i-1},W_2^{i-1}) it induces on W^{i-1} = V \ V_i, and a sample set U_i, let U_j^i = U_i ∩ W_j^{i-1} for j = 1,2. We say that U_i is good with respect to (W_1^{i-1},W_2^{i-1}) and V_i if (U_1^i,U_2^i) is representative with respect to (W_1^{i-1},W_2^{i-1}) and V_i.

34 Graph Partitioning Algorithm. That is, (U_1^i,U_2^i) is such that for all but a fraction ε/8 of the vertices v in V_i, the following holds for j = 1,2: (*) |(1/t)·|Γ(v) ∩ U_j^i| − (1/N)·|Γ(v) ∩ W_j^{i-1}|| ≤ ε/16. Assume that for each i, the set U_i is good with respect to (W_1^{i-1},W_2^{i-1}) and V_i. As previously defined, we say that a vertex v is unbalanced with respect to (W_1^{i-1},W_2^{i-1}) if one side holds at least (ε/8)N more of its neighbors than the other.

35 Graph Partitioning Algorithm. Thus, if v in V_i is an unbalanced vertex with respect to (W_1^{i-1},W_2^{i-1}) for which (*) is satisfied, then its sampled neighbor counts have the same majority side as its true counts. We are then guaranteed that when the partition (U_1^i,U_2^i) is used, v is put opposite the majority of its neighbors in W^{i-1}. If v is balanced, it might be placed on either side; the same is true for the (at most εN/(8l)) vertices for which (*) does not hold.

36 Graph Partitioning Algorithm. The decrease in the size of the cut is affected only by the change of edges between V_i and W^{i-1}, and between pairs of vertices in V_i. In particular: the number of cut edges between W^{i-1} and unbalanced vertices in V_i for which (*) is satisfied cannot decrease; the number of cut edges between W^{i-1} and unbalanced vertices in V_i for which (*) is not satisfied decreases by at most (ε/8)·|V_i|·2N ≤ εN²/(4l); the number of cut edges between balanced vertices in V_i and vertices in W^{i-1} decreases by at most |V_i|·2·(ε/8)N ≤ εN²/(4l); and the number of cut edges between pairs of vertices in V_i decreases by at most |V_i|² = N²/l² ≤ εN²/(4l).

37 Graph Partitioning Algorithm. The total decrease is thus bounded by 3εN²/(4l). It remains to prove that with high probability a chosen set U_i is good (with respect to (W_1^{i-1},W_2^{i-1}) and V_i). We first fix a vertex v in V_i. Let U_i = {u_1,…,u_t} (reminder: U_i is chosen uniformly in W^{i-1} = V \ V_i). For j in {1,2} and 1 ≤ k ≤ t, define a 0/1 random variable ξ_j^k, which is 1 if u_k is a neighbor of v and u_k ∈ W_j^{i-1}, and 0 otherwise.

38 By definition, for each j, the sum of the ξ_j^k's is simply the number of neighbors v has in U_j^i (= U_i ∩ W_j^{i-1}), and the probability that ξ_j^k = 1 is (1/N)·|Γ(v) ∩ W_j^{i-1}|. By an additive Chernoff bound, and our choice of t, for each j in {1,2}: Pr[ |(1/t)·Σ_k ξ_j^k − (1/N)·|Γ(v) ∩ W_j^{i-1}|| > ε/16 ] ≤ 2·exp(−2(ε/16)²·t) ≤ (ε/8)·(δ/(4l)).

39 Graph Partitioning Algorithm. By Markov's inequality, for each j in {1,2}, with probability at least 1 − δ/(4l) over the choice of U_i, equation (*) holds (for that j) for all but a fraction ε/8 of the vertices in V_i; thus with probability at least 1 − δ/(2l), U_i is good as required. Applying Lemma 1 to a max-cut of G, we get: with probability at least 1 − δ/2 over the choice of Û, μ(V_1^π(Û), V_2^π(Û)) ≥ μ(G) − (3/4)·ε, where (V_1^π(Û),V_2^π(Û)) is as defined in step 3 of the algorithm.

40 Max-Cut Approx Algorithm. Armed with the graph partitioning algorithm (GPA), the Max-Cut approximation algorithm is quite straightforward. We uniformly choose a set S of vertices of size m = Θ((l·t + log(1/δ))/ε²), and run the GPA restricted to this set. Instead of returning the largest cut, the algorithm views S = {s_1,…,s_m} as a multiset of m/2 ordered pairs, {(s_1,s_2),…,(s_{m-1},s_m)}, and outputs the cut density estimated from the fraction of such pairs that are cut edges. The pairing is done for technical reasons: it makes the estimator a sum of independent 0/1 variables.

41 Max-Cut Approx Algorithm. 1. As in step 1 of the GPA. 2. Uniformly choose a set S = {s_1,…,s_m} of size m = Θ((l·t + log(1/δ))/ε²). For 1 ≤ i ≤ l, let S_i = V_i ∩ S. 3. As in step 2 of the GPA, for each sequence of partitions π(Û) = ((U_1^1,U_2^1),…,(U_1^l,U_2^l)), partition each S_i into two disjoint sets S_1^i and S_2^i, and let S_j^π(Û) = ∪_i S_j^i (for j = 1,2). 4. For each partition (S_1^π(Û),S_2^π(Û)), compute the fraction of cut edges between pairs of vertices (s_{2k-1},s_{2k}). More precisely, define μ̂(S_1^π(Û),S_2^π(Û)) = (2/m)·|{k : (s_{2k-1},s_{2k}) ∈ E(G) and s_{2k-1}, s_{2k} are on opposite sides}|. Let (S_1^π(Û),S_2^π(Û)) be a partition for which this fraction is maximized, and output μ̂(S_1^π(Û),S_2^π(Û)).
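Step 4's estimator can be sketched on its own: treating the sample as m/2 ordered pairs, the fraction of pairs that are cut edges is an unbiased estimate of the cut density μ under the ordered-pair normalization. A toy version, with names of my own:

```python
import random

def pair_cut_estimate(adj, side, m, rng):
    """mu-hat: draw m vertices uniformly, view them as m/2 ordered pairs, and
    return the fraction of pairs (s, s') that are edges crossing the cut.
    Each pair is a cut edge with probability exactly mu(V1, V2)."""
    n = len(adj)
    hits = 0
    for _ in range(m // 2):
        s, s2 = rng.randrange(n), rng.randrange(n)
        if adj[s][s2] and side[s] != side[s2]:
            hits += 1
    return hits / (m // 2)

rng = random.Random(1)
# K_{2,2} with its bipartition: the true cut density is 0.5.
adj = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
est = pair_cut_estimate(adj, side=[1, 1, 2, 2], m=8000, rng=rng)
print(abs(est - 0.5) < 0.05)  # prints True with overwhelming probability
```
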

42 Max-Cut Approx Algorithm. Lemma 2: for any fixed Û, with probability at least 1 − δ/2 over the choice of S, μ̂(S_1^π(Û),S_2^π(Û)) = μ(V_1^π(Û),V_2^π(Û)) ± (1/4)·ε, where (S_1^π(Û),S_2^π(Û)) and μ̂(·,·) are as defined in step 4 of the Max-Cut approximation algorithm. Proof follows.

43 Max-Cut Approx Algorithm. Lemma 2 proof: consider first a particular sequence of partitions π(Û). The key observation is that for every s ∈ S and for j ∈ {1,2}, s ∈ S_j^π(Û) ⟺ s ∈ V_j^π(Û). Thus for each sequence of partitions π(Û) we are effectively sampling from (V_1^π(Û),V_2^π(Û)). Furthermore, by viewing S as consisting of m/2 pairs of vertices (s_{2k-1},s_{2k}), and counting the number of pairs which are on opposite sides of the partition and have an edge in between, we are able to approximate the density of the cut edges.

44 Max-Cut Approx Algorithm. For 1 ≤ k ≤ m/2, let ξ_k be a 0/1 random variable which is 1 if (s_{2k-1},s_{2k}) ∈ E(G) and, for j ≠ j', s_{2k-1} ∈ S_j^π(Û) and s_{2k} ∈ S_{j'}^π(Û). Then, by definition, μ̂(S_1^π(Û),S_2^π(Û)) = (2/m)·Σ_{k=1}^{m/2} ξ_k, and the probability that ξ_k = 1 is μ(V_1^π(Û),V_2^π(Û)). Hence, by an additive Chernoff bound and our choice of m: Pr[ |μ̂(S_1^π(Û),S_2^π(Û)) − μ(V_1^π(Û),V_2^π(Û))| > ε/8 ] ≤ 2·exp(−2(ε/8)²·(m/2)) ≤ δ/(2·2^(l·t)).

45 Max-Cut Approx Algorithm. Since there are 2^(l·t) sequences of partitions of Û, with probability at least 1 − δ/2, for every sequence of partitions π(Û), μ̂(S_1^π(Û),S_2^π(Û)) = μ(V_1^π(Û),V_2^π(Û)) ± ε/8; hence, for the maximizing partitions of step 4, μ̂ is within ε/4 of the true density.

46 Property Testing MC_ρ. Armed with the Max-Cut approximation algorithm, we have in hand a property tester for the class MC_ρ, the graphs with cut density at least ρ: MC_ρ = {G : μ(G) ≥ ρ}. For example, for ρ = 1/4, MC_{1/4} is the class of graphs that contain a cut of density at least 1/4. Note that for ρ > 1/2, testing for MC_ρ is trivial. (why?) For every constant 0 < ρ ≤ 1/2, there exists a property testing algorithm for MC_ρ.
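The tester's decision is then just a threshold on the estimate. A sketch of the accept rule, with the exact density (brute-forced on tiny graphs) standing in for the sampled μ̂; names are mine:

```python
from itertools import combinations

def mu_max(adj):
    """Exact max-cut density under the ordered-pair normalization
    (brute force over all vertex subsets; tiny graphs only)."""
    n = len(adj)
    best = 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            S = set(S)
            e = sum(adj[u][v] for u in S for v in range(n) if v not in S)
            best = max(best, 2 * e / n ** 2)
    return best

def accept_mc_rho(adj, rho, eps_prime):
    """Accept iff the (here: exact) density estimate is at least rho - eps'."""
    return mu_max(adj) >= rho - eps_prime

# K_{2,2} has mu(G) = 1/2, so it lies in MC_rho for every rho <= 1/2.
adj = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
print(accept_mc_rho(adj, rho=0.25, eps_prime=0.05))  # prints True
```

In the real tester μ̂ comes from the sampling algorithm above, which is what makes the query complexity independent of N.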

47 Property Testing MC_ρ – Proof. Let ε' and γ be suitably chosen constants (with 2ε' < ε). The testing algorithm runs the Max-Cut approximation algorithm shown earlier, with ε' and δ as input, and accepts G if and only if the output μ̂ is at least ρ − ε'. If μ(G) ≥ ρ, then by the Max-Cut approximation guarantee, G is accepted with probability 1 − δ. Conversely, if G is accepted with probability > δ, then μ(G) ≥ ρ − 2ε'. This implies that G is ε-close to some G' in MC_ρ.

48 Property Testing MC_ρ – Proof. Let (V_1,V_2) be a partition of V(G) such that μ(V_1,V_2) ≥ ρ − 2ε'. => 2|V_1|·|V_2| ≥ (ρ − 2ε')N². If 2|V_1|·|V_2| ≥ ρN²: to obtain G' we simply add edges between vertices in V_1 and vertices in V_2 until μ(V_1,V_2) = ρ (why can we do this?). In this case, dist(G,G') ≤ 2ε' < ε.

49 Property Testing MC_ρ – Proof. Else, meaning 2|V_1|·|V_2| < ρN²: we cannot obtain G' by simply adding more edges (why?). Instead, we move vertices from the larger set to the smaller set; then we have "enough room" for the extra edges.

50 Property Testing MC_ρ – Proof. Assume |V_1| < |V_2|, and consider (V_1',V_2') obtained by moving vertices from V_2 to V_1, such that V_1 ⊆ V_1' and |V_1'| has the minimum value with 2|V_1'|·|V_2'| ≥ ρN². => |V_1'| − |V_1| ≤ (ε/γ)·N, so only O(εN²) adjacencies are affected. And now we can proceed to add edges between V_1' and V_2' until we reach the required cut density.

51 The End.

