
1
Guy Kindler, Weizmann Institute

2
About this talk
We'll try to understand some notions and their relations:
- Combinatorial optimization problems
- Approximation: relaxation by semi-definite programs
- Integrality gaps
- Hardness of approximation
Main example: the Max-Cut problem

3
Combinatorial optimization problems
Defined by an input, a search space, and an objective function. Example: MAX-CUT.

4
The MAX-CUT problem
Input: a graph G = (V, E)
Search space: partitions V = (C, C^c)
Objective function: w(C) = fraction of edges cut
Goal: find mc(G) = max_C {w(C)}
[Karp 72]: MAX-CUT is NP-complete
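Since Max-Cut is NP-complete, exhaustive search takes exponential time, but on a small graph mc(G) can still be computed directly. A minimal sketch (function names are ours, not from the talk):

```python
from itertools import product

def max_cut_brute_force(n, edges):
    """Try all 2^n partitions of the vertices {0..n-1} and return mc(G),
    the maximum fraction of edges cut.  Feasible only for small n."""
    best = 0.0
    for side in product([0, 1], repeat=n):
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut / len(edges))
    return best

# On the 5-cycle the best partition cuts 4 of the 5 edges, so mc(C5) = 0.8.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_cut_brute_force(5, c5))  # 0.8
```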

5
Example: MAX-CUT
Input: a graph G = (V, E); search space: partitions V = (C, C^c); objective: w(C) = fraction of edges cut; goal: mc(G) = max_C {w(C)}.
α-approximation: output S such that mc(G) ≥ S ≥ α · mc(G).
History: a ½-approximation is easy, and was the best known factor for a long time.
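The easy ½-approximation can be obtained, for instance, by greedy placement: put each vertex opposite the majority of its already-placed neighbours, so at least half of all edges end up cut. A sketch of one standard way to get the ½ guarantee (the talk does not specify which; names are ours):

```python
def greedy_half_cut(n, edges):
    """Greedy ½-approximation: place each vertex opposite the majority of
    its already-placed neighbours; at least half of all edges get cut."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = {}
    for v in range(n):
        on_zero = sum(1 for u in adj[v] if side.get(u) == 0)
        on_one = sum(1 for u in adj[v] if side.get(u) == 1)
        side[v] = 1 if on_zero >= on_one else 0
    return sum(1 for u, v in edges if side[u] != side[v]) / len(edges)

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(greedy_half_cut(5, c5))  # always at least 0.5; here it finds 0.8
```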

6
Semi-definite relaxation
Introducing geometry into combinatorial optimization [GW 95]

7
Arithmetization
G = (V, E): assign each vertex u a variable x_u ∈ {-1, 1}; a cut corresponds to a ±1 assignment, and an edge (u, v) is cut iff x_u x_v = -1, so
mc(G) = max_{x ∈ {-1,1}^V} avg_{(u,v) ∈ E} (1 - x_u x_v)/2.
Problem: we can't maximize quadratic functions, even over convex domains.

8
Relaxation by geometric embedding
G = (V, E): relax each variable x_u ∈ {-1, 1} to a unit vector x_u ∈ R^n, replacing products x_u x_v by inner products ⟨x_u, x_v⟩.


11
Relaxation by geometric embedding
With unit vectors x_u ∈ R^n, the objective avg_{(u,v) ∈ E} (1 - ⟨x_u, x_v⟩)/2 is linear in the inner products: now we're maximizing a linear function over a convex domain! (unit sphere in R^n)
This is the semi-definite relaxation; denote its optimum rmc(G).
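The relaxed problem assigns one unit vector per vertex. Rather than calling a real SDP solver, a rough numeric sketch can locally maximise the relaxed objective by gradient ascent with re-normalisation (all names here are ours; a production implementation would use an actual SDP solver):

```python
import numpy as np

def relaxed_maxcut_sketch(n, edges, steps=5000, lr=0.1, seed=0):
    """Heuristic for the geometric relaxation: one unit vector in R^n per
    vertex, locally maximising (1/|E|) * sum_edges (1 - <x_u, x_v>)/2
    by gradient ascent followed by projection back to the unit sphere."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, n))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    m = len(edges)
    for _ in range(steps):
        grad = np.zeros_like(X)
        for u, v in edges:
            grad[u] -= X[v] / (2 * m)   # d/dX_u of (1 - <X_u, X_v>)/(2m)
            grad[v] -= X[u] / (2 * m)
        X += lr * grad
        X /= np.linalg.norm(X, axis=1, keepdims=True)
    return sum((1 - X[u] @ X[v]) / 2 for u, v in edges) / m

# For the 5-cycle the relaxation's optimum is known in closed form:
# rmc(C5) = (1 - cos(4*pi/5))/2 ≈ 0.905, strictly above mc(C5) = 0.8.
val = relaxed_maxcut_sketch(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(round(val, 3))
```

The gap between this value and 0.8 on the 5-cycle is a first glimpse of the integrality gap discussed later in the talk.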

12
Relaxation by geometric embedding (unit sphere in R^n)
Is this really a relaxation? Yes: any ±1 assignment embeds as a pair of antipodal unit vectors along a fixed axis, so rmc(G) ≥ mc(G).


14
Analysis by randomized rounding (unit sphere in R^n)
Round the vectors back to a cut: pick a uniformly random hyperplane through the origin, and let C be the set of vertices whose vectors fall on one side of it.

15
Analysis by randomized rounding (unit sphere in R^n)
An edge (u, v) is cut exactly when the hyperplane separates x_u from x_v, which happens with probability arccos(⟨x_u, x_v⟩)/π: the edge's "donation" to the expected cut (*).

16
Analysis by randomized rounding
So the expected cut weight is avg_{(u,v) ∈ E} arccos(⟨x_u, x_v⟩)/π, while each edge contributes (1 - ⟨x_u, x_v⟩)/2 to rmc(G). The ratio of these two quantities is minimized at an inner product ρ*, where it equals the constant α_GW ≈ 0.878.
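The arccos(ρ)/π donation can be checked numerically: for two unit vectors with inner product ρ, a uniformly random hyperplane through the origin separates them with exactly that probability. A small Monte-Carlo sketch (names ours):

```python
import numpy as np

def separation_probability(rho, trials=200_000, seed=1):
    """Estimate the probability that a uniformly random hyperplane through
    the origin separates two unit vectors with inner product rho."""
    rng = np.random.default_rng(seed)
    xu = np.array([1.0, 0.0])
    xv = np.array([rho, np.sqrt(1 - rho ** 2)])  # working in span(xu, xv) suffices
    g = rng.standard_normal((trials, 2))          # normal vector of a random hyperplane
    cut = (g @ xu > 0) != (g @ xv > 0)
    return cut.mean()

print(separation_probability(-0.5))  # close to arccos(-0.5)/pi = 2/3
```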

17
Analysis by randomized rounding
So mc(G) ≥ E[w(C)] ≥ α_GW · rmc(G) ≥ α_GW · mc(G).
The left-hand bound is tight iff all inner products equal ρ*.
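The constant α_GW and the critical inner product ρ* can be recovered numerically as the minimiser of the ratio between the rounding guarantee arccos(ρ)/π and the relaxation's per-edge contribution (1 - ρ)/2:

```python
import numpy as np

# alpha_GW = min over rho in [-1, 1) of (arccos(rho)/pi) / ((1 - rho)/2),
# attained at rho = rho*.
rhos = np.linspace(-0.9999999, 0.999, 2_000_000)
ratios = (np.arccos(rhos) / np.pi) / ((1 - rhos) / 2)
alpha_gw = float(ratios.min())
rho_star = float(rhos[ratios.argmin()])
print(round(alpha_gw, 5), round(rho_star, 3))  # ≈ 0.87857, rho* ≈ -0.689
```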

18
A 0.879 approximation for Max-Cut
The [GW 95] algorithm:
- Given G, compute rmc(G)
- Let S = α_GW · rmc(G)
- Output S
Then mc(G) ≥ S ≥ α_GW · mc(G).
Is α_GW the best constant here? The bound mc(G) ≥ α_GW · rmc(G) is tight iff all inner products are ρ*: is there a graph where this occurs?
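The whole pipeline can be sketched on the 5-cycle, substituting the known closed-form optimum of the relaxation for an SDP solver (an illustration under that assumption, not the talk's code):

```python
import numpy as np

# Optimal relaxation vectors for C5: vertex k at angle 4*pi*k/5, so adjacent
# vectors have inner product cos(4*pi/5) and rmc(C5) = (1 - cos(4*pi/5))/2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
X = np.array([[np.cos(4 * np.pi * k / 5), np.sin(4 * np.pi * k / 5)]
              for k in range(5)])
rmc = sum((1 - X[u] @ X[v]) / 2 for u, v in edges) / len(edges)

# Hyperplane rounding: cut by the sign of a random projection.
rng = np.random.default_rng(2)
best = 0.0
for _ in range(100):
    side = X @ rng.standard_normal(2) > 0
    best = max(best, sum(side[u] != side[v] for u, v in edges) / len(edges))

print(round(rmc, 4), best)  # rmc ≈ 0.9045; the rounding recovers mc(C5) = 0.8
```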

19
Integrality gap
Measuring the quality of the geometric approximation [FS 96]

20
The integrality gap of Max-Cut
[Figure: for an instance G, the weight axis w marks mc(G), rmc(G), S(G) and r-S(G).]

21
The integrality gap of Max-Cut
The integrality gap IG = min_G {mc(G)/rmc(G)} satisfies IG ≥ α_GW, with equality for some instance G*.
The [GW 95] algorithm: given G, output IG · rmc(G).

22
The integrality gap of Max-Cut
The [GW 95] algorithm: given G, output IG · rmc(G), where IG = α_GW is attained by some G*.
Using α · rmc(G) to approximate mc(G), the factor α = IG cannot be improved!
And yet on G* itself the algorithm computes mc(G*) perfectly!

23
The Feige-Schechtman graph G* (unit sphere in R^q)
Vertices: the unit sphere S^n; Edges: u ~ v iff ⟨u, v⟩ = ρ*.
Then rmc(G*) ≥ (1 - ρ*)/2, and [FS]: mc(G*) = arccos(ρ*)/π, so mc(G*)/rmc(G*) ≤ α_GW.

24
From IG to hardness
A geometric trick may actually be inherent [KKMO 05]

25
Thoughts about integrality gaps
- The integrality gap is an artifact of the relaxation.
- The relaxation does great on an instance for which the integrality gap is achieved.
- And yet, sometimes the integrality gap is provably* the best approximation factor achievable:
  [KKMO 04]: under UGC, α_GW is optimal for Max-Cut.
  [HK 03], [HKKSW], [KO 06], [ABHS 05], [AN 02]
*under some reasonable complexity-theoretic assumptions

26
Thoughts about integrality gaps
- The integrality gap is an artifact of the relaxation.
- An algorithm does great on an instance for which the integrality gap is achieved.
- And yet, sometimes the integrality gap is provably* the best approximation factor achievable:
  [KKMO 04]: under UGC, α_GW is optimal for Max-Cut.
  [HK 03], [HKKSW], [KO 06], [ABHS 05], [AN 02]
- And the IG instance G* is used in the hardness proof!

27
A recipe for proving hardness
Take the instance G*. [Figure: the gap between mc(G*) and rmc(G*) on the weight axis w.]

28
A recipe for proving hardness
Add "teeth" to S(G*): special cuts which achieve weight rmc(G*). [Figure: the teeth reach the rmc(G*) level on the weight axis.]

29
A recipe for proving hardness
Now combine several instances, such that finding a point which belongs to all teeth becomes a hard combinatorial optimization problem.


31
Determining whether mc(G′) = mc(G*) or mc(G′) = rmc(G*) is intractable.
Factor of hardness: mc(G*)/rmc(G*) = IG!

32
Adding teeth to Feige-Schechtman (unit sphere in R^q)
Vertices: the unit sphere S^n; Edges: u ~ v iff ⟨u, v⟩ = ρ*.

33
Adding teeth to Feige-Schechtman
Discrete version: Vertices: {-1, 1}^n; Edges: u ~ v iff ⟨u, v⟩/n = ρ*.
A random edge (x, y): x uniform in {-1, 1}^n, and independently for each i, y_i = x_i w.p. ½ + ½ρ and y_i = -x_i w.p. ½ - ½ρ, so that E[x_i y_i] = ρ.
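The ρ-correlated edge distribution above is easy to sample and check: flipping each coordinate independently with probability ½ - ½ρ gives E[x_i y_i] = ρ. A quick sketch (names ours):

```python
import numpy as np

def sample_noisy_edge(n, rho, rng):
    """Sample a random edge (x, y): x uniform in {-1,1}^n, and each y_i
    equals x_i with prob. 1/2 + rho/2, -x_i with prob. 1/2 - rho/2."""
    x = rng.choice([-1, 1], size=n)
    flip = rng.random(n) < (0.5 - 0.5 * rho)
    return x, np.where(flip, -x, x)

rng = np.random.default_rng(3)
n, rho = 50, -0.5
pairs = [sample_noisy_edge(n, rho, rng) for _ in range(4000)]
corr = float(np.mean([(x * y).mean() for x, y in pairs]))
print(round(corr, 2))  # ≈ rho = -0.5
```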

34
Adding teeth to Feige-Schechtman
Is mc = arccos(ρ)/π, as on the sphere? No: for the dictator cut C(x) = x_7,
w(C) = P_edge[x_7 ≠ y_7] = (1 - ρ)/2 !!

35
Adding teeth to Feige-Schechtman
Is mc = arccos(ρ)/π? For the dictator cut C(x) = x_7, w(C) = P_edge[x_7 ≠ y_7] = (1 - ρ)/2 !!
But for cuts with no influential coordinates, e.g. C(x) = sign(Σ_i x_i) = Maj(x),
w(C) = P[Maj(x) ≠ Maj(y)] → (arccos ρ)/π.
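The contrast between the two kinds of cut shows up numerically: the dictator cut x_7 is cut with probability exactly (1 - ρ)/2, while the majority cut tends to the sphere value arccos(ρ)/π. A Monte-Carlo sketch (names ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho, trials = 101, -0.5, 50_000   # n odd, so Maj never ties

# rho-correlated pairs (x, y) from the noisy hypercube.
x = rng.choice([-1, 1], size=(trials, n))
flip = rng.random((trials, n)) < (0.5 - 0.5 * rho)
y = np.where(flip, -x, x)

# Dictator cut C(x) = x_7: cut weight is exactly (1 - rho)/2 = 0.75 here.
dictator = float((x[:, 7] != y[:, 7]).mean())

# Majority cut C(x) = sign(sum_i x_i): weight tends to arccos(rho)/pi ≈ 0.667.
majority = float((np.sign(x.sum(axis=1)) != np.sign(y.sum(axis=1))).mean())

print(round(dictator, 2), round(majority, 2))
```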

36
Adding teeth to Feige-Schechtman
Is mc = arccos(ρ)/π? For an axis-parallel (dictator) cut: w(C) = (1 - ρ)/2.
[MOO 05]: if I_i(C) < τ for every coordinate i, then w(C) ≤ (arccos ρ)/π + o_τ(1).
So the ratio between the weight of regular cuts and of the teeth cuts is α_GW (for ρ = ρ*).

37
Conclusion
We tried to understand some notions and their relations:
- Combinatorial optimization problems
- Approximation: relaxation by semi-definite programs
- Integrality gaps
- Hardness of approximation
