Presentation on theme: "Smoothed Analysis of Algorithms: Why The Simplex Method Usually Takes Polynomial Time" — Presentation transcript:

Slide 1: Smoothed Analysis of Algorithms: Why The Simplex Method Usually Takes Polynomial Time. Shang-Hua Teng, Boston University/Akamai. Joint work with Daniel Spielman (MIT). Gaussian perturbation with variance σ^2.

Slide 2: Remarkable Algorithms and Heuristics. Work well in practice, but: Worst case: bad, exponential, contrived. Average case: good, polynomial, meaningful?

Slide 3: Random is not typical.

Slide 4: Smoothed Analysis of Algorithms. Worst case: max_x T(x). Average case: avg_r T(r). Smoothed complexity: max_x avg_r T(x + σr).
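
A minimal sketch (not from the talk) of how the inner quantity avg_r T(x + σr) can be estimated empirically; the cost function T, the input x (a numpy array), and the parameters sigma and n_samples are all illustrative assumptions.

```python
import numpy as np

def smoothed_cost(T, x, sigma, n_samples=1000, rng=None):
    """Monte Carlo estimate of avg_r T(x + sigma*r) for Gaussian r.

    T: callable mapping an input vector to a running-time / cost measure.
    x: the input being perturbed (numpy array).
    sigma: standard deviation of the Gaussian perturbation.
    """
    rng = np.random.default_rng() if rng is None else rng
    costs = [T(x + sigma * rng.standard_normal(x.shape)) for _ in range(n_samples)]
    return float(np.mean(costs))

# The smoothed complexity then takes the maximum of this average over inputs x
# (in an experiment, over a family of candidate worst-case inputs).
```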

Slide 5: Smoothed Analysis of Algorithms. Interpolates between worst case and average case: consider a neighborhood of every input instance. If the smoothed complexity is low, you have to be unlucky to hit a bad input instance.

Slide 6: Complexity Landscape.

Slide 7: Classical Example: Simplex Method for Linear Programming. max z^T x s.t. Ax ≤ y. Worst case: exponential. Average case: polynomial. Widely used in practice.

Slide 8: The Diet Problem.

                       Carbs  Protein  Fat   Iron  Cost
  1 slice bread          30      5     1.5    10   30¢
  1 cup yogurt           10      9     2.5     0   80¢
  2 tsp peanut butter     6      8      18     6   20¢
  US RDA minimum        300     50      70   100

Minimize 30x_1 + 80x_2 + 20x_3
s.t. 30x_1 + 10x_2 + 6x_3 ≥ 300
     5x_1 + 9x_2 + 8x_3 ≥ 50
     1.5x_1 + 2.5x_2 + 18x_3 ≥ 70
     10x_1 + 6x_3 ≥ 100
     x_1, x_2, x_3 ≥ 0
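
As a quick aside (not part of the talk), the diet LP above can be handed to an off-the-shelf solver. Here is a minimal sketch using scipy.optimize.linprog, which minimizes c^T x subject to A_ub x ≤ b_ub, so the ≥ constraints are negated.

```python
from scipy.optimize import linprog

# Diet problem from the slide: minimize cost subject to the RDA minimums.
c = [30, 80, 20]                      # cost (cents) per serving: bread, yogurt, peanut butter
A_ge = [[30, 10, 6],                  # carbs per serving
        [5, 9, 8],                    # protein
        [1.5, 2.5, 18],               # fat
        [10, 0, 6]]                   # iron
b_ge = [300, 50, 70, 100]             # US RDA minimums

# linprog expects A_ub @ x <= b_ub, so flip signs to encode A_ge @ x >= b_ge.
res = linprog(c,
              A_ub=[[-a for a in row] for row in A_ge],
              b_ub=[-b for b in b_ge],
              bounds=[(0, None)] * 3)
print(res.x, res.fun)                 # optimal servings and minimum cost in cents
```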

Slide 9: Linear Programming. max z^T x s.t. Ax ≤ y. Example: max x_1 + x_2 s.t. x_1 ≤ 1, x_2 ≤ 1, -x_1 - 2x_2 ≤ 1.

Slide 10: Smoothed Analysis of Simplex Method. The program max z^T x s.t. Ax ≤ y becomes max z^T x s.t. (A + σG)x ≤ y, where G is a Gaussian matrix.

Slide 11: Smoothed Analysis of Simplex Method. Worst case: exponential. Average case: polynomial. Smoothed complexity: polynomial. Original program: max z^T x s.t. a_i^T x ≤ ±1, ||a_i|| ≤ 1. Perturbed program: max z^T x s.t. (a_i + σg_i)^T x ≤ ±1.
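
A hedged sketch (mine, not from the paper) of the perturbation model above: draw a Gaussian matrix G, form A + σG, and solve both programs with a generic LP solver. The data A, y, z below are the toy LP from Slide 9, used purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy instance (Slide 9): max z^T x  s.t.  A x <= y
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, 1.0])
z = np.array([1.0, 1.0])

def solve_lp(A, y, z):
    # linprog minimizes, so negate z to maximize z^T x; variables are free.
    res = linprog(-z, A_ub=A, b_ub=y, bounds=[(None, None)] * len(z))
    return res.x

sigma = 0.01
G = rng.standard_normal(A.shape)        # Gaussian perturbation of the constraint matrix
x_orig = solve_lp(A, y, z)
x_pert = solve_lp(A + sigma * G, y, z)
print(x_orig, x_pert)                   # optimal solutions of the original and perturbed programs
```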

Slide 12: Perturbation yields approximation, for a polytope of good aspect ratio.

Slide 13: But, combinatorially...

Slide 14: The Simplex Method.

Slide 15: History of Linear Programming. Simplex Method (Dantzig '47). Exponential worst case (Klee-Minty '72). Average-case analysis (Borgwardt '77, Smale '82, Haimovich, Adler, Megiddo, Shamir, Karp, Todd). Ellipsoid Method (Khachiyan '79). Interior-Point Method (Karmarkar '84). Randomized simplex method, m^O(√d) (Kalai '92, Matousek-Sharir-Welzl '92).

Slide 16: Shadow Vertices.

Slide 17: Another shadow.

Slide 18: Shadow vertex pivot rule. [Figure labels: objective, start, z]

Slide 19: Theorem: For every plane, the expected size of the shadow of the perturbed polytope is poly(m, d, 1/σ).
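
A rough numerical illustration (my own, not from the talk) of measuring shadow size: take m Gaussian-perturbed points in d dimensions, project their convex hull onto a fixed 2-plane, and count the vertices of the resulting 2D hull. This is a simplified proxy experiment, not the exact quantity bounded in the theorem.

```python
import numpy as np
from scipy.spatial import ConvexHull

def shadow_size(points, plane):
    """Number of vertices of the projection (shadow) of conv(points) onto a 2-plane.

    plane: a (d, 2) matrix whose columns form an orthonormal basis of the plane.
    """
    shadow = points @ plane                                 # project each point onto the plane
    return len(ConvexHull(shadow).vertices)

rng = np.random.default_rng(1)
m, d, sigma = 200, 10, 0.1
base = rng.standard_normal((m, d))
base /= np.linalg.norm(base, axis=1, keepdims=True)         # unit vectors a_i with ||a_i|| = 1
plane = np.linalg.qr(rng.standard_normal((d, 2)))[0]        # random 2-plane

sizes = [shadow_size(base + sigma * rng.standard_normal((m, d)), plane)
         for _ in range(20)]
print(np.mean(sizes))                                       # average shadow size over perturbations
```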

Slide 20: Theorem: For every z, the two-phase algorithm runs in expected time poly(m, d, 1/σ).

Slide 21: A local condition for optimality. The vertex defined by a_1, ..., a_d maximizes z iff z ∈ cone(a_1, ..., a_d). [Figure labels: 0, a_1, a_2, z]
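
To make the condition concrete, here is a small sketch (mine) that checks z ∈ cone(a_1, ..., a_d) for linearly independent a_1, ..., a_d by solving for the cone coefficients and testing nonnegativity.

```python
import numpy as np

def in_cone(A_cols, z, tol=1e-9):
    """Is z in cone(a_1, ..., a_d)?  A_cols holds the a_i as columns (assumed independent)."""
    lam = np.linalg.solve(A_cols, z)               # coefficients with z = sum_i lam_i * a_i
    return bool(np.all(lam >= -tol))

# Example in 2D: the vertex where a_1^T x = 1 and a_2^T x = 1 meet
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
z = np.array([1.0, 1.0])
print(in_cone(np.column_stack([a1, a2]), z))       # True: this vertex maximizes z
```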

Slide 22: Primal: a_1^T x ≤ 1, a_2^T x ≤ 1, ..., a_m^T x ≤ 1. Polar: ConvexHull(a_1, a_2, ..., a_m).

Slide 24: Polar Linear Program. max λ such that λz ∈ ConvexHull(a_1, a_2, ..., a_m).
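
One way (a sketch of my own, with made-up data) to solve this polar program with a generic LP solver: write λz as a convex combination Σ_i μ_i a_i and maximize λ over the variables (λ, μ).

```python
import numpy as np
from scipy.optimize import linprog

def polar_lp(points, z):
    """max lambda s.t. lambda*z in ConvexHull(points); points has shape (m, d)."""
    m, d = points.shape
    # Variables: [lambda, mu_1, ..., mu_m]
    c = np.zeros(m + 1); c[0] = -1.0              # maximize lambda  ->  minimize -lambda
    A_eq = np.zeros((d + 1, m + 1))
    A_eq[:d, 0] = -z                              # -lambda*z + sum_i mu_i a_i = 0
    A_eq[:d, 1:] = points.T
    A_eq[d, 1:] = 1.0                             # sum_i mu_i = 1 (convex combination)
    b_eq = np.zeros(d + 1); b_eq[d] = 1.0
    bounds = [(None, None)] + [(0, None)] * m     # lambda free, mu_i >= 0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0]

pts = np.array([[2.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])   # illustrative points
print(polar_lp(pts, np.array([1.0, 1.0])))                # 1.0: hull reaches (1,1) along z
```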

Slide 25: [Figure labels: initial simplex, opt simplex]

Slide 26: Shadow vertex pivot rule.

Slide 28: Count facets by discretizing to N directions, N → ∞.

Slide 29: Count pairs of consecutive directions that land in different facets. Pr[a consecutive pair lands in different facets] < c/N, so by linearity of expectation over the N pairs we expect at most c facets.
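
A small sketch (mine, for intuition only) of this discretization argument in two dimensions: sweep N directions around the circle, record which point of the hull maximizes each direction, and count how often the maximizer changes between consecutive directions.

```python
import numpy as np

def count_facets_by_discretization(points2d, N=10000):
    """Approximate the number of hull vertices of a 2D point set by sweeping N directions."""
    angles = 2 * np.pi * np.arange(N) / N
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])   # N unit directions
    maximizers = np.argmax(points2d @ dirs.T, axis=0)          # maximizing point per direction
    # Each change of maximizer between consecutive directions witnesses one hull edge.
    return int(np.sum(maximizers != np.roll(maximizers, 1)))

rng = np.random.default_rng(2)
pts = rng.standard_normal((100, 2))
print(count_facets_by_discretization(pts))    # ~ number of convex-hull vertices of pts
```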

Slide 30: Expect a cone of large angle.

Slide 31: Angle and distance. [Figure]

Slide 32: Isolate on one simplex.

Slide 33: Integral formulation.

Slide 34: Example: For a and b Gaussian distributed points, given that the segment ab intersects the x-axis, Prob[θ < ε] = O(ε^2), where θ is the angle at which ab crosses the x-axis. [Figure labels: θ, a, b]
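
A Monte Carlo sanity check (my own) of this O(ε^2) behavior: sample Gaussian pairs a, b, keep those whose segment crosses the x-axis, and estimate the probability that the crossing angle is below ε.

```python
import numpy as np

def crossing_angle_prob(eps, n=1_000_000, rng=None):
    """Estimate Pr[angle of segment ab with the x-axis < eps | ab crosses the x-axis]."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.standard_normal((n, 2))
    b = rng.standard_normal((n, 2))
    crosses = np.sign(a[:, 1]) != np.sign(b[:, 1])             # endpoints on opposite sides
    d = b[crosses] - a[crosses]
    theta = np.arctan2(np.abs(d[:, 1]), np.abs(d[:, 0]))       # angle with the x-axis in [0, pi/2]
    return float(np.mean(theta < eps))

for eps in (0.4, 0.2, 0.1):
    print(eps, crossing_angle_prob(eps, rng=np.random.default_rng(3)))
# The estimates should shrink roughly like eps^2 as eps decreases.
```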

Slides 35-40: [Figure-only slides showing the points a and b; no text.]

Slide 41: Claim: For small ε, Pr[θ < ε] = O(ε^2). [Figure labels: θ, a, b]

Slide 42: Change of variables: write a and b in terms of the crossing point z, the distances u and v from z to a and b, and the angle θ; then da db = (u + v)|sin θ| du dv dz dθ. [Figure labels: a, b, z, u, v, θ]
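
A numerical check (my own) of this Jacobian: under the parametrization a = (z + u cos θ, u sin θ), b = (z - v cos θ, -v sin θ), compute the 4×4 Jacobian of (a, b) with respect to (z, u, v, θ) by finite differences and compare its determinant with (u + v)|sin θ|.

```python
import numpy as np

def ab_from(params):
    """Map (z, u, v, theta) to the four coordinates (a_x, a_y, b_x, b_y)."""
    z, u, v, th = params
    a = np.array([z + u * np.cos(th), u * np.sin(th)])
    b = np.array([z - v * np.cos(th), -v * np.sin(th)])
    return np.concatenate([a, b])

def numerical_jacobian_det(params, h=1e-6):
    J = np.zeros((4, 4))
    for i in range(4):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += h
        p_minus[i] -= h
        J[:, i] = (ab_from(p_plus) - ab_from(p_minus)) / (2 * h)   # central differences
    return abs(np.linalg.det(J))

params = np.array([0.3, 1.2, 0.7, 0.9])           # arbitrary z, u, v, theta
z, u, v, th = params
print(numerical_jacobian_det(params))              # numerical |det d(a,b)/d(z,u,v,theta)|
print((u + v) * abs(np.sin(th)))                   # the slide's formula: (u+v)|sin(theta)|
```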

Slide 43: Analysis: For small ε, Pr[θ < ε] = O(ε^2). A slight change in θ has little effect on the integrand for all but very rare u, v, z.

Slide 44: Distance: Gaussian distributed corners. [Figure labels: a_1, a_2, a_3, p]

Slide 45: Idea: fix by perturbation.

Slide 46: Trickier in 3D.

Slide 47: Future Research: Simplex Method. Smoothed analysis of other pivot rules. Analysis under relative perturbations. Trace solutions as we un-perturb. A strongly polynomial algorithm for linear programming?

Slide 48: A Theory Closer to Practice. Optimization algorithms and heuristics, such as Newton's method, conjugate gradient, simulated annealing, differential evolution, etc. Computational geometry, scientific computing, and numerical analysis. Heuristics for solving instances of NP-hard problems. Discrete problems? Shrink the intuition gap between theory and practice.

