
1 Algorithms Lecture 10 Lecturer: Moni Naor

2 Linear Programming in Small Dimension
Canonical form of linear programming
Maximize: c_1 x_1 + c_2 x_2 + … + c_d x_d
Subject to:
a_{1,1} x_1 + a_{1,2} x_2 + … + a_{1,d} x_d ≤ b_1
a_{2,1} x_1 + a_{2,2} x_2 + … + a_{2,d} x_d ≤ b_2
...
a_{n,1} x_1 + a_{n,2} x_2 + … + a_{n,d} x_d ≤ b_n
n – number of constraints, d – number of variables (the dimension)
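As a concrete instance of this canonical form (illustrative, not from the slides), here is a small LP with n = 3 constraints and d = 2 variables:

```latex
% n = 3, d = 2; an illustrative instance, not from the slides.
\max\; x_1 + 2x_2
\quad\text{s.t.}\quad
\begin{cases}
 x_1 + x_2 \le 4\\
 -x_1 + x_2 \le 1\\
 x_1 \le 3
\end{cases}
% Optimum (x_1, x_2) = (3/2, 5/2): exactly d = 2 constraints
% (the first two) are tight there and define the solution.
```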

3 Linear Programming in Two Dimensions
[Figure: the feasible region (an intersection of half-planes) and the optimal vertex.]

4 What is special in low dimension
Only d constraints determine the solution:
–The optimal value on those d constraints determines the global one
–The problem is reduced to finding the constraints that matter
–We know that equality holds in those constraints
Generic algorithms:
–Fourier–Motzkin: (n/2)^(2^d)
–Worst case of Simplex: number of basic feasible solutions (vertices), (n choose d)

5 Key Observation
If we know that an inequality constraint is defining, we can reduce the number of variables:
–Projection
–Substitution
[Figure: feasible region and optimal vertex, with the defining constraint 4x_1 - 6x_2 = 4.]
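A minimal sketch of the substitution step for d = 2, assuming constraints are stored as coefficient triples (a1, a2, b) meaning a1·x1 + a2·x2 ≤ b; the function name and this representation are illustrative, not from the slides:

```python
def eliminate_x2(defining, others, c):
    """Use the defining equality a1*x1 + a2*x2 = b (a2 != 0) to
    eliminate x2: substitute x2 = (b - a1*x1) / a2 into the objective
    c = (c1, c2) and into each remaining constraint (p1, p2, q),
    yielding a 1-variable LP."""
    a1, a2, b = defining
    # Objective c1*x1 + c2*x2 becomes (c1 - c2*a1/a2)*x1 + constant.
    new_c = c[0] - c[1] * a1 / a2
    new_cons = []
    for p1, p2, q in others:
        # p1*x1 + p2*(b - a1*x1)/a2 <= q
        #   ==>  (p1 - p2*a1/a2)*x1 <= q - p2*b/a2
        new_cons.append((p1 - p2 * a1 / a2, q - p2 * b / a2))
    return new_c, new_cons

# E.g., for the slide's defining constraint 4*x1 - 6*x2 = 4:
# new_c, new_cons = eliminate_x2((4, -6, 4), other_constraints, (c1, c2))
```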

6 Incremental Algorithm
Input: a set H of n constraints on d variables
Output: B(H), the set of defining constraints
0. If |H| = d, output B(H) = H
1. Pick a random constraint h ∈ H; recursively find B(H \ {h})
2. If B(H \ {h}) does not violate h, output B(H \ {h});
   else project all the constraints onto h and recursively solve this (n-1, d-1) LP
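Below is a runnable sketch of this algorithm for the planar case d = 2, where the "project onto h" step becomes a direct 1-D solve along the violated constraint's line. The bounding box, tolerance, and all names (seidel_2d, _solve_on_line, EPS) are assumptions for illustration, not from the slides:

```python
import random

EPS = 1e-9

def _solve_on_line(a, b, c, cons, cx, cy, box):
    """Maximize cx*x + cy*y on the line a*x + b*y = c (b != 0), subject
    to p*x + q*y <= r for (p, q, r) in cons and to |x| <= box.
    Substituting y = (c - a*x)/b turns everything into bounds on x."""
    lo, hi = -box, box
    for p, q, r in cons:
        coef = p - q * a / b              # (p - q*a/b) * x <= r - q*c/b
        rhs = r - q * c / b
        if coef > EPS:
            hi = min(hi, rhs / coef)
        elif coef < -EPS:
            lo = max(lo, rhs / coef)
        elif rhs < -EPS:
            return None                   # 0 <= negative: nothing on the line
    if lo > hi + EPS:
        return None
    f = cx - cy * a / b                   # objective slope along the line
    x = hi if f >= 0 else lo
    return x, (c - a * x) / b

def seidel_2d(constraints, objective, box=1e9):
    """Maximize objective . (x, y) subject to a*x + b*y <= c for each
    (a, b, c) in constraints, inside |x|, |y| <= box (assumed to contain
    the optimum).  Expected O(n) time; returns (x, y) or None if
    infeasible."""
    cx, cy = objective
    box_cons = [(1, 0, box), (-1, 0, box), (0, 1, box), (0, -1, box)]
    cons = list(constraints)
    random.shuffle(cons)                  # random insertion order
    x = box if cx >= 0 else -box          # optimum over the box alone
    y = box if cy >= 0 else -box
    for i, (a, b, c) in enumerate(cons):
        if a * x + b * y <= c + EPS:
            continue                      # old optimum survives: keep it
        if abs(a) <= EPS and abs(b) <= EPS:
            return None                   # violated "0 <= c" constraint
        # New optimum lies on a*x + b*y = c: solve the 1-D LP over the
        # earlier constraints (the "project onto h" step, with d = 2).
        prefix = cons[:i] + box_cons
        if abs(b) >= abs(a):
            res = _solve_on_line(a, b, c, prefix, cx, cy, box)
            if res is None:
                return None
            x, y = res
        else:                             # swap the roles of x and y
            swapped = [(q, p, r) for (p, q, r) in prefix]
            res = _solve_on_line(b, a, c, swapped, cy, cx, box)
            if res is None:
                return None
            y, x = res
    return x, y

# The small instance from the canonical-form example earlier;
# expect approximately (1.5, 2.5).
print(seidel_2d([(1, 1, 4), (-1, 1, 1), (1, 0, 3)], (1, 2)))
```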

7 Correctness, Termination and Analysis
Correctness: by induction…
Termination: if a non-defining constraint is chosen, there is no need to rerun
Analysis: the probability that h is one of the defining constraints is d/n
(Algorithm as on slide 6.)

8 Analysis
The probability that h is one of the defining constraints is d/n, giving the recurrence below; the claim T(d,n) ≤ d!·n follows by induction:
T(d,n) = (d/n)·T(d-1,n-1) + T(d,n-1)
      ≤ (d/n)·(d-1)!·(n-1) + d!·(n-1)      [induction hypothesis]
      = d!·(n-1)·(1/n + 1) ≤ n·d!
(Algorithm as on slide 6.)
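A quick numerical check of this bound; the unit base case (T = 1 when n ≤ d) is a normalization I'm assuming, since the slide's recurrence only fixes the recursive structure:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def T(d, n):
    """Expected cost from the slide's recurrence, with the base case
    normalized to 1 (an assumption; any constant only rescales)."""
    if d == 0 or n <= d:
        return 1.0
    return T(d, n - 1) + (d / n) * T(d - 1, n - 1)

# The claimed bound T(d, n) <= d! * n holds on a grid of small cases.
assert all(T(d, n) <= factorial(d) * n
           for d in range(1, 8) for n in range(d, 300))
```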

9 How to improve
The algorithm is wasteful: whenever the current solution does not satisfy the newly considered constraint, a new solution is computed from scratch, discarding previous work.
(Algorithm as on slide 6.)

10 Random Sampling Idea
Build the basis by adding the constraints in a manner related to history.
Input: a set H of n constraints on d variables
Output: the set of defining constraints
0. If |H| ≤ c·d², return simplex on H
S ← ∅
Repeat:
  Pick a random R ⊂ H of size r
  Solve recursively on S ∪ R; the solution is u
  V = set of constraints in H violated by u
  If |V| ≤ t, then S ← S ∪ V
Until V = ∅
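A structural sketch of this loop, assuming two hypothetical helpers not given on the slide: base_solver(constraints), which solves a small LP directly (e.g. simplex), and violates(h, u), which tests whether point u violates constraint h. The parameter choices r = d·√n and t = 2nd/r are the ones from slide 13:

```python
import math
import random

def sampling_lp(H, d, base_solver, violates, c=9):
    """Random-sampling LP (sketch).  `base_solver` and `violates` are
    assumed helpers; H is a list of constraints in d variables."""
    n = len(H)
    if n <= c * d * d:
        return base_solver(H)          # base case: |H| <= c*d^2
    r = min(n, int(d * math.sqrt(n)))  # sample size (slide 13)
    t = 2 * n * d / r                  # violation threshold (slide 13)
    S = []
    while True:
        R = random.sample(H, r)
        u = sampling_lp(S + R, d, base_solver, violates, c)
        V = [h for h in H if violates(h, u)]
        if not V:
            return u                   # u satisfies all of H: optimal
        if len(V) <= t:                # successful augmentation:
            S = S + V                  # V contains a new basis constraint
```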

11 Correctness, Termination and Analysis
Claim: each time we augment S (S ← S ∪ V), we add to S a new constraint from the "real" basis B of H:
–If u did not violate any constraint in B, it would be optimal
–So V must contain an element of B which was not in S before
Since |B| = d, we can augment S at most d times.
Therefore the number of constraints in the recursive call is |R| + |S| ≤ r + d·t.
Important factor for the analysis: what is the probability of a successful augmentation?

12 Sampling Lemma
For any H and S ⊂ H, the expected (over R) number of constraints V that violate u (the optimum on S ∪ R) is at most nd/r.
Proof: Let X(R,h) = 1 iff h is violated by u(S ∪ R). We need to bound
E_R[Σ_h X(R,h)] = (1/#R) Σ_{|R|=r} Σ_h X(R,h).
Instead, consider all subsets Q = R ∪ {h} of size r+1:
 = (1/#R) Σ_{|Q|=r+1} Σ_{h∈Q} X(Q \ {h}, h)
 = (#Q/#R) · (r+1) · Prob_{Q, h∈Q}[X(Q \ {h}, h) = 1]
For a fixed Q, only the ≤ d basis constraints of Q can be violated when left out, so the probability is at most d/(r+1); since #Q/#R = (n-r)/(r+1), the whole expression is at most n·d/(r+1) ≤ nd/r.

13 Analysis
Setting t = 2nd/r implies (from Markov's inequality, since E[|V|] ≤ nd/r = t/2):
–the expected number of recursive calls until a successful (|V| ≤ t) augmentation is constant (at most 2)
Number of constraints in a recursive call is bounded by r + O(d²n/r)
Setting r = d·n^{1/2} means that this is O(r)
Total expected running time: T(n) ≤ 2d·T(d·n^{1/2}) + O(d²n)
Result: O((log n)^{log d} · (simplex time)) + O(d²n)
Can be improved to O(d^d + d²n), and to O(d^{√d} + d²n) using [Kalai, Matoušek–Sharir–Welzl]
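The parameter choice in one line of arithmetic, just expanding the slide's quantities:

```latex
% Each augmentation adds at most t constraints and happens at most d
% times, so a recursive call sees at most
|R| + |S| \;\le\; r + d\,t \;=\; r + d\cdot\frac{2nd}{r}
         \;=\; r + \frac{2d^2 n}{r},
% which is balanced (up to a constant) by the slide's choice of r:
r = d\sqrt{n} \;\Rightarrow\;
r + \frac{2d^2 n}{r} = d\sqrt{n} + 2d\sqrt{n} = 3d\sqrt{n} = O(r).
```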

14 References
Motwani and Raghavan, Randomized Algorithms, Chapter 9.10
Michael Goldwasser, A Survey of Linear Programming in Randomized Subexponential Time, http://euler.slu.edu/~goldwasser/publications/SIGACT1995_Abstract.html
Piotr Indyk's course at MIT, Geometric Computation, http://theory.lcs.mit.edu/~indyk/6.838/
Applet: http://web.mit.edu/ardonite/6.838/linear-programming.htm

