Chapter 10: Iterative Improvement. The Simplex Method. The Design and Analysis of Algorithms

2 Iterative Improvement
• Introduction
• Linear Programming
• The Simplex Method
• Standard Form of LP Problem
• Basic Feasible Solutions
• Outline of the Simplex Method
• Example
• Notes on the Simplex Method
• Improvements

3 Introduction
Algorithm design technique for solving optimization problems:
• Start with a feasible solution
• Repeat the following step until no improvement can be found: change the current feasible solution to a feasible solution with a better value of the objective function
• Return the last feasible solution as optimal
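
The scheme above can be captured in a few lines. The following is a minimal sketch, not from the slides; the names iterative_improvement, neighbors, and objective are illustrative placeholders for whatever a concrete problem supplies.

```python
def iterative_improvement(initial_solution, neighbors, objective):
    """Generic iterative improvement: keep moving to a better feasible
    neighbor until no improving change can be found (a local optimum)."""
    current = initial_solution
    while True:
        improved = False
        for candidate in neighbors(current):      # "small" local changes
            if objective(candidate) > objective(current):
                current, improved = candidate, True
                break
        if not improved:                          # no further improvement possible
            return current                        # return the last feasible solution
```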

4 Introduction
Note: Typically, a change to the current solution is "small" (local search)
Major difficulty: local optimum vs. global optimum

5 Important Examples
• Simplex method
• Ford-Fulkerson algorithm for the maximum flow problem
• Maximum matching of graph vertices
• Gale-Shapley algorithm for the stable marriage problem

6 Linear Programming
A linear programming (LP) problem is to optimize a linear function of several variables subject to linear constraints:
    maximize (or minimize)  c_1 x_1 + ... + c_n x_n
    subject to              a_i1 x_1 + ... + a_in x_n ≤ (or ≥ or =) b_i,  i = 1, ..., m,
                            x_1 ≥ 0, ..., x_n ≥ 0
The function z = c_1 x_1 + ... + c_n x_n is called the objective function; the constraints x_1 ≥ 0, ..., x_n ≥ 0 are called non-negativity constraints.

7 Example
    maximize 3x + 5y
    subject to x + y ≤ 4
               x + 3y ≤ 6
               x ≥ 0, y ≥ 0
The feasible region is the set of points defined by the constraints.
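
As a quick check of this example (not part of the slides), SciPy's LP solver can be used, assuming SciPy is installed; linprog minimizes, so the objective is negated.

```python
from scipy.optimize import linprog

c = [-3, -5]                 # maximize 3x + 5y  ==  minimize -3x - 5y
A_ub = [[1, 1],              # x +  y <= 4
        [1, 3]]              # x + 3y <= 6
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)       # optimum at x = 3, y = 1 with z = 14 (see the example later)
```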

8 Geometric solution
    maximize 3x + 5y
    subject to x + y ≤ 4
               x + 3y ≤ 6
               x ≥ 0, y ≥ 0
Extreme Point Theorem: Any LP problem with a nonempty bounded feasible region has an optimal solution; moreover, an optimal solution can always be found at an extreme point of the problem's feasible region.

9 Possible outcomes in solving an LP problem
• has a finite optimal solution, which may not be unique
• unbounded: the objective function of a maximization (minimization) LP problem is unbounded from above (below) on its feasible region
• infeasible: there are no points satisfying all the constraints, i.e., the constraints are contradictory

10 The Simplex Method
The simplex method is the classic method for solving LP problems, one of the most important algorithms ever invented.
Invented by George Dantzig in 1947 (Stanford University).
Based on the iterative improvement idea: generates a sequence of adjacent points of the problem's feasible region with improving values of the objective function until no further improvement is possible.

11 Standard form of LP problem
• must be a maximization problem
• all constraints (except the non-negativity constraints) must be in the form of linear equations
• all the variables must be required to be nonnegative
Thus, the general linear programming problem in standard form with m constraints and n unknowns (n ≥ m) is
    maximize   c_1 x_1 + ... + c_n x_n
    subject to a_i1 x_1 + ... + a_in x_n = b_i,  i = 1, ..., m,
               x_1 ≥ 0, ..., x_n ≥ 0
Every LP problem can be represented in such form.

12 Example
Original problem:  maximize 3x + 5y
                   subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0
Standard form:     maximize 3x + 5y + 0u + 0v
                   subject to x + y + u = 4, x + 3y + v = 6, x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0
Variables u and v, transforming the inequality constraints into equality constraints, are called slack variables.

13 Basic feasible solutions
A basic solution to a system of m linear equations in n unknowns (n ≥ m) is obtained by setting n – m variables to 0 and solving the resulting system to get the values of the other m variables. The variables set to 0 are called nonbasic; the variables obtained by solving the system are called basic.
A basic solution is called feasible if all its (basic) variables are nonnegative.
Example: x + y + u = 4, x + 3y + v = 6
(0, 0, 4, 6) is a basic feasible solution (x, y are nonbasic; u, v are basic).
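
To make the definition concrete, the sketch below (assuming NumPy; not from the slides) enumerates all basic solutions of the example system by choosing every pair of basic columns and solving the resulting 2x2 system; the choice {u, v} yields the basic feasible solution (0, 0, 4, 6) mentioned above.

```python
from itertools import combinations
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],      # x +  y + u     = 4
              [1.0, 3.0, 0.0, 1.0]])     # x + 3y     + v = 6
b = np.array([4.0, 6.0])
names = ["x", "y", "u", "v"]
m, n = A.shape

for basic in combinations(range(n), m):          # choose the m basic variables
    B = A[:, basic]
    if abs(np.linalg.det(B)) < 1e-12:            # basic columns must be independent
        continue
    values = np.zeros(n)
    values[list(basic)] = np.linalg.solve(B, b)  # nonbasic variables stay at 0
    tag = "feasible" if np.all(values >= 0) else "infeasible"
    print({names[i]: values[i] for i in range(n)}, tag)
# The choice of basic variables {u, v} gives (0, 0, 4, 6), a basic feasible solution.
```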

14 Simplex Tableau
    maximize z = 3x + 5y + 0u + 0v
    subject to x + y + u = 4
               x + 3y + v = 6
               x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0

15 Outline of the Simplex Method
Step 0 [Initialization] Present a given LP problem in standard form and set up the initial tableau.
Step 1 [Optimality test] If all entries in the objective row are nonnegative, stop: the tableau represents an optimal solution.
Step 2 [Find entering variable] Select (the most) negative entry in the objective row. Mark its column to indicate the entering variable and the pivot column.

16 Outline of the Simplex Method
Step 3 [Find departing variable] For each positive entry in the pivot column, calculate the θ-ratio by dividing that row's entry in the rightmost column by its entry in the pivot column. (If there are no positive entries in the pivot column, stop: the problem is unbounded.) Find the row with the smallest θ-ratio, and mark this row to indicate the departing variable and the pivot row.
Step 4 [Form the next tableau] Divide all the entries in the pivot row by its entry in the pivot column. Subtract from each of the other rows, including the objective row, the new pivot row multiplied by the entry in the pivot column of the row in question. Replace the label of the pivot row by the variable name of the pivot column and go back to Step 1.
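
Steps 0 through 4 translate almost line for line into code. The sketch below (using NumPy; simplex_tableau and its variable names are our own, not from the slides) assumes the problem is given as max c·x subject to Ax ≤ b with b ≥ 0, so that the slack variables supply the initial basic feasible solution; a Phase-I procedure for finding an initial BFS in the general case is omitted, as is any anti-cycling rule.

```python
import numpy as np

def simplex_tableau(c, A, b):
    """Tableau simplex for  max c*x  s.t.  A x <= b, x >= 0,  with b >= 0."""
    m, n = A.shape
    # Step 0: initial tableau [A | I | b] with objective row [-c | 0 | 0].
    tableau = np.zeros((m + 1, n + m + 1))
    tableau[:m, :n] = A
    tableau[:m, n:n + m] = np.eye(m)
    tableau[:m, -1] = b
    tableau[-1, :n] = -np.asarray(c, dtype=float)
    basis = list(range(n, n + m))                 # the slack variables start as basic
    while True:
        obj = tableau[-1, :-1]
        if np.all(obj >= -1e-9):                  # Step 1: optimality test
            break
        pivot_col = int(np.argmin(obj))           # Step 2: most negative objective entry
        col = tableau[:m, pivot_col]
        if np.all(col <= 1e-9):                   # Step 3: no positive entry -> unbounded
            raise ValueError("problem is unbounded")
        ratios = [tableau[i, -1] / col[i] if col[i] > 1e-9 else np.inf
                  for i in range(m)]              # theta-ratios over positive entries
        pivot_row = int(np.argmin(ratios))        # smallest ratio = departing variable
        # Step 4: pivot on (pivot_row, pivot_col).
        tableau[pivot_row, :] /= tableau[pivot_row, pivot_col]
        for i in range(m + 1):
            if i != pivot_row:
                tableau[i, :] -= tableau[i, pivot_col] * tableau[pivot_row, :]
        basis[pivot_row] = pivot_col              # relabel the pivot row, go to Step 1
    x = np.zeros(n + m)
    x[basis] = tableau[:m, -1]
    return x[:n], tableau[-1, -1]                 # optimal x and optimal objective value
```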

17 Example of Simplex Method
    maximize z = 3x + 5y + 0u + 0v
    subject to x + y + u = 4
               x + 3y + v = 6
               x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0
The method moves through the basic feasible solutions
(0, 0, 4, 6) with z = 0  →  (0, 2, 2, 0) with z = 10  →  (3, 1, 0, 0) with z = 14.
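
Assuming the simplex_tableau sketch above, running it on this example reproduces the slide's result; the initial tableau it builds is shown in the comments.

```python
import numpy as np

# Initial tableau for the example (rows u, v; objective row last):
#        x    y    u    v  |  rhs
#  u     1    1    1    0  |   4
#  v     1    3    0    1  |   6
#  obj  -3   -5    0    0  |   0
x_opt, z_opt = simplex_tableau(c=[3, 5],
                               A=np.array([[1.0, 1.0], [1.0, 3.0]]),
                               b=np.array([4.0, 6.0]))
print(x_opt, z_opt)   # [3. 1.] 14.0, i.e. the basic feasible solution (3, 1, 0, 0)
```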

18 Notes on the Simplex Method
• Finding an initial basic feasible solution may pose a problem
• Theoretical possibility of cycling
• Typical number of iterations is between m and 3m, where m is the number of equality constraints in the standard form; number of operations per iteration: O(nm)
• Worst-case efficiency is exponential

19 Improvements
L. G. Khachian introduced the ellipsoid method (1979), which seemed to overcome some of the simplex method's limitations: O(n^6). Disadvantage: it runs with the same complexity on all problems.
Narendra K. Karmarkar of AT&T Bell Laboratories proposed in 1984 a new, very efficient interior-point algorithm: O(n^3.5). In empirical tests it performs competitively with the simplex method.

Unweighted Bipartite Matching

Definitions
Matching: a set of edges in which no two edges share a vertex.
Free vertex: a vertex not incident to any edge of the matching.

Definitions
Maximum Matching: a matching with the largest number of edges.

Definition
Note that a maximum matching is not necessarily unique.

Intuition
Let the top set of vertices be men and the bottom set of vertices be women.
Suppose each edge represents a pair of a man and a woman who like each other.
Maximum matching tries to maximize the number of couples!

Alternating Path
An alternating path alternates between matching and non-matching edges.
[Figure: a bipartite graph with top vertices a, b, c, d, e and bottom vertices f, g, h, i, j]
d-h-e: alternating path
a-f-b-h-d-i: alternating path that starts and ends with free vertices (an augmenting path)
f-b-h-e: not an alternating path
e-j: alternating path that starts and ends with free vertices (an augmenting path)

Idea
"Flip" an augmenting path to get a better matching.
Note: After flipping, the number of matched edges increases by 1!

Idea
Theorem (Berge, 1957): A matching M in G is maximum iff there is no augmenting path with respect to M.
Proof (one direction): If there is an augmenting path, M is clearly not maximum: flip the matching and non-matching edges in that path to get a "better" matching!

Idea of Algorithm
Start with an arbitrary matching.
While we can still find an augmenting path:
• Find the augmenting path P
• Flip the edges in P

Labelling Algorithm Start with arbitrary matching

Labelling Algorithm Pick a free vertex in the bottom

Labelling Algorithm Run BFS

Labelling Algorithm Alternate unmatched/matched edges

Labelling Algorithm Until an augmenting path is found

Augmenting Tree

Flip!

Repeat Pick another free vertex in the bottom

Repeat Run BFS

Repeat Flip

Answer Since we cannot find any augmenting path, stop!

Overall algorithm
Start with an arbitrary matching (e.g., the empty matching).
Repeat:
• For each free vertex in the bottom, do a BFS to find an augmenting path
• If one is found, flip its edges
• If none can be found, stop and report the maximum matching
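
A minimal sketch of this procedure follows (not from the slides). It assumes the bipartite graph is given as adj[u], the list of top vertices adjacent to bottom vertex u; the names max_bipartite_matching, augment, match_bottom, and match_top are illustrative.

```python
from collections import deque

def max_bipartite_matching(adj, n_bottom, n_top):
    match_bottom = [-1] * n_bottom          # top partner of each bottom vertex (-1 = free)
    match_top = [-1] * n_top                # bottom partner of each top vertex (-1 = free)

    def augment(start):
        """BFS from a free bottom vertex, alternating unmatched/matched edges;
        flip the augmenting path if one is found."""
        parent = {start: None}              # BFS predecessor among bottom vertices
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                w = match_top[v]
                if w == -1:                 # v is free: augmenting path found, flip it
                    while u is not None:
                        prev_top = match_bottom[u]
                        match_top[v] = u
                        match_bottom[u] = v
                        u, v = parent[u], prev_top
                    return True
                if w not in parent:         # follow the matched edge (v, w) onward
                    parent[w] = u
                    queue.append(w)
        return False                        # no augmenting path from this free vertex

    size = sum(augment(u) for u in range(n_bottom) if match_bottom[u] == -1)
    return size, match_bottom, match_top
```

The returned lists give each bottom and top vertex's partner (or -1 if it remains unmatched), along with the size of the maximum matching.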

Time analysis
We can find at most |V| augmenting paths.
To find an augmenting path, we use BFS: time required = O(|V| + |E|).
Total time: O(|V|^2 + |V||E|)

Improvement
We can try to find augmenting paths in parallel for all free nodes in every iteration.
Using such an approach, the time complexity is improved to O(|V|^0.5 |E|).

The Stable Marriage Problem
Consider the following information:
► a collection of eligible men, each with a rank-ordered list of the women he would accept as a wife
► a collection of eligible women, each with a rank-ordered list of the men she would accept as a husband
Assume that:
► the numbers of men and women are equal
► all men rank all women, and vice versa
► rank-ordered lists are in decreasing order of preference

The Stable Marriage Problem (cont'd)
We define a marriage arrangement to be a pairing of the men and women such that:
► each man is paired with one woman, and
► every person appears in exactly one pair.
We say that a marriage arrangement is stable if there does not exist a woman X and a man Y such that:
► X prefers Y over X's husband, and
► Y prefers X over Y's wife.
Note: If such people existed, they would want to run off together, creating an unstable situation.
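
The stability condition can be checked mechanically. The sketch below is illustrative (the names find_blocking_pair, men_pref, women_pref, and wife are assumptions, not from the slides): it looks for a "blocking" pair, i.e., a man and a woman who prefer each other over their assigned spouses.

```python
def find_blocking_pair(men_pref, women_pref, wife):
    """men_pref[m] / women_pref[w]: preference lists in decreasing order;
    wife[m]: the woman paired with man m.  Returns a blocking pair or None."""
    husband = {w: m for m, w in wife.items()}
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_pref.items()}
    for m, prefs in men_pref.items():
        for w in prefs:
            if w == wife[m]:
                break                       # every woman before this one is preferred to his wife
            if rank[w][m] < rank[w][husband[w]]:
                return (m, w)               # m and w prefer each other: arrangement is unstable
    return None                             # no blocking pair: the arrangement is stable
```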

The Stable Marriage Problem (cont'd)
The problem is: "Given the preference rankings of the men and women, construct a stable marriage arrangement."

The Stable Marriage Problem
The method for solving this problem works something like this:
► The first man will propose to the first woman on his list.
► She, having no better offer at this point, will accept.
► The second man will then propose to his first choice, and so on.
► Eventually it may happen that a man proposes to a woman who already has a partner.

The Stable Marriage Problem (cont'd)
► She will compare the new offer to her current partner and will accept whoever is higher on her list.
► The man she rejects will then go back to his list and propose to his second choice, third choice, and so forth, until he comes to a woman who accepts his offer.
► If this woman already had a partner, her old partner gets rejected, and he in turn starts proposing to women further down his list.
► Eventually everything gets sorted out.

The Stable Marriage Problem (cont'd)
Algorithm:
• Each person starts with no people "canceled" from his or her list.
• People will be canceled from lists as the algorithm progresses.
• For each man m, do propose(m), as defined below.

The Stable Marriage Problem (cont'd)
propose(m):
• Let W be the first uncanceled woman on m's preference list.
• Do refuse(W, m), as defined below.
refuse(W, m):
• Let m' be W's current partner (if any).
• If W prefers m' to m, then she rejects m, in which case: cancel m off W's list and W off m's list; do propose(m). (Now m must propose to someone else.)
• Otherwise, W accepts m as her new partner, in which case: cancel m' off W's list and W off the list of m'; do propose(m'). (Now m' must propose to someone else.)
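
The propose/refuse procedure is stated recursively above; the sketch below is an iterative rendering of the same idea (the names stable_marriage, next_choice, and free_men are our own). A rejected man simply re-enters the queue of free men and proposes to the next woman on his list, which plays the role of "canceling" her from his list.

```python
from collections import deque

def stable_marriage(men_pref, women_pref):
    """men_pref[m] / women_pref[w]: complete preference lists in decreasing
    order, with equally many men and women (the assumptions stated earlier)."""
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_pref.items()}
    next_choice = {m: 0 for m in men_pref}      # index of the next woman m will propose to
    husband = {}                                # current partner of each woman
    free_men = deque(men_pref)                  # every man starts free
    while free_men:
        m = free_men.popleft()
        w = men_pref[m][next_choice[m]]         # first woman not yet "canceled" for m
        next_choice[m] += 1
        if w not in husband:                    # she has no partner: accept
            husband[w] = m
        elif rank[w][m] < rank[w][husband[w]]:  # she prefers m: dump her current partner
            free_men.append(husband[w])
            husband[w] = m
        else:                                   # she refuses: m will propose again later
            free_men.append(m)
    return {m: w for w, m in husband.items()}   # man -> wife
```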

The Stable Marriage Problem (cont'd)
How do we know the final arrangement will be stable?
Suppose there exists a man Y and a woman X such that they prefer each other over their own spouses. Since Y prefers X over his own spouse, he must have proposed to her before proposing to his own spouse. So there are two possibilities of what happened at the time of that proposal. Either:
► X rejected Y, which means that she was already engaged to someone she preferred over Y (she had scratched Y off her list). Therefore, she must be married either to that person or to someone she later preferred even more, since a woman's partner only improves as the algorithm runs. Therefore, she could not prefer Y to her own spouse, contradicting our assumption.
► X accepted Y, but dumped him later. Since she would have dumped him only in order to become engaged to someone ranked higher on her list, she must not prefer Y over her own spouse, again contradicting the assumption.