Dynamic Programming 2 – Review – More examples (MIT, James Orlin © 2003)

Match game example
Suppose that there are 20 matches on a table, and the person who picks up the last match wins. My opponent and I alternate turns, and at each turn the player to move picks up 1, 2, or 6 matches. Assuming that I go first, how can I be sure of winning the game?

Determining the strategy using DP
n = number of matches left (n is the state/stage).
g(n) = 1 if you can force a win with n matches left; g(n) = 0 otherwise. g(n) is the optimal value function.
At each state/stage you can make one of three decisions: take 1, 2, or 6 matches.
g(1) = g(2) = g(6) = 1 (boundary conditions); g(3) = 0; g(4) = g(5) = 1.
The recursion, for n ≥ 7: g(n) = 1 if g(n−1) = 0 or g(n−2) = 0 or g(n−6) = 0; g(n) = 0 otherwise. Equivalently, g(n) = 1 − min(g(n−1), g(n−2), g(n−6)).
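A minimal Python sketch of this recursion (the function name and the printed lists are mine, not from the slides):

```python
def match_game_values(n_max, moves=(1, 2, 6)):
    """g[n] = 1 if the player to move can force a win with n matches left, else 0."""
    g = [0] * (n_max + 1)      # g[0] = 0: no matches left, so the player to move has already lost
    for n in range(1, n_max + 1):
        # You win if some legal move leaves the opponent in a losing position.
        g[n] = 1 if any(n - m >= 0 and g[n - m] == 0 for m in moves) else 0
    return g

g = match_game_values(20)
print("winning:", [n for n in range(1, 21) if g[n] == 1])
print("losing: ", [n for n in range(1, 21) if g[n] == 0])
```

In particular, 20 is a winning position: taking 6 matches leaves the opponent with 14, which is a losing position.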

Winning and Losing Positions
With moves of 1, 2, or 6 matches, the losing positions for the player about to move are n = 3, 7, 10, 14, and 17 (in general, n ≡ 0 or 3 mod 7); every other n from 1 to 20 is a winning position.

Principle of Optimality
Any optimal policy has the property that, whatever the current state and decision, the remaining decisions must constitute an optimal policy with regard to the state resulting from the current decision.
In the shortest path setting: whatever node j is selected, the remaining path from j to the end must be a shortest path starting at j.
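As a small illustration of the principle (the five-node graph below is a made-up example, not from the lecture), a standard backward recursion d(j) = min over arcs (j, k) of c(j, k) + d(k) computes, for every node, the length of the optimal remaining path to the end node:

```python
# Hypothetical DAG: arcs[j] lists (successor, arc cost); node 5 is the end node.
arcs = {
    1: [(2, 4), (3, 2)],
    2: [(4, 5), (5, 11)],
    3: [(2, 1), (4, 8)],
    4: [(5, 3)],
    5: [],
}

d = {5: 0}                          # d[j] = shortest path length from j to the end
for j in (4, 2, 3, 1):              # nodes in reverse topological order
    d[j] = min(cost + d[k] for k, cost in arcs[j])

print(d[1])                         # shortest path length from node 1 to node 5 (11 here)
```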

Capital Budgeting, again (the Stockco example)
Investment budget = $14,000
[Table from the slide: candidate stocks with their costs and NPVs — not reproduced in the transcript]

Knapsack problem via DP: stages and the DP recursion
Solve the knapsack problem as a sequence of problems.
At stage k we consider whether or not to add item k to the knapsack.
Let f(k, B) be the best NPV attainable using stocks 1, 2, …, k only and a budget of at most B.

Solving the Stockco Problem
At stage 1, for each budget from $0 to $14,000, compute the best NPV using stock 1 only.
At stage 2, for each budget from $0 to $14,000, compute the best NPV using stocks 1 and 2.
At stage 3, for each budget from $0 to $14,000, compute the best NPV using stocks 1, 2, and 3.

Capital budgeting in general
Maximize $\sum_{j=1}^{n} c_j x_j$ subject to $\sum_{j=1}^{n} a_j x_j \le b$, $x$ binary.
Let $f(k, B) = \max \sum_{j=1}^{k} c_j x_j$ subject to $\sum_{j=1}^{k} a_j x_j \le B$, $x$ binary.

The recursion
f(0, B) = 0 for every budget B ≥ 0 (using no items yields NPV 0).
f(k, B) = max( f(k−1, B), f(k−1, B − a_k) + c_k ), where the second term applies only when B ≥ a_k: either item k is included, or it is not.
The optimum value of the original problem is max { f(n, B) : 0 ≤ B ≤ b }, which here is simply f(n, b).
Note: along the way we solve the capital budgeting problem for every right-hand side B ≤ b.
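A minimal Python sketch of this table-filling scheme, following the stages described above. The costs and NPVs are made-up placeholders (the Stockco data are not reproduced in the transcript), and the budget is expressed in thousands of dollars:

```python
def capital_budgeting(costs, npvs, budget):
    """0-1 capital budgeting: f[k][B] = best NPV using items 1..k with total cost at most B."""
    n = len(costs)
    f = [[0] * (budget + 1) for _ in range(n + 1)]   # f[0][B] = 0: no items, no NPV
    for k in range(1, n + 1):
        a, c = costs[k - 1], npvs[k - 1]
        for B in range(budget + 1):
            best = f[k - 1][B]                       # item k not included
            if B >= a:                               # item k included, if affordable
                best = max(best, f[k - 1][B - a] + c)
            f[k][B] = best
    return f[n][budget]

# Hypothetical data, in thousands of dollars:
print(capital_budgeting(costs=[5, 7, 4], npvs=[16, 22, 12], budget=14))
```

With these made-up numbers the best plan funds the first two stocks, for an NPV of 38.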

The Knapsack Problem
Is there a feasible solution to 31x + 35y + 37z = 422, where x, y, z are non-negative integers?
Let f(1, j) = T if there is a solution to 31x = j with x a non-negative integer, and f(1, j) = F otherwise.
Let f(2, j) = T if there is a solution to 31x + 35y = j with x and y non-negative integers, and f(2, j) = F otherwise.

The Knapsack Problem (continued)
Is there a feasible solution to 31x + 35y + 37z = 422, where x, y, z are non-negative integers?
Let f(3, j) = T if there is a solution to 31x + 35y + 37z = j with x, y, and z non-negative integers, and f(3, j) = F otherwise.
Note: there are three stages to this problem.

The recursion for the knapsack problem
Stage 1: is there a feasible solution to 31x = 422 with x a non-negative integer?
f(0, 0) = T; f(0, k) = F for k > 0.
For k = 0 to 422: f(1, k) = T if f(0, k) = T, or if k ≥ 31 and f(1, k − 31) = T; f(1, k) = F otherwise.
Does this really work?

The recursion for the knapsack problem
Stage 2: is there a feasible solution to 31x + 35y = 422 with x, y non-negative integers?
For k = 0 to 422: f(2, k) = T if f(1, k) = T, or if k ≥ 35 and f(2, k − 35) = T; f(2, k) = F otherwise.

The recursion for the knapsack problem
Stage 3: is there a feasible solution to 31x + 35y + 37z = 422 with x, y, z non-negative integers?
For k = 0 to 422: f(3, k) = T if f(2, k) = T, or if k ≥ 37 and f(3, k − 37) = T; f(3, k) = F otherwise.

The DP again, without stages
Is there a feasible solution to 31x + 35y + 37z = 422 with x, y, z non-negative integers?
Let g(j) = T if there is a solution to 31x + 35y + 37z = j with x, y, and z non-negative integers.
Then g(0) = T. For j > 0, g(j) = T if
(i) j ≥ 31 and g(j − 31) = T, or
(ii) j ≥ 35 and g(j − 35) = T, or
(iii) j ≥ 37 and g(j − 37) = T.
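A minimal Python sketch of this stage-free recursion (the function name is mine; the coefficients 31, 35, 37 and the target 422 are from the slides):

```python
def feasible(target, coeffs=(31, 35, 37)):
    """g[j] = True if 31x + 35y + 37z = j has a solution in non-negative integers."""
    g = [False] * (target + 1)
    g[0] = True                                   # x = y = z = 0 gives the value 0
    for j in range(1, target + 1):
        g[j] = any(j >= a and g[j - a] for a in coeffs)
    return g[target]

print(feasible(422))
```

For the target 422 the answer is True; for instance, 35·11 + 37·1 = 422.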

Optimal Capacity Expansion: What is the least-cost way of building plants?
[Table from the slide: Year, Cumulative Demand, Cost per plant in $ millions — data not reproduced in the transcript]
A cost of $15 million is incurred in any year in which a plant is built. At most 3 plants can be built per year.

Cost of Building Plants
[Table of building costs from the slide — not reproduced in the transcript]

Step 1. For each year, identify those table entries that are feasible, taking into account the lower bounds and the fact that at most 3 plants can be built per year. (Y, k) is a state denoting having k plants with Y years to go.

Step 2. f(Y, k) = remaining cost of capacity expansion starting with k plants with Y years to go. Fill in the entries for 0 years to go; then the entries for 1 year to go; then the entries for 2 years to go; and so on.

Determining state and stage
Stage: Y = years to go. State: k = number of plants already built.
f(Y, k) = optimum cost of capacity expansion starting with k plants with Y years to go.
We want to compute f(6, 0). What is f(0, k) for different k?

The recursion
Let g(Y, k) be the cost of building k plants with Y years to go.
Let l(Y) be the minimum number of plants needed with Y years to go.
Let M(Y) be the maximum number of plants that can be built with Y years to go.
Let f(Y, k) be the remaining cost of capacity expansion starting with k plants with Y years to go.
What is f(0, k)? Can you compute f(Y, k) assuming that f(Y − 1, j) is already computed for each j?
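A sketch of this recursion in Python. The $15 million charge, the 3-plants-per-year limit, and the 6-year horizon come from the slides, but the demand and per-plant cost tables are not in the transcript, so the numbers below are made-up placeholders; the bounds l(Y) and M(Y) are folded into a per-year demand check together with the build limit.

```python
# A sketch of the capacity-expansion recursion f(Y, k); data are hypothetical.
FIXED = 15        # $15M incurred in any year in which a plant is built
MAX_BUILD = 3     # at most 3 plants can be built per year
HORIZON = 6       # planning horizon in years; the answer is f(HORIZON, 0)
MAX_PLANTS = 8    # hypothetical total demand by the end of the horizon

# Indexed by Y = years to go (Y = HORIZON is the first year, Y = 1 is the last).
demand = {6: 1, 5: 2, 4: 4, 3: 5, 2: 7, 1: 8}        # plants required by the end of that year
cost   = {6: 20, 5: 19, 4: 18, 3: 18, 2: 17, 1: 16}  # cost per plant ($M) in that year

INF = float("inf")
f = {(0, k): 0 for k in range(MAX_PLANTS + 1)}       # f(0, k) = 0: nothing left to pay
for Y in range(1, HORIZON + 1):                      # fill in 1 year to go, then 2, ...
    for k in range(MAX_PLANTS + 1):
        best = INF
        for x in range(min(MAX_BUILD, MAX_PLANTS - k) + 1):   # build x plants this year
            if k + x < demand[Y]:
                continue                             # must meet this year's cumulative demand
            build_cost = x * cost[Y] + (FIXED if x > 0 else 0)
            best = min(best, build_cost + f[(Y - 1, k + x)])
        f[(Y, k)] = best

print(f[(HORIZON, 0)])   # least-cost expansion plan starting with no plants
```

The boundary condition in the code, f(0, k) = 0, answers the slide's question: with no years to go there is nothing left to pay.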

Summary for Dynamic Programming
– Recursion
– Principle of optimality
– States, stages, and decisions
– Useful in a wide range of situations: games, shortest paths, capacity expansion, and more to be presented in the next lecture.

Dynamic Programming Review
Next lecture: DP under uncertainty. Applications:
– meeting a financial goal (a very simplified version)
– getting a plane through enemy territory