CS420: The Knapsack Problem
Wim Bohm, CS, CSU

Search and Discrete Optimization

A discrete optimization problem is a pair (S, f):
– S: the space of feasible solutions (satisfying some constraints)
– f: S -> R, a function mapping each feasible solution to a real number
– Objective: find an optimal solution x_opt such that f(x_opt) <= f(x) for all x in S (minimization), or f(x_opt) >= f(x) for all x in S (maximization)
– Search application domains: planning and scheduling, VLSI layout, pattern recognition

Integer Linear Programming (ILP)

Given a vector of integer variables x and a set of linear constraints on these variables, expressed as a matrix-vector inequality Ax <= b, optimize (minimize or maximize) a linear cost function in these variables, expressed as a vector inner product cx.

Example: Knapsack problem

Knapsack is a 0/1 ILP problem. Given
– n objects, each with weight w_i and profit p_i
– a knapsack with capacity M
determine a subset of the objects such that
– the total weight is <= M
– the total profit is maximal

Knapsack is an ILP

In the case of the knapsack problem the variables x_i can only take the values 0 and 1. There is only one constraint, Wx <= M, and the goal is to maximize Px, where W and P are two size-n vectors of positive numbers.

Example

W = …  P = …  M = 10    (the weight and profit vectors were part of the slide graphics)
Choice vector: take objects 1, 2, and 4.  Profit: 19

State space for knapsack

The state space is a tree.
– Information stored per state:
  1. capacity left
  2. profit gathered
  3. objects taken
– Root (level 0): no object has been considered yet.
– At level i, consider object i. There are two possible states: take the object, or don't take the object.
– The size of the state space is exponential.

State space

A state stores capacity left and profit gathered, and possibly the objects taken so far. At level 0 (the root of the state space) no object has been considered. To go from level i to level i+1, consider object i. There are two possible next states:
– take object i, or
– do not take object i.
The size of the state space is exponential in n.

Divide and Conquer (DivCo) solution

Exhaustive recursive search technique: hopelessly inefficient, but simple.

knapdc(M, i)   // M: capacity left, i: level; returns optimal profit from level i down
  if (M < W[i])
    if (i < n) return knapdc(M, i+1)        // can't take object i
    else return 0
  else if (i < n)
    take  = knapdc(M - W[i], i+1) + P[i]
    leave = knapdc(M, i+1)
    return max(take, leave)
  else return P[i]                           // last object, and it fits
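The pseudocode above can be turned into a short runnable sketch (Python here; 0-based indexing and an explicit empty base case replace the slide's i < n tests):

```python
def knapdc(W, P, M, i=0):
    """Exhaustive divide-and-conquer search: best profit achievable
    with objects i..n-1 and capacity M left."""
    if i == len(W):                  # no objects left to consider
        return 0
    leave = knapdc(W, P, M, i + 1)   # don't take object i
    if W[i] <= M:                    # object i fits: also try taking it
        take = knapdc(W, P, M - W[i], i + 1) + P[i]
        return max(take, leave)
    return leave
```

On the counterexample instance used later in these slides (M = 7, W = [5, 4, 3], P = [10, 7, 5]) this returns the optimum 12.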

Dynamic Programming Solution

Go through the state space bottom-up:
– first select among 1 object, then 2 objects, …, then all objects (very much like make-change)
– use solutions of the smaller sub-problem to efficiently compute solutions of the larger one

With objects 1..i considered and capacity x:
– S0: profit if object i is not taken: V_{i-1}[x]
– S1: profit if object i is taken: V_{i-1}[x - W_i] + P_i
– V_i[x] = max(S0, S1), for 0 <= x <= M

Dynamic programming main loop

PrevV = vector[0..M], all elements 0
V = vector[0..M]
for i = 1 to n {        // number of objects
  for j = 0 to M {      // capacity
    if (j < W[i])
      V[j] = PrevV[j]
    else {
      S0 = PrevV[j]
      S1 = PrevV[j - W[i]] + P[i]
      V[j] = max(S0, S1)
    }
  }
  PrevV = V
}
return V[M]
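A direct Python transcription of this main loop, keeping only the previous vector PrevV and the current vector V (a sketch; knap_dp is a hypothetical name):

```python
def knap_dp(W, P, M):
    """Bottom-up DP over capacities 0..M, keeping only the previous
    column (prev_v) and the current one (v), as in the slide."""
    prev_v = [0] * (M + 1)
    for i in range(len(W)):
        v = [0] * (M + 1)
        for j in range(M + 1):
            if j < W[i]:
                v[j] = prev_v[j]              # object i doesn't fit
            else:
                s0 = prev_v[j]                # don't take object i
                s1 = prev_v[j - W[i]] + P[i]  # take object i
                v[j] = max(s0, s1)
        prev_v = v
    return prev_v[M]
```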

Solution / choice vector

Notice that this solution does not return the choice vector, only the optimal profit value. In order to recreate the choice vector, the whole table T of vectors needs to be kept. After the table is filled, the choice vector can be retrieved backwards:
– starting at T[n, M], the choice (take or don't take) for object n can be determined, and also which element of the previous vector needs to be examined next
– very much like LCS
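A sketch of the full-table variant with the backwards retrieval (T gets n+1 rows so that row 0 is the "no objects considered" base case; knap_choice is a hypothetical name):

```python
def knap_choice(W, P, M):
    """Keep the whole table T and walk it backwards from T[n][M]
    to recover the choice vector, LCS-style."""
    n = len(W)
    T = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(M + 1):
            T[i][j] = T[i - 1][j]                 # don't take object i
            if j >= W[i - 1]:                     # taking it may be better
                T[i][j] = max(T[i][j], T[i - 1][j - W[i - 1]] + P[i - 1])
    # backwards retrieval of the choice vector
    choice = [0] * n
    j = M
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:   # value changed: object i was taken
            choice[i - 1] = 1
            j -= W[i - 1]
    return T[n][M], choice
```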

Example (table transposed)

W = …  P = …  M = 10    (DP table figure omitted; rows are capacities 0..10, columns objects 1..4)

Retrieving the choice vector backwards from T[4, 10]:
take 4, don't take 3, take 2, take 1.  Profit: 19

DP and bands

Band: an area in a column vector of the DP array where the value does not change.
– How many bands are there in column 1?
– How many in column 2? At most how many in column 2?
– How would you represent a band?
Observation: you don't need to store the value for all capacities, just for the bands.
– Faster and less storage
– Diminishing returns, but still worth it

Example: critical points

W = …  P = …  M = 10    (figure omitted)
Each band is represented by its critical point, a (capacity, profit) pair:
– one column: (4,8) (6,9) (9,15) (10,17)
– next column: (1,4) (4,8) (5,12) (7,13) (9,15) (10,17)

Critical points

Critical points from column i-1 to column i: V_i(j) = max(V_{i-1}(j), V_{i-1}(j - w_i) + P_i)
– V_{i-1}(j): the previous point set
– V_{i-1}(j - w_i) + P_i: the previous point set shifted up by P_i and shifted right by w_i
– max: take the top points; a point lying below the resulting staircase is dominated and can be dropped
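One step of this shift-and-merge can be sketched as follows. A column is represented by its critical points, an increasing list of (capacity, profit) pairs; merge_critical is a hypothetical helper name:

```python
def merge_critical(points, w, p, M):
    """One DP step on the critical-point representation: merge the
    previous point set with a copy shifted right by w and up by p,
    then drop dominated points (no higher profit at a later or equal
    capacity)."""
    shifted = [(c + w, v + p) for (c, v) in points if c + w <= M]
    merged = sorted(points + shifted)
    result = []
    best = -1
    for c, v in merged:
        if v > best:                 # keep only points that raise the profit
            if result and result[-1][0] == c:
                result.pop()         # same capacity, higher profit supersedes
            result.append((c, v))
            best = v
    return result
```

Starting from the single point (0, 0) and applying this once per object reproduces the band representation of the final DP column.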

Further space optimization

Keeping the whole table is space consuming. Optimization: only keep the middle column and determine the point on the backwards path in that middle column. Then recursively recompute two sub-problems that together have half the size of the original. The time complexity is at most twice the time complexity of the original algorithm, but the space complexity is now O(M) instead of O(M*n). See K. Nibbelink, S. Rajopadhye, R. M. McConnell, "Knapsack on hardware: a complete solution", 18th IEEE International ASAP Conference, Montreal, July 8-11, 2007.

Further time optimization

The dynamic programming solution creates more elements of the table than it uses. E.g., it uses only 1 element of the last column, and only 2 of the next-to-last. We can partially build the dynamic programming table ``on demand'' in a top-down fashion: initialize the table with, e.g., -1, and only evaluate a table element if we need it and it has not been computed yet. This technique is called memoization.
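A Python sketch of the memoized top-down version; functools.lru_cache plays the role of the -1-initialized table (knap_memo is a hypothetical name):

```python
from functools import lru_cache

def knap_memo(W, P, M):
    """Top-down DP: a table entry is evaluated only when it is needed,
    and cached so each (capacity, level) pair is computed at most once."""
    @lru_cache(maxsize=None)
    def go(m, i):
        if i == len(W):
            return 0
        best = go(m, i + 1)                        # don't take object i
        if W[i] <= m:                              # take object i if it fits
            best = max(best, go(m - W[i], i + 1) + P[i])
        return best
    return go(M, 0)
```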

Bounds for knapsack

Strengthening / simplifying assumption (like SSS): objects are sorted according to profit/weight ratio (it's smarter to take the diamond than the silverwork).
In a certain state, with a certain capacity left:
– lwb = the sum of the profits of the objects with the highest profit/weight ratios, such that the sum of the corresponding weights does not exceed the capacity. lwb is achievable.
– upb = lwb + the fraction of the profit of the next object such that the capacity left becomes zero. upb is not achievable if an actual fraction is taken, but it can be used to prune the search space.

Lwb / Upb

lwb is the GREEDY solution.
– Computing the greedy solution is fast: O(n)
Question: is the greedy solution optimal?
– No, a counterexample shows it: capacity M = 7, W = [5, 4, 3], P = [10, 7, 5]. Greedy takes object 1 (profit 10), but objects 2 and 3 together give profit 12.
– But LWB and UPB together can be used to prune the search space.
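A sketch of the two bounds (bounds is a hypothetical helper; objects are assumed pre-sorted by decreasing profit/weight ratio, as the slides require):

```python
def bounds(W, P, M):
    """Greedy lower bound (lwb) and fractional upper bound (upb)."""
    lwb = 0
    cap = M
    upb = None
    for w, p in zip(W, P):
        if w <= cap:                 # greedy: take the object whole
            lwb += p
            cap -= w
        elif upb is None:
            # first object that doesn't fit: fill the remaining
            # capacity with a fraction of it for the upper bound
            upb = lwb + p * cap / w
    if upb is None:                  # everything fit: the bound is exact
        upb = lwb
    return lwb, upb
```

On the counterexample instance, bounds([5, 4, 3], [10, 7, 5], 7) gives lwb = 10 and upb = 10 + (2/4)*7 = 13.5, matching the root node of the B&B tree below.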

Branch and Bound: B&B

Branching rule
– How to move to new states from a current state
Bounds
– How to compute an upper bound U and a lower bound L in a state S
Selection rule
– Which state to branch to next: DEPTH first, BREADTH first, BEST first
– Parallel approach: take a number of branches at the same time
Elimination rule
– Infeasible: cannot lead to a solution
– Bounded: eliminate S if the upper bound of S < the current best lower bound (the best solution so far, for a 0/1 ILP problem)
– Dominated: eliminate S if it is DOMINATED by another state Q

Depth First Branch & Bound (DFBB)

Searching the state space:
– Traverse the state space depth first (left to right)
– Maintain the best solution BS = the best LWB found so far
– Before going into a new sub-tree, compute the upper bound Upb of that sub-tree. If Upb < BS, prune that sub-tree (do not evaluate it).
Why does it work?
– DFBB is much faster than Div&Co

B&B for knapsack

Branch
– Two new states: take the next object, or don't take it
Bound
– Obtained by the greedy solution
Select
– Breadth first allows other options to be discovered soon
Eliminate
– ...

B&B eliminate

Eliminate:
– Infeasible: weight > capacity
– Bounded: Upb < BS (the current global lower bound)
– BFS allows for elimination of dominated nodes: for two states S and Q on the same level in the search tree, Q dominates S if it has the same or more capacity left and more profit gathered

Knapsack Example

Capacity M = 7, number of objects n = 3
W = [5, 4, 3], P = [10, 7, 5] (ordered by P/W ratio)
CL: capacity left, PG: profit gathered, L, U: bounds of the state

root:                 CL:7  PG:0   L:10  U:13.5
  take 1:             CL:2  PG:10  L:10  U:13.5
    take 2:           infeasible
    not 2:            CL:2  PG:10  L:10  U:13.3
      take 3:         infeasible
      not 3:          CL:2  PG:10  L:10  U:10    leaf
  not 1:              CL:7  PG:0   L:12  U:12
    take 2:           CL:3  PG:7   L:12  U:12
      take 3:         CL:0  PG:12  L:12  U:12    Optimum
      not 3:          CL:3  PG:7   L:7   U:7     bounded
    not 2:            CL:7  PG:0   L:5   U:5     bounded
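A runnable DFBB sketch for this instance (dfbb is a hypothetical name; the upper bound is the greedy prefix of the remaining objects plus a fractional fill, as in the slides, and objects are assumed pre-sorted by P/W ratio):

```python
def dfbb(W, P, M):
    """Depth-first branch & bound: keep the best solution found so far
    and prune a sub-tree when its upper bound cannot beat it."""
    n = len(W)
    best = 0

    def upb(cap, i):
        # greedy prefix of the remaining objects plus a fractional fill
        total = 0
        for k in range(i, n):
            if W[k] <= cap:
                total += P[k]
                cap -= W[k]
            else:
                return total + P[k] * cap / W[k]
        return total

    def search(cap, i, profit):
        nonlocal best
        if i == n:
            best = max(best, profit)
            return
        if profit + upb(cap, i) <= best:    # bounded: prune this sub-tree
            return
        if W[i] <= cap:                     # take object i (if feasible)
            search(cap - W[i], i + 1, profit + P[i])
        search(cap, i + 1, profit)          # don't take object i

    search(M, 0, 0)
    return best
```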

Knapsack B&B optimizations

– When taking an object you don't need to recompute upb and lwb (same as SSS).
– If we don't take an object, we can subtract P_i from upb, add W_i to capLeft, and restart the upb computation where we left off last time.
– If capLeft < the weight of each of the remaining objects, stop computing (similar to SSS).

Smart Domination

Domination
– If (capLeft1 > capLeft2 and profit1 > profit2), kill task 2
– A naive implementation compares all pairs: O(#tasks^2)
1: if no tasks at level i dominate each other, what about all the takes? What about all the don't-takes?
2: if the tasks are sorted by decreasing capLeft, with the last task in the queue (Cl, Pl) and the next task considered (Cn, Pn): we know Cn <= Cl, so if Pn < Pl, kill (Cn, Pn).
Keeping the queue sorted: merge while doing the dominance test; the whole check is now O(#tasks).
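The sorted-queue test can be sketched as a single linear scan (prune_dominated is a hypothetical helper; input sorted by decreasing capLeft, and a running maximum of profit stands in for the "last kept task"):

```python
def prune_dominated(tasks):
    """tasks: list of (capLeft, profit) pairs sorted by decreasing
    capLeft.  A later task never has more capacity left, so it is
    dominated (and killed) as soon as its profit is strictly below
    that of a kept task -- an O(#tasks) scan."""
    kept = []
    best_profit = -1
    for cap, profit in tasks:
        if profit >= best_profit:    # not dominated: keep it
            kept.append((cap, profit))
            best_profit = profit
    return kept
```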