
Published by Marisol Spratt. Modified over 3 years ago.

Slide 0: “Applications” of knapsack problems (10 September 2002)

Slide 1: “Applications” of knapsack problems
1. What the hell is a knapsack?
2. Knapsack cover inequalities; lifting; surrogate duality and other boring definitions
3. Knapsacks and infeasibility
4. Knapsacks and (bounded) general integer variables

Slide 2: A knapsack problem is simply an integer programme with a single constraint

Max cx subject to: ax ≤ b, x ∈ Bⁿ

Suppose you have a knapsack of size b and a bunch of items with sizes aᵢ and values cᵢ, and you wish to put the most value into the knapsack… and can somehow mash the items so that they fit whenever the sum of their volumes fits…

Slide 3: … they come in several flavours

Pure binary knapsack: Max cx subject to ax ≤ b, x ∈ Bⁿ
Bounded knapsack: Max cx subject to ax ≤ b, 0 ≤ x ≤ u, x ∈ Zⁿ
Unbounded knapsack: Max cx subject to ax ≤ b, x ≥ 0, x ∈ Zⁿ
Bounded mixed-integer knapsack: Max cx + dy subject to ax + zy ≤ b, x ≥ 0, x ∈ Zⁿ, y ∈ Rᵐ

Bounded knapsacks are our focus here, though most results are, or appear to be, readily generalisable to bounded mixed-integer knapsacks.

Slide 4: … and can be used to solve all integer programming problems (if you know the right multiplier)…

Surrogate duality:

Max_x cx subject to Ax ≤ b, x ∈ Bⁿ  ≡  Min_u ( Max_x cx subject to uAx ≤ ub, x ∈ Bⁿ ), u ≥ 0

Any integer programme with many constraints can be solved by finding the right knapsack problem; uAx ≤ ub is called a surrogate knapsack. The trouble is that finding the u that minimises this problem is difficult, and (a common drawback with dual approaches) the solution to the inner problem is not feasible at any stage until we find an optimal u.
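The relaxation direction can be seen by brute force on a tiny, made-up two-constraint binary IP (all data here is hypothetical): for any fixed multiplier u ≥ 0, aggregating the constraints into the surrogate knapsack can only enlarge the feasible set, so the surrogate optimum is an upper bound on the IP optimum; minimising over u tightens that bound.

```python
from itertools import product

def brute_max(c, feasible):
    """Maximise c.x over all binary x satisfying the given predicate."""
    return max(sum(ci * xi for ci, xi in zip(c, x))
               for x in product((0, 1), repeat=len(c)) if feasible(x))

# Hypothetical two-constraint binary IP: max cx subject to Ax <= b.
c = [5, 4, 3]
A = [[2, 3, 1], [4, 1, 2]]
b = [4, 5]

ip_opt = brute_max(c, lambda x: all(
    sum(a * xi for a, xi in zip(row, x)) <= bi for row, bi in zip(A, b)))

# Surrogate knapsack for one (hypothetical) multiplier u >= 0: (uA)x <= ub.
u = [1.0, 0.5]
ua = [sum(ui * row[j] for ui, row in zip(u, A)) for j in range(len(c))]
ub = sum(ui * bi for ui, bi in zip(u, b))
surr_opt = brute_max(c, lambda x: sum(a * xi for a, xi in zip(ua, x)) <= ub)

# Aggregation can only weaken the constraints, so surr_opt >= ip_opt.
assert surr_opt >= ip_opt
print(ip_opt, surr_opt)  # 7 8
```

With this particular u the bound is not tight (8 versus 7); the duality result says some u closes the gap entirely.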

Slide 5: … in pseudo-polynomial time!

Solving a knapsack problem in pseudo-polynomial time: dynamic programming.

Let KP(n, b) = Max_x c₁x₁ + c₂x₂ + … + cₙxₙ subject to a₁x₁ + a₂x₂ + … + aₙxₙ ≤ b. Now:

KP(0, b′) = 0 (base case 1)
KP(k, l) = −∞ for l < 0 (base case 2)
KP(n, b) = max( KP(n−1, b), cₙ + KP(n−1, b−aₙ) ) (recursive case)

So, assuming the knapsack is scaled such that the aᵢ are integral, the worst-case solution time is proportional to nb.
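The recursion above can be sketched in a few lines of Python (a minimal 0/1 version; iterating capacities downward means base case 2 never has to be stored explicitly, and the item data is made up for illustration):

```python
def knapsack_dp(values, weights, b):
    """0/1 knapsack by dynamic programming: O(n*b) time, integral weights.

    best[l] holds KP(k, l), the best value using the first k items
    with capacity l, as k sweeps over the items.
    """
    # KP(0, l) = 0 for every capacity l (base case 1).
    best = [0] * (b + 1)
    for value, weight in zip(values, weights):
        # Go downwards so each item is used at most once; capacities
        # below `weight` correspond to KP(k, l) = -inf (base case 2).
        for l in range(b, weight - 1, -1):
            best[l] = max(best[l], value + best[l - weight])
    return best[b]

# Hypothetical items (value, weight) and capacity 50.
print(knapsack_dp([60, 100, 120], [10, 20, 30], 50))  # 220
```

The nb table entries make the nb running time visible directly: one max per entry.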

Slide 6: … or can at least help solve them… (when you can’t find the right multiplier)

Surrogate duality, insane method:
1. Find the right multiplier u.
2. Generate all the facets (πx ≤ π₀) of: uAx ≤ ub, x ∈ Bⁿ.
3. Solve the linear programme: max cx subject to all the facets πx ≤ π₀.

Finding all the facets (almost never pseudo-polynomial!) seems insane given that we can just solve uAx ≤ ub by dynamic programming… HOWEVER… it’s hard to find the optimal multiplier u*, and because it’s a dual method, if we don’t find u* we don’t even have a feasible solution to the primal problem. Often it’s easy to find a “probably fairly close to optimal” multiplier u′ (we’ll see some approaches shortly). If u′ is close to u*, then we expect:
– some of the facets of u′ and u* may coincide;
– even getting some of the facets may cut off fractional values in LP-based branch-and-bound;
– facets of u′ may coincide with facets of u* in many dimensions.

Slide 7: What’s a valid inequality/facet/dimension?

Slide 8: … inside a branch-and-bound-and-cut-and-preprocess-and-probe-and-run-a-primal-heuristic-and-variable-fix-and-whatever algorithm

Surrogate cutting planes, inside a branch-and-bound code:
1. Find a half-decent multiplier u′.
2. Generate a facet (πx ≤ π₀) of: u′Ax ≤ u′b, x ∈ Bⁿ.
3. Solve the linear programme: max cx subject to Ax ≤ b, πx ≤ π₀, x ∈ Bⁿ.
4. Repeat so long as the πx ≤ π₀ cuts are doing a reasonable job of cutting off fractional LP solutions.

The remainder of our discussion will focus on:
– how to “find a half-decent multiplier”;
– how to “generate a facet”.

Slide 9: The facets (or high-dimension inequalities) we are interested in are called (lifted) knapsack cover inequalities.

For binary spaces:
– A cover C is a set of items that don’t fit into the knapsack.
– A minimal cover C is a cover that “only just” doesn’t fit, i.e. every strict subset of those items will fit, but the set C won’t.
– A knapsack cover inequality (KCI) simply states that therefore one of the items must not be in the knapsack: Σ_{i∈C} xᵢ ≤ |C| − 1.
– If C is minimal then the KCI is facet-defining for the cover space.
– Lifting can then be used to come up with coefficients for the other variables so that we get a facet of the full space.
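For concreteness, here is a small sketch (with made-up weights) that shrinks the full item set down to a minimal cover and writes out the corresponding KCI Σ_{i∈C} xᵢ ≤ |C| − 1:

```python
def minimal_cover(weights, b):
    """Shrink the full item set of a binary knapsack ax <= b to a minimal
    cover: a set whose items don't all fit, but every strict subset does."""
    if sum(weights) <= b:
        return None  # everything fits, so no cover exists
    cover = set(range(len(weights)))
    # Drop items (lightest first) while what remains is still a cover.
    for i in sorted(cover, key=lambda j: weights[j]):
        if sum(weights[j] for j in cover) - weights[i] > b:
            cover.remove(i)  # still overfills the knapsack without item i
    return cover

def cover_inequality(cover):
    """The KCI: at most |C| - 1 of the cover's items can be packed."""
    return sorted(cover), len(cover) - 1

# Hypothetical data: items of size 6 and 7 alone overfill capacity 10.
C = minimal_cover([4, 5, 6, 7], 10)
lhs_vars, rhs = cover_inequality(C)
print(lhs_vars, rhs)  # [2, 3] 1, i.e. the cut x3 + x4 <= 1 in 1-based names
```

Dropping lightest-first is only one heuristic order; any order yields some minimal cover, and which one you want depends on the LP point you are trying to cut off (next section).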

Slide 10: Generation of knapsack cover inequalities is frequently applied to solve sparse binary problems…

An integer programme where each variable occurs in only one constraint is separable into the individual knapsack constraints. [Block-diagonal matrix A shown on slide.] Just solve each of the separate knapsack problems to solve the overall integer programme.

A sparse constraint matrix is typically “close to separable”. [Nearly block-diagonal matrix A shown on slide.] Essentially this suggests that u′ = (1,0,0,0,0,…), u′ = (0,1,0,0,0,…), etc. are pretty reasonable choices for u′. Here we can’t just solve the separate knapsack problems, but a facet for any of the knapsacks is valid for the whole problem, and because there is little overlap, it is likely to be either facet-defining, or of high dimension, for the whole problem.
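The fully separable case can be sketched directly (hypothetical block-diagonal data; brute-force row knapsacks for brevity): when every variable appears in exactly one row, each row is an independent binary knapsack and the IP optimum is just the sum of the row optima.

```python
from itertools import product

def solve_separable(c, A, b):
    """Solve a binary IP max cx, Ax <= b whose matrix is block-diagonal
    with one row per block, by solving each row's knapsack separately."""
    n = len(c)
    # Separability: every variable must appear in exactly one constraint.
    for j in range(n):
        assert sum(1 for row in A if row[j] != 0) == 1, "not separable"
    total = 0
    for row, bi in zip(A, b):
        idx = [j for j in range(n) if row[j] != 0]
        best = 0
        # Brute-force this row's knapsack over its own variables only.
        for x in product((0, 1), repeat=len(idx)):
            if sum(row[j] * xj for j, xj in zip(idx, x)) <= bi:
                best = max(best, sum(c[j] * xj for j, xj in zip(idx, x)))
        total += best
    return total

# Two independent knapsacks: {x1, x2} with capacity 5, {x3, x4} with 6.
A = [[3, 4, 0, 0],
     [0, 0, 2, 5]]
print(solve_separable([5, 6, 4, 7], A, [5, 6]))  # 6 + 7 = 13
```

In the "close to separable" case this decomposition fails, which is exactly where the slide's unit multipliers u′ = eᵢ and per-row cover facets come in.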

Slide 11: … and has been very successful at reducing the computational effort required.

Slide 12: Knapsacks and infeasibility

Most cutting-plane algorithms generate cuts aiming to cut off the current LP solution at a feasible node of the branch-and-bound tree… but that’s boring: we can also generate useful cutting planes at infeasible nodes, or at nodes being pruned, and get a reasonable performance boost.

Slide 13: Most cutting-plane algorithms generate cuts aiming to cut off the current LP solution at a feasible node of the branch-and-bound tree…

Cut separation, e.g. for binary knapsack cover inequalities: given the current LP solution x_lp, find the minimal cover C which maximises Σ_{i∈C} x_lp,i − (|C| − 1), i.e. maximise the amount by which the KCI is violated by the current LP solution. Then lift it.
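A brute-force sketch of this separation problem (hypothetical data; the equivalence used in the comment is that the KCI is violated by x_lp exactly when Σ_{i∈C}(1 − x_lp,i) < 1, so we minimise that slack over covers):

```python
from itertools import combinations

def most_violated_cover(weights, b, x_lp):
    """Brute-force KCI separation for a binary knapsack ax <= b.

    The cover inequality sum_{i in C} x_i <= |C| - 1 is violated by x_lp
    exactly when sum_{i in C} (1 - x_lp[i]) < 1, so minimise that slack
    over all covers.  Exponential, for illustration only; real codes use
    a greedy or dynamic-programming separation routine instead.
    """
    n = len(weights)
    best, best_slack = None, 1.0  # only return strictly violated covers
    for r in range(1, n + 1):
        for C in combinations(range(n), r):
            if sum(weights[i] for i in C) > b:          # C is a cover
                slack = sum(1 - x_lp[i] for i in C)
                if slack < best_slack:
                    best, best_slack = C, slack
    return best

# Hypothetical fractional LP point: the most violated cover is {2, 4}
# in 1-based names, giving the cut x2 + x4 <= 1.
print(most_violated_cover([4, 5, 6, 7], 10, [0.0, 0.9, 0.8, 0.9]))  # (1, 3)
```

Note the returned cover is automatically minimal-ish in the useful sense: adding an item with x_lp,i < 1 only increases the slack, so the optimum never carries dead weight.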

Slide 14: … but that’s boring: we can also generate useful cutting planes at infeasible nodes…

Recall: an infeasible primal means an unbounded dual, so a divergent ray (r, z) exists for the current LP:

Min cx subject to Ax = b; xᵢ = 1, i ∈ S₁

(these constraints were added by branching; w.l.o.g. we assume all the branches were 1-branches). [r is the multiplier for the original constraints, z for the branching constraints.]

With some trickery based on the properties of a divergent ray:
(i) rb − z·1 < 0 (improves objective)
(ii) rA − zI_B ≥ 0 (definition of a ray)

we find that B_z = {i | zᵢ ≠ 0} is a cover for the surrogate knapsack rAx ≤ rb:

rAx ≥ zI_B x   for any x ≥ 0 (by ii)
    ≥ z·1     for any x with xᵢ = 1, i ∈ B_z
    > rb      (by i)

Slide 15: … or at nodes being pruned.

Let z_BEST be the best integer solution we’ve found so far. Temporarily add the constraint cx ≤ z_BEST to the formulation at nodes being pruned… now they are infeasible and we can apply the previous method. [Actually, implementation-wise it’s quite different, as we’d rather not keep pivoting until the LP solver proves infeasibility with the new constraint.]

So, at an infeasible node there’s no LP solution we’re trying to cut off: how can this approach help?!

Example: after adding four branching constraints x₁ = 1, x₂ = 1, x₃ = 1, x₄ = 1, an infeasible node is found. The divergent dual ray (r, z) has z = (0, 0, 23, 12), so B_z = {3, 4}, and the constraint x₃ + x₄ ≤ 1 is added to the formulation. This does nothing useful at the node that causes it to be added… but it does stop us exploring nodes with branching constraints:
x₁ = 0, x₂ = 1, x₃ = 1, x₄ = 1
x₁ = 1, x₂ = 0, x₃ = 1, x₄ = 1
x₁ = 0, x₂ = 0, x₃ = 1, x₄ = 1
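The example's cut can be read straight off the ray's support; a tiny sketch using the slide's data (variable names are illustrative):

```python
def cover_cut_from_ray(branch_vars, z):
    """Given the 1-branched variables and the dual-ray components z on
    their branching constraints, the slide's argument shows that
    B_z = {i : z_i != 0} is a cover for the surrogate knapsack
    rAx <= rb, so at most |B_z| - 1 of those variables can be 1 at once."""
    Bz = [v for v, zi in zip(branch_vars, z) if zi != 0]
    return Bz, len(Bz) - 1

# Slide example: branches x1 = x2 = x3 = x4 = 1, ray z = (0, 0, 23, 12).
lhs, rhs = cover_cut_from_ray(["x1", "x2", "x3", "x4"], [0, 0, 23, 12])
print(" + ".join(lhs), "<=", rhs)  # x3 + x4 <= 1
```

The point is that the cut costs almost nothing to extract: the LP solver already hands back the ray when it proves infeasibility.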

Slide 16: … and get a moderate performance boost.

Slide 17: Knapsacks and (bounded) general integer variables

Whilst you can always convert general integer variables to binary… many of us are not that patient. Ceria et al. made a start generalising KCIs to general integers. Extending this approach with some much less natural generalisations:
– Ceria constraints aren’t necessarily facet-defining for the cover space…
– … but lifted SMD subcovers are.
– On general integers, “minimal” is sometimes too small…
– … so we redefine minimal to be less small.
… and get significant computational improvements.

Slide 18: Whilst you can always convert general integer variables to binary… many of us are not that patient.

Slide 19: Ceria et al. made a start generalising KCIs to general integers…

BINARY: A cover C is a set of items that don’t fit into the knapsack. A minimal cover C is a cover that “only just” doesn’t fit, i.e. every strict subset of those items will fit, but the set C won’t.

BOUNDED INTEGER (Ceria et al.): A knapsack cover inequality (KCI) now states that some of one of the items must not be in the knapsack. If C is minimal then the KCI is facet-defining for the space P = blah, and lifting can then be used to come up with coefficients for the other variables so that we get a facet of the full space P = blah.

Slide 20: … but they aren’t facet-defining for the cover space…

Example: consider 3x₁ + 5x₂ + 7x₃ ≤ 50, 0 ≤ xᵢ ≤ 4. C = {1, 2, 3} is the only cover (12 + 20 + 28 = 60 > 50; λ = 60 − 50 = 10). The corresponding Ceria inequality is x₁ + x₂ + x₃ ≤ 10. (4, 4, 2), (4, 2, 4) and (2, 4, 4) are 3 affinely independent points that satisfy this at equality; however only (4, 4, 2) and (4, 2, 4) are feasible ((2, 4, 4) gives 6 + 20 + 28 = 54 > 50), so this constraint is not facet-defining.

Slide 21: … but we can fix that with strong minimal dependence.

A cover C is strongly minimally dependent (SMD) if (additionally) for some k: [condition shown on slide]. The Ceria inequality is facet-defining iff the cover C happens to be SMD. Should C not be SMD, project out variables by fixing them to their upper bounds until it is:

3x₁ + 5x₂ + 7x₃ ≤ 50, 0 ≤ xᵢ ≤ 4: {1, 2, 3} is a cover, but not SMD. Project out x₁: 5x₂ + 7x₃ ≤ 50 − 12 = 38, 0 ≤ xᵢ ≤ 4 [λ = 10]. {2, 3} is SMD (k = 2) for the reduced space. x₂ + x₃ ≤ 6 is a facet for the reduced space [both (2, 4) and (4, 2) lie on it and are feasible]. Applying lifting to get a constraint for the original space: 0.5x₁ + x₂ + x₃ ≤ 8 [which is facet-defining: (0, 4, 4), (4, 2, 4) and (4, 4, 2) lie on it].

The SMD condition tells us when we need to project and then lift in order to get a facet.

Slide 22: … and sometimes minimal covers are too small…

Problem: any minimal cover with [condition shown on slide] cannot ever lead to a KCI that cuts off x_lp, as [the corresponding bound] is a constraint in the current LP, and clearly [the KCI is then satisfied]. Worse still, for some knapsack/LP-solution combinations, all minimal covers have this problem.

Example: u = (6, 4), x_lp = (2, 2.75), knapsack 6x₁ + 4x₂ ≤ 23. {1} is the only minimal cover. The corresponding KCI, x₁ ≤ 3, does not cut off the current LP solution.

Slide 23: … we can (often, but not always) fix that with ceil(x) minimality.

Solution: define a more complicated generalisation of minimality. A cover is ceil-x-minimal if [condition shown on slide]. (Make the cover too big, however, and no SMD subcover exists.)
– In binary spaces all ceil-x-minimal covers are also minimal covers, so this issue does not arise.
– Occasionally no ceil-x-minimal cover exists, and therefore no violated KCI.

Slide 24: … and get significant computational improvements.
