Modelling with Finite Domains

- Domains and Labelling
- Complex Constraints
- Labelling
- Different Problem Modellings
- Efficiency
- Using SICStus Prolog



Smuggler’s Knapsack Problem

A smuggler has a knapsack of capacity 9 and must choose items to smuggle so as to make a profit of at least 30. (Each unit of whiskey takes 4 units of space and yields profit 15; perfume takes 3 and yields 10; cigarettes take 2 and yield 7.)

Simple Example

Goal for the smuggler’s knapsack problem:

[W,P,C] :: [0..9],
4*W + 3*P + 2*C ≤ 9,
15*W + 10*P + 7*C ≥ 30.

The CLP(FD) system returns unknown: propagation alone cannot decide satisfiability. How do we find a solution? Invoke a complete (backtracking) solver over the given variable domains.

Labelling

- The built-in predicate labeling invokes the complete constraint solver.
- labeling(Vs) takes a list of finite domain variables Vs and finds a solution.

[W,P,C] :: [0..9],
4*W + 3*P + 2*C ≤ 9,
15*W + 10*P + 7*C ≥ 30,
labeling([W,P,C]).

has solutions: W=0 ∧ P=1 ∧ C=3, W=0 ∧ P=3 ∧ C=0, W=1 ∧ P=1 ∧ C=1, and W=2 ∧ P=0 ∧ C=0.
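The search that labelling performs can be sketched in plain Python (not CLP, and with no propagation): enumerate every valuation over the domains and keep those that satisfy both primitive constraints.

```python
from itertools import product

def knapsack_solutions():
    """Enumerate all valuations of W, P, C over 0..9 (what a labelling
    search does, minus propagation) and keep the ones satisfying both
    the capacity and the profit constraint."""
    return [(w, p, c)
            for w, p, c in product(range(10), repeat=3)
            if 4*w + 3*p + 2*c <= 9 and 15*w + 10*p + 7*c >= 30]
```

Running it yields exactly the four solutions of the CLP(FD) goal, in lexicographic order.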

Constrain and Generate

Typical form of a finite domain program:
- define variables and domains,
- add constraints to model the problem,
- call labelling to invoke the complete solver.

Modelling 4-Queens

Write a CLP(FD) program queens(Q1,Q2,Q3,Q4) to solve the 4-queens problem, where Qi gives the row of the queen in column i.

Hint: use all_different_neq.

4-Queens (Cont.)

queens(Q1,Q2,Q3,Q4) :-
    [Q1,Q2,Q3,Q4] :: [1..4],
    all_different_neq(Q1,Q2,Q3,Q4),
    all_different_neq(Q1,Q2+1,Q3+2,Q4+3),
    all_different_neq(Q1,Q2-1,Q3-2,Q4-3),
    labeling([Q1,Q2,Q3,Q4]).
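A brute-force Python sketch of the same model: rows must be pairwise different, and so must the two families of diagonal values Qi + (i-1) and Qi - (i-1).

```python
from itertools import product

def four_queens():
    """All solutions of the 4-queens model: Qi is the row of the queen
    in column i; rows, up-diagonals and down-diagonals must each be
    pairwise different."""
    sols = []
    for qs in product(range(1, 5), repeat=4):
        rows_ok = len(set(qs)) == 4
        up_ok = len(set(q + i for i, q in enumerate(qs))) == 4
        down_ok = len(set(q - i for i, q in enumerate(qs))) == 4
        if rows_ok and up_ok and down_ok:
            sols.append(qs)
    return sols
```

The two solutions it finds are mirror images of each other.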

Labelling

- Most CLP(FD) goals end with a call to labelling, which performs a backtracking search.
- Typical programs spend most of their time in labelling, so it is important that it is efficient.
- One great strength of CLP is that programmers can write their own labelling predicate which takes advantage of the problem structure, i.e. they can program their own search strategy.
- To help, CLP(FD) provides the built-in predicates:
  - dom(V,D): D is the list of values in the current domain of V;
  - maxdomain(V,M): M is the current maximum value of V;
  - mindomain(V,M): M is the current minimum value of V.

Labelling

Using these it is straightforward to program the built-in labelling predicate: labeling iterates through each variable V in the list of variables to be labelled, calling indomain(V) to try each of the (remaining) values in the domain of V.

labeling([]).
labeling([V|Vs]) :-
    indomain(V),        % try values for V
    labeling(Vs).

indomain(V) :-
    dom(V,D),
    member(V,D).        % try values in domain
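The same structure can be rendered as a Python generator (a sketch, not the CLP implementation): `dom` stands in for dom/2 and the `consistent` callback stands in for the constraint store's consistency check.

```python
def labeling(vs, dom, consistent, assignment=None):
    """Backtracking labelling: label variables left to right, trying
    each value in the variable's domain (indomain) and backtracking
    when the store rejects the partial assignment."""
    assignment = {} if assignment is None else assignment
    if not vs:
        yield dict(assignment)
        return
    v, rest = vs[0], vs[1:]
    for value in dom[v]:                # indomain(V): member(V, D)
        assignment[v] = value
        if consistent(assignment):
            yield from labeling(rest, dom, consistent, assignment)
        del assignment[v]               # backtrack

# A tiny example: X + Y = 4 with X, Y in 0..3.
sols = list(labeling(['X', 'Y'], {'X': range(4), 'Y': range(4)},
                     lambda a: len(a) < 2 or a['X'] + a['Y'] == 4))
```

Here the consistency check only fires once both variables are bound; a real solver would also prune partial assignments by propagation.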

Send+More=Money Example

Cryptarithmetic problem: each letter stands for a different digit, and the equation SEND + MORE = MONEY holds.

smm(S,E,N,D,M,O,R,Y) :-
    [S,E,N,D,M,O,R,Y] :: [0..9],
    constrain(S,E,N,D,M,O,R,Y),
    labeling([S,E,N,D,M,O,R,Y]).

constrain(S,E,N,D,M,O,R,Y) :-
    S ≠ 0, M ≠ 0,
    all_different_neq([S,E,N,D,M,O,R,Y]),
                1000*S + 100*E + 10*N + D
              + 1000*M + 100*O + 10*R + E
    = 10000*M + 1000*O + 100*R + 10*E + Y.

Send+More=Money Example (Cont.)

After the domain declarations, every variable has domain [0..9]. The disequations added by all_different_neq cause no change to the domains. The final equation is handled by a single propagation rule, which reduces the current domains.

Send+More=Money Example (Cont.)

Hence D(M) = [1..1]. Propagation continues, arriving at a store in which three variables (S, M and O) are fixed and all other domains are reduced. Next, labelling is called...

Send+More=Money Example (Cont.)

The call to labeling([S,E,N,D,M,O,R,Y]):
- calls indomain(S), which calls member(S,[9]): this succeeds, adding the constraint S=9;
- calls indomain(E), which calls member(E,[4,5,6,7]): trying E=4 leads to failure in the consistency checks; trying E=5 succeeds;
- calls indomain(N), which calls member(N,[6]): this adds N=6;
- calls indomain(D), which calls member(D,[7]): this adds D=7;
- etc.

Generate and Test

The methodology without constraints: first generate a valuation, then test whether it is a solution.

smm(S,E,N,D,M,O,R,Y) :-
    [S,E,N,D,M,O,R,Y] :: [0..9],
    labeling([S,E,N,D,M,O,R,Y]),
    constrain(S,E,N,D,M,O,R,Y).

This program requires an enormous number of choices before finding the solution; the previous program required only 2!

Labelling

There are two choices made in labelling:
- which variable to label, and
- which value in its domain to try.

Default labelling:
- try variables in the order of the given list;
- try values in order from min to max (as returned by dom).

We can program different strategies; these can lead to dramatic performance improvements.

First-Fail Labelling

One useful heuristic is the first-fail principle: “To succeed, try first where you are most likely to fail.”

labellingff(Vars) :-
    if Vars = [] then return
    else
        choose V ∈ Vars with smallest domain
        call indomain(V)
        compute Rest = Vars \ {V}
        call labellingff(Rest)

First-Fail Labelling

labelingff([]).
labelingff(Vs) :-
    deleteff(Vs,V,R),
    indomain(V),
    labelingff(R).

deleteff([V0|Vs],V,R) :-
    getsize(V0,S),
    minsize(Vs,S,V0,V,R).

minsize([],_,V,V,[]).
minsize([V1|Vs],S0,V0,V,[V0|R]) :-
    getsize(V1,S1), S1 < S0,
    minsize(Vs,S1,V1,V,R).
minsize([V1|Vs],S0,V0,V,[V1|R]) :-
    getsize(V1,S1), S1 ≥ S0,
    minsize(Vs,S0,V0,V,R).
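The variable-selection part of first-fail is easy to sketch in Python. This static version just orders variables by initial domain size; the real predicate re-reads the shrinking domains after each propagation step.

```python
def deleteff(vs, dom):
    """deleteff/3 in Python: pick the variable with the smallest
    current domain, returning it and the remaining variables.
    Ties go to the earlier variable, as in minsize/5."""
    v = min(vs, key=lambda u: len(dom[u]))
    rest = list(vs)
    rest.remove(v)
    return v, rest

def ff_order(vs, dom):
    """Order in which first-fail labelling would visit the variables,
    assuming the domains do not change during search."""
    order = []
    while vs:
        v, vs = deleteff(vs, dom)
        order.append(v)
    return order
```

For example, with domains of sizes 3, 1 and 2, the singleton-domain variable is labelled first.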

Choice of Domain Value

- Value choice only affects the order in which branches are explored.
- Problem-specific knowledge may lead to finding a solution faster.

N-Queens Example

The problem of placing N queens on an N×N chessboard so that no queen can take another. For large N, solutions are more likely with values in the middle of the variable domains.

Middle-Out Ordering

indomain_mid(V) :-
    ordervalues(V,L),
    member(V,L).

ordervalues(V,L) :-
    dom(V,D),
    length(D,N),
    H = N div 2,
    halvelist(H,D,[],F,S),
    merge(S,F,L).

halvelist(0,L2,L1,L1,L2).
halvelist(N,[E|R],L0,L1,L2) :-
    N ≥ 1,
    N1 = N - 1,
    halvelist(N1,R,[E|L0],L1,L2).

merge([],L,L).
merge([X|L1],L2,[X|L3]) :- merge(L2,L1,L3).
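A Python sketch of ordervalues/2: halvelist reverses the first half of the domain, and merge interleaves the second half with it, so the values nearest the middle come first.

```python
def order_mid(dom):
    """Middle-out value ordering: split the (sorted) domain in half,
    reverse the first half, then interleave the second half with it,
    mirroring merge(S, F, L) above."""
    h = len(dom) // 2
    first, second = dom[:h][::-1], dom[h:]   # F (reversed), S
    out = []
    for i in range(max(len(first), len(second))):
        if i < len(second):
            out.append(second[i])
        if i < len(first):
            out.append(first[i])
    return out
```

So a queen variable with domain [1..4] is tried in the order 3, 2, 4, 1.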

Labelling Efficiency

We can use variable and value ordering together. The number of choices made for N-queens differs considerably between these labelling strategies over different ranges of N.

Efficiency

Apart from programming labelling, we can do a number of other things to reduce the search space:
- order rules and literals in rule bodies correctly;
- add redundant constraints;
- use complex constraints such as all_different, since they have better propagation behaviour;
- use reified constraints;
- try different models for the problem;
- extend the constraint solver to provide a problem-specific complex primitive constraint;
- use incomplete heuristic techniques.

Reified Constraints

A reified constraint c ↔ B holds if the Boolean variable B reflects the truth of the constraint c, i.e. B=1 if c holds and B=0 if ¬c holds.

Write a constraint apart(F1,F2,K12) which holds if F1 and F2 are at least K12 apart:

apart(F1,F2,K12) :- F1 ≥ F2+K12.
apart(F1,F2,K12) :- F2 ≥ F1+K12.

It is better to use reified constraints: the choice between the disjuncts is delayed until labelling.

apart(F1,F2,K12) :-
    [B1,B2] :: [0,1],
    (F1 ≥ F2+K12 ↔ B1),
    (F2 ≥ F1+K12 ↔ B2),
    1 ≤ B1 + B2.

Different Problem Modellings

- Different views of the problem lead to different models.
- Depending on solver capabilities, one model may require less search to find an answer.
- Look for the model with fewer variables.
- Look for a more direct mapping to primitive constraints.
- An empirical comparison may be worthwhile.

Different Problem Modellings

A simple assignment problem: four workers w1,w2,w3,w4 and four products p1,p2,p3,p4. Assign workers to products to make a profit of at least 19. The profit matrix is:

Operations Research Model

16 Boolean variables Bij, meaning worker i is assigned product j; 11 primitive constraints; 28 choices to find all four solutions.

Better Model

Make use of disequality and complex constraints: four variables W1,W2,W3,W4 corresponding to the workers; 7 primitive constraints; 14 choices to find all four solutions.

Different Model

Four variables T1,T2,T3,T4 corresponding to the products; 7 primitive constraints; 7 choices to find all four solutions.

Optimisation

- So far we have focussed on satisfaction: finding a solution.
- CLP can also be used for optimisation.
- We use the built-in constraint minimize(G,F), which finds the solution to goal G that minimizes the objective function F.
- Usually a variant of the backtracking integer optimizer is used.

Optimisation

Consider the Smuggler’s Knapsack Problem:

[W,P,C] :: [0..9],
4*W + 3*P + 2*C <= 9,
15*W + 10*P + 7*C >= 30,
minimize(labeling([W,P,C]), -15*W - 10*P - 7*C).

The answer is W=1 ∧ P=1 ∧ C=1, with objective value -32.
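What minimize does can be sketched in Python (again without propagation): run the labelling search and keep the valuation with the smallest objective value, here minus the profit, so the best solution maximizes profit.

```python
def minimize_knapsack():
    """Sketch of minimize(labeling([W,P,C]), -Profit): scan the whole
    search space and keep the feasible valuation with the smallest
    objective value."""
    best, best_f = None, None
    for w in range(10):
        for p in range(10):
            for c in range(10):
                if 4*w + 3*p + 2*c <= 9 and 15*w + 10*p + 7*c >= 30:
                    f = -(15*w + 10*p + 7*c)
                    if best_f is None or f < best_f:
                        best, best_f = (w, p, c), f
    return best, best_f
```

A real minimize is smarter: each time a solution with value f is found, it adds the constraint F < f and resolves, pruning the remaining search.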

Branch & Bound (rept.)

Let P be a linear integer program. To solve P:
1. Solve P's linear relaxation P′.
2a. If the solution of P′ is integer-valued, it is the optimum of P.
2b. If the solution contains a fractional variable v, create two subproblems P′a and P′b by adding the bounds v ≤ ⌊v⌋ and v ≥ ⌈v⌉ respectively.
3. Solve the subproblems recursively.
4. The solution to P is the better of the solutions of P′a and P′b.

Branch & bound (rept.)

Branch & Bound in OPL (rept.)

A reflective function allows direct access to the linear relaxation (simplex value). Branch and bound can therefore be simulated by the search instruction:

while not bound(x) do
    let v = simplexValue(x) in
    let f = floor(v) in
    let c = ceiling(v) in
    try x <= f | x >= c endtry;

Branch & Bound in CLP

To solve P with relaxed optimum x=v, solve
P1: (P and x <= floor(v))
P2: (P and x >= ceiling(v))
and compare the optimum of P1 with the optimum of P2.

At any point in a CLP computation, the conjunction of all accumulated constraints must be consistent; otherwise backtracking is triggered, and backtracking “forgets” all intermediate results. The conjunction (P1 and P2) is inconsistent, because x <= floor(v) and x >= ceiling(v) cannot both hold. Therefore branch and bound cannot be implemented “naively” in CLP.

Problem Representation in CLP

Solution: for each new sub-problem, create a problem instance with new, fresh variables.

Problem representation (problem variables, objective, constraints):

problem([X,Y], -Y) :- 2*Y <= 3*X - 3, X + Y <= 5.

Bounds representation:

[ b(v1low, v1high), b(v2low, v2high), ... ]

Lower and upper bounds are either integers or the atom 'unbound' for unbounded, e.g. [ b(3, 7), b(4, unbound) ].

Constructing a fresh Problem Instance

bounded_problem(Vs, Bounds, Objective) :-
    bounds(Vs, Bounds),
    problem(Vs, Objective).

bounds([], []).
bounds([V|Vs], [b(L,U)|Bs]) :-
    (L = unbound -> true ; L <= V),
    (U = unbound -> true ; U >= V),
    bounds(Vs, Bs).

problem([X,Y], -Y) :- 2*Y <= 3*X - 3, X + Y <= 5.

bounded_problem(Xs, [b(unbound,unbound), b(unbound,unbound)], F)
    Xs is the list of problem variables, constrained only by the body of problem/2.

bounded_problem(Xs, [b(unbound,2), b(unbound,unbound)], F)
    Xs is a list of variables [X,Y] constrained by the body of problem/2 and the additional bound X <= 2.

Constructing Sub-Problems

Split on the first fractional variable; copy all other bounds.

new_bounds([], [], [], []).
new_bounds([V | Vs], [B | Bs], [BL | BLs], [BR | BRs]) :-
    ( integer(V) ->
        BL = B, BR = B,
        new_bounds(Vs, Bs, BLs, BRs)
    ;   B = b(L,U),
        Vlow = floor(V), Vhigh = ceiling(V),
        BL = b(L,Vlow), BR = b(Vhigh,U),
        BLs = Bs, BRs = Bs
    ).

Example:
new_bounds([2.6, 2.4],                            % relaxation optimum
    [b(unbound, unbound), b(unbound, unbound)],   % original problem
    [b(unbound, 2), b(unbound, unbound)],         % sub-problem A
    [b(3, unbound), b(unbound, unbound)])         % sub-problem B

Exploring the Sub-Problems

bnb(CBest, CVals, Bnds, Best, BestVals) :-
    ( minimize((F < CBest, bounded_problem(Vs, Bnds, F)), F) ->
        % improved solution found: generate a fresh problem instance,
        % solve its relaxation, and split on the first fractional variable
        new_bounds(Vs, Bnds, BndsL, BndsR),
        ( BndsL = BndsR ->
            % no fractional variable: return this solution
            Best = F, BestVals = Vs
        ;   % call b&b on both sub-problems
            bnb(CBest, CVals, BndsL, LBest, LVals),
            bnb(LBest, LVals, BndsR, Best, BestVals)
        )
    ;   % no improved solution: return the current best
        Best = CBest, BestVals = CVals
    ).

The arguments of bnb/5 are: current best objective value, current best valuation, bounds structure, improved objective value, improved valuation.
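The same scheme can be sketched in Python on a small two-variable ILP (maximize 8x + 5y subject to x + y <= 6, 9x + 5y <= 45, x, y >= 0). A toy vertex-enumeration routine stands in for the simplex relaxation (an assumption for illustration; any LP solver would do), and each sub-problem is a fresh constraint list, mirroring the fresh problem instances above.

```python
from math import floor, ceil

OBJ = (8.0, 5.0)  # maximize 8x + 5y

def lp_max(cons):
    """Toy relaxation solver standing in for simplex: maximize OBJ over
    {(x,y) : a*x + b*y <= c for (a,b,c) in cons} by enumerating
    vertices.  Assumes a bounded feasible region."""
    best = None
    for i in range(len(cons)):
        for j in range(i + 1, len(cons)):
            a1, b1, c1 = cons[i]
            a2, b2, c2 = cons[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-9:
                continue  # parallel boundaries: no vertex
            x = (c1 * b2 - c2 * b1) / det
            y = (a1 * c2 - a2 * c1) / det
            if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
                v = OBJ[0] * x + OBJ[1] * y
                if best is None or v > best[0]:
                    best = (v, x, y)
    return best  # None if infeasible

def bnb(cons, incumbent=None):
    """Branch & bound as in bnb/5: each sub-problem is a fresh
    constraint list, so no inconsistent conjunction is ever posted."""
    rel = lp_max(cons)
    if rel is None:
        return incumbent                    # sub-problem infeasible
    v, x, y = rel
    if incumbent is not None and v <= incumbent[0] + 1e-9:
        return incumbent                    # bound: cannot improve
    for idx, val in enumerate((x, y)):
        if abs(val - round(val)) > 1e-6:    # first fractional variable
            unit = [0.0, 0.0]
            unit[idx] = 1.0
            left = cons + [(unit[0], unit[1], float(floor(val)))]     # var <= floor
            right = cons + [(-unit[0], -unit[1], -float(ceil(val)))]  # var >= ceil
            incumbent = bnb(left, incumbent)
            return bnb(right, incumbent)
    cand = (v, round(x), round(y))          # integral relaxation optimum
    return cand if incumbent is None or cand[0] > incumbent[0] else incumbent

# x + y <= 6, 9x + 5y <= 45, x >= 0, y >= 0
cons = [(1.0, 1.0, 6.0), (9.0, 5.0, 45.0), (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
```

The relaxation optimum is x = 3.75, y = 2.25 (value 41.25); branching drives the search to the integer optimum 40 at x = 5, y = 0.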

Using SICStus Prolog as CLP(FD)

- SICStus Prolog is a CLP language which uses libraries to mimic CLP(R) and CLP(FD).
- To enter CLP(FD) mode: ?- use_module(library(clpfd)).
- Syntax is similar to our examples, except:
  - arithmetic constraints are written with a # prefix, e.g. #= and #\=;
  - labeling(L) becomes labeling([],L) (the first argument is a list of options);
  - use domain(ListVar,Min,Max) to initialize domains;
  - all FD constraints must be made explicit.

Example SICStus CLP(FD) Program

smm(S,E,N,D,M,O,R,Y) :-
    domain([S,E,N,D,M,O,R,Y], 0, 9),
    constrain(S,E,N,D,M,O,R,Y),
    labeling([], [S,E,N,D,M,O,R,Y]).

constrain(S,E,N,D,M,O,R,Y) :-
    S #\= 0, M #\= 0,
    all_different([S,E,N,D,M,O,R,Y]),
                 1000*S + 100*E + 10*N + D
               + 1000*M + 100*O + 10*R + E
    #= 10000*M + 1000*O + 100*R + 10*E + Y.

Using SICStus Prolog as CLP(QR)

- To enter CLP(R) mode (i.e. linear programming with floating-point arithmetic): ?- use_module(library(clpr)).
- To enter CLP(Q) mode (i.e. linear programming with rational arithmetic): ?- use_module(library(clpq)).
- Syntax is similar to Prolog, but:
  - all constraints must be made explicit with braces: {X+Y =< 2} etc.;
  - minimization is called for with the goal minimize(+Objective);
  - MIP minimization is triggered with bb_inf(+Ints, +Objective, -Inf), where Ints is the list of integral decision variables, e.g. [X,Y].

Example SICStus CLP(R) Program

?- {X>=0, Y>=0, X+Y =< 6, 9*X+5*Y =< 45}, minimize(-8*X-5*Y).

X = 3.75,
Y = 2.25

?- {X>=0, Y>=0, X+Y =< 6, 9*X+5*Y =< 45}, bb_inf([X,Y], -8*X-5*Y, Opt).

Opt = -40.0,
{X+Y=<6.0},
{X+0.5555555555555556*Y=<5.0},
{X>=0.0},
{Y>=0.0}

Complex FD-Constraints in Sicstus

Labeling Options in Sicstus: variable order and value order

Labeling Options in Sicstus

Reflective FD-Constraints in Sicstus

Summary

- CLP is a powerful modelling language for describing constraint problems.
- However it is also simple: it allows programmers to define their own application-specific constraints, that's all!
- CLP is generic in the underlying constraint domain and solver. However, backtracking search is always provided.
- CLP(FD) provides consistency techniques which can be used with backtracking search to solve CSPs.

Summary (Cont.)

- But CLP is not just a modelling language: it is a full-fledged programming language with complex data structures.
- It can be used to program:
  - search strategies,
  - heuristics,
  - constraint solvers.
- The key to efficient solution of CSPs is the programmability of CLP.