Introduction to Constraint Programming


1 Introduction to Constraint Programming
Peter van Beek University of Waterloo

2 Acknowledgements Patrick Prosser Christian Schulte Michael Chase
Abid Malik Tyrel Russell

3 Outline Introduction Constraint propagation Backtracking search
Global constraints Symmetry Modeling

4 Some additional resources
“Handbook of Constraint Programming,” edited by F. Rossi, P. van Beek, T. Walsh; “Constraint-Based Local Search,” by Pascal Van Hentenryck and Laurent Michel; “Integrated Methods for Optimization,” by John N. Hooker; “Programming with Constraints,” by Kim Marriott, Peter J. Stuckey; “Principles of Constraint Programming,” by Krzysztof Apt; “Constraint Processing,” by Rina Dechter

5 Outline Introduction Constraint propagation Backtracking search
Global constraints Symmetry Modeling

6 What is constraint programming?
Idea: Solve a problem by stating constraints on acceptable solutions Advantages: constraints often a natural part of problems especially true of difficult combinatorial problems once problem is modeled using constraints, wide selection of solution techniques available Constraint programming is an active area of research draws on techniques from artificial intelligence, algorithms, databases, programming languages, and operations research

7 What is constraint programming?
Constraint programming is similar to mathematical programming declarative user states the constraints general purpose constraint solver, often based on backtracking search, is used to solve the constraints Constraint programming is similar to computer programming extensible user-defined constraints allows user to program a strategy to search for a solution

8 What is constraint programming?
Constraint programming is a problem-solving methodology Model problem Solve model specify in terms of constraints on acceptable solutions define/choose constraint model: variables, domains, constraints define/choose search algorithm define/choose heuristics

9 What is constraint programming?
Constraint programming is a collection of core techniques Modeling deciding on variables/domains/constraints improving the efficiency of a model Solving local consistency constraint propagation global constraints search backtracking search hybrid methods

10 Acknowledgement: Patrick Prosser
Place numbers 1 through 8 on the nodes, where each number appears exactly once and no two connected nodes have consecutive numbers.

11 Backtracking search ? Guess a value, but be prepared to backtrack
Which nodes are hardest to number?

12 Backtracking search ? Which nodes are hardest to number?

13 Backtracking search ? Which are the least constraining values to use?

14 Backtracking search ? 1 8 Symmetry means we don’t need to consider:

15 Inference/propagation
{1,2,3,4,5,6,7,8} ? 1 8 We can now eliminate many values for other nodes

16 Inference/propagation
{3,4,5,6} ? 1 8 {3,4,5,6} By symmetry

17 Inference/propagation
{3,4,5,6} {1,2,3,4,5,6,7,8} ? 1 8 {3,4,5,6}

18 Inference/propagation
{3,4,5,6} {3,4,5,6} ? 1 8 {3,4,5,6} {3,4,5,6} By symmetry

19 Inference/propagation
{3,4,5,6} {3,4,5,6} ? 1 8 {3,4,5,6,7} {2,3,4,5,6} {3,4,5,6} {3,4,5,6}

20 Inference/propagation
{3,4,5,6} {3,4,5,6} ? 1 8 2 7 {3,4,5,6} {3,4,5,6}

21 Inference/propagation
{3,4,5,6} {3,4,5,6} ? 1 8 2 7 {3,4,5,6} {3,4,5,6} And propagate

22 Inference/propagation
{3,4,5} {4,5,6} ? 1 8 2 7 {3,4,5} {4,5,6} Guess a value, but be prepared to backtrack

23 Inference/propagation
{4,5,6} 3 1 ? 8 2 7 {3,4,5} {4,5,6} Guess a value, but be prepared to backtrack

24 Inference/propagation
{4,5,6} 3 1 ? 8 2 7 {3,4,5} {4,5,6} And propagate

25 Inference/propagation
{5,6} 3 1 ? 8 2 7 {4,5} {4,5,6} More propagation?

26 Inference/propagation
3 1 4 5 8 6 2 7 A solution

27 Constraint programming methodology
Model problem Solve model specify in terms of constraints on acceptable solutions define/choose constraint model: variables, domains, constraints define/choose search algorithm define/choose heuristics Constraint Satisfaction Problem

28 Constraint satisfaction problem (CSP)
A CSP is defined by: a set of variables {x1, …, xn} a set of values for each variable dom(x1), …, dom(xn) a set of constraints {C1, …, Cm} A solution to a CSP is a complete assignment to all the variables that satisfies the constraints

29 Given a CSP Determine whether it has a solution or not
Find one solution Find all solutions Find an optimal solution, given some cost function

30 Example domains and constraints
Reals, linear constraints 3x + 4y ≤ 7, 5x – 3y + z = 2 Gaussian elimination, linear programming Integers, linear constraints integer linear programming, branch-and-bound Boolean values, clauses Here: finite domains rich constraint languages user-defined constraints global constraints

31 Constraint languages Usual arithmetic operators:
=, , , < , > ,  , + , , *, /, absolute value, exponentiation e.g., 3x + 4y  7, 5x3 – x*y = 9 Usual logical operators: , , ,  (or “if … then”) e.g., if x = 1 then y = 2, x  y  z, (3x + 4y  7)  (x*y = z) Global constraints: alldifferent(x1, …, xn)  pairwise different cardinality(x1, …, xn, l, u)  each value must be assigned to at least l variables and at most u variables Table constraints

32 Constraint model for puzzle
variables v1, …, v8 domains {1, …, 8} constraints |v1 – v2| ≠ 1, |v1 – v3| ≠ 1, …, |v7 – v8| ≠ 1, alldifferent(v1, …, v8)
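A way to make the model concrete: the short Python sketch below states the puzzle's constraints and checks them by brute force over all permutations (enumerating permutations makes the alldifferent constraint implicit). The adjacency list EDGES is a hypothetical stand-in; the real graph is the one drawn on the slide.

```python
# Brute-force check of the puzzle model: place 1..8 on nodes 0..7 so that
# no two connected nodes receive consecutive numbers.
from itertools import permutations

# Hypothetical adjacency list (0-indexed) standing in for the slide's graph.
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4),
         (3, 4), (3, 5), (4, 5), (4, 6), (5, 6), (5, 7), (6, 7)]

def satisfies(assignment):
    """Check the |vi - vj| != 1 constraint on every edge."""
    return all(abs(assignment[i] - assignment[j]) != 1 for i, j in EDGES)

solutions = [p for p in permutations(range(1, 9)) if satisfies(p)]
print(len(solutions), "solutions found for this (stand-in) graph")
```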

33 Example: Instruction scheduling
Given a basic-block of code and a multiple-issue pipelined processor, find the minimum length schedule (a + b) + c

34 Example: evaluate (a + b) + c
instructions A: r1 ← a, B: r2 ← b, C: r3 ← c, D: r1 ← r1 + r2, E: r1 ← r1 + r3; dependency DAG over A, B, C, D, E with latencies 3 and 1

35 Example: evaluate (a + b) + c
non-optimal schedule: A: r1 ← a; B: r2 ← b; nop; D: r1 ← r1 + r2; C: r3 ← c; E: r1 ← r1 + r3 (dependency DAG as before)

36 Example: evaluate (a + b) + c
optimal schedule: A: r1 ← a; B: r2 ← b; C: r3 ← c; nop; D: r1 ← r1 + r2; E: r1 ← r1 + r3 (dependency DAG as before)

37 Constraint model variables A, B, C, D, E domains {1, …, m} constraints
D  A + 3 D  B + 3 E  C + 3 E  D + 1 cardinality(A, B, C, D, E, 0, width) 3 1 A B D C E dependency DAG

38 Example: Boolean satisfiability
(x1  x2  x4)  (x2  x4  x5)  (x3  x4  x5) Given a Boolean formula, does there exist a satisfying assignment

39 Constraint model variables: x1, x2 , x3 , x4 , x5 domains:
{true, false} constraints: (x1 ∨ x2 ∨ x4), (x2 ∨ x4 ∨ x5), (x3 ∨ x4 ∨ x5)

40 Example: 3-SAT A solution x1 = false x2 = false x3 = false
x4 = true x5 = false (x1 ∨ x2 ∨ x4) ∧ (x2 ∨ x4 ∨ x5) ∧ (x3 ∨ x4 ∨ x5)

41 Example: Graph coloring
Given k colors, does there exist a coloring of the nodes such that adjacent nodes are assigned different colors

42 Example: 3-coloring variables: v1, v2, v3, v4, v5 domains:
{1, 2, 3} constraints: vi ≠ vj if vi and vj are adjacent

43 Example: 3-coloring A solution v1 = 1, v2 = 2, v3 = 2, v4 = 1, v5 = 3

44 Example: n-queens Place n queens on an n × n board so that no pair of queens attacks each other

45 Constraint model variables: x1, x2 , x3 , x4 domains: {1, 2, 3, 4}
constraints: x1 ≠ x2, |x1 – x2| ≠ 1; x1 ≠ x3, |x1 – x3| ≠ 2; x1 ≠ x4, |x1 – x4| ≠ 3; x2 ≠ x3, |x2 – x3| ≠ 1; x2 ≠ x4, |x2 – x4| ≠ 2; x3 ≠ x4, |x3 – x4| ≠ 1

46 Example: 4-queens A solution x1 = 2, x2 = 4, x3 = 1, x4 = 3

47 A closer look at constraints
An assignment (also called an instantiation) is of the form x = a, where a ∈ dom(x). A tuple t over an ordered set of variables {x1, …, xk} is an ordered list of values (a1, …, ak) such that ai ∈ dom(xi), i = 1, …, k; it can be viewed as a set of assignments {x1 = a1, …, xk = ak}. Given a tuple t, the notation t[xi] selects out the value for variable xi; i.e., t[xi] = ai.

48 A closer look at constraints
Each constraint C is a relation a set of tuples over some ordered subset of the variables, denoted by vars(C) specifies the allowed combinations of values for the variables in vars(C) The size of vars(C) is known as the arity of the constraint a unary constraint has an arity of 1 a binary constraint has an arity of 2 a non-binary constraint has arity greater than 2

49 Example
Let dom(x1) = {1, 2, 3, 4}, dom(x2) = {1, 2, 3, 4}, and let C be the constraint x1 ≠ x2 ∧ |x1 – x2| ≠ 1 (the intensional form). Then vars(C) = {x1, x2} and the tuples in C = {(1,3), (1,4), (2,4), (3,1), (4,1), (4,2)} (the extensional form, a table constraint). C is a binary constraint.
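A small Python sketch of turning the intensional constraint into its extensional (table) form by enumerating the Cartesian product of the domains:

```python
# Materialize the intensional constraint x1 != x2 and |x1 - x2| != 1
# as an extensional table constraint over dom(x1) = dom(x2) = {1, 2, 3, 4}.
from itertools import product

dom = {1, 2, 3, 4}
C = {(a, b) for a, b in product(dom, dom) if a != b and abs(a - b) != 1}
print(sorted(C))   # [(1, 3), (1, 4), (2, 4), (3, 1), (4, 1), (4, 2)]
```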

50 Constraint programming methodology
Model problem Solve model specify in terms of constraints on acceptable solutions define/choose constraint model: variables, domains, constraints define/choose search algorithm define/choose heuristics Constraint Satisfaction Problem

51 Example constraint systems/languages

52 Application areas scheduling logistics planning
supply chain management rostering timetabling vehicle routing bioinformatics networks configuration assembly line sequencing cellular frequency assignment airport counter and gate allocation airline crew scheduling optimize placement of transmitters for wireless

53 Some commercial applications

54 Testimonial Hi Prof. van Beek,
I am a graduate student from Management Sciences and was in your AI and CP courses last year. I applied some of the CP concepts like redundant modeling and exploiting problem symmetry that you taught us in class to optimization problems at Canadian Tire. This together with the MIP solver was able to give us much better results in a fraction of the time. The integration of CP and OR was at a very high level and not at the solver level, which the Optimization team at Canadian Tire found very encouraging (they do not like anything complex). Following this, I am working as a part time Optimization consultant this term to explore further research avenues for logistics problems that they have. Just thought I should share the CP success story and thank you for introducing me to CP. Best regards, A Grateful Student

55 Outline Introduction Constraint propagation Backtracking search
Global constraints Soft constraints Symmetry Modeling

56 Fundamental insight: Local consistency
A local inconsistency is an instantiation of some of the variables that satisfies the relevant constraints but: cannot be extended to one or more additional variables so cannot be part of any solution Has led to: definitions of conditions that characterize the level of local consistency of a CSP algorithms which enforce these levels of local consistency by removing inconsistencies from the CSP effective backtracking algorithms for finding solutions to a CSP that maintain a level of local consistency during the search

57 Enforcing local consistency: constraint propagation
Here, focus on: Given a constraint, remove a value from the domain of a variable if it cannot be part of a solution according to that constraint

58 Local consistency: arc consistency
Given a constraint C, a value a ∈ dom(x) for a variable x ∈ vars(C) has: a domain support in C if there exists a t ∈ C such that t[x] = a and t[y] ∈ dom(y), for every y ∈ vars(C); i.e., there exist values for each of the other variables (from their respective domains) such that the constraint is satisfied. A constraint C is: arc consistent iff for each x ∈ vars(C), each value a ∈ dom(x) has a domain support in C. A CSP is: arc consistent if every constraint is arc consistent. A CSP can be made arc consistent by repeatedly removing unsupported values from the domains of its variables.

59 Arc consistency’s other names
domain consistency hyper-arc consistency generalized arc consistency (GAC)

60 Generic arc consistency algorithm
ac() : boolean
  Q ← all variable/constraint pairs (x, C)
  while Q ≠ {} do
    select and remove a pair (x, C) from Q
    if revise(x, C) then
      if dom(x) = {} then return false
      else add pairs to Q
  return true

revise(x, C) : boolean
  change ← false
  for each v ∈ dom(x) do
    if there is no t ∈ C such that t[x] = v then
      remove v from dom(x)
      change ← true
  return change

61 Generic arc consistency algorithm
The same generic arc consistency algorithm, traced on an example: variables x, y, z; domains {1, 2, 3}; constraints C1: x < y, C2: y < z.
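A runnable Python sketch of the generic arc consistency algorithm above, with constraints given extensionally as sets of tuples over named variables (the representation and helper names are illustrative, not from the slides):

```python
# Generic arc consistency: repeatedly revise (variable, constraint) pairs.
from collections import deque

def revise(x, scope, C, domains):
    """Remove values of x that have no domain support in constraint C."""
    changed = False
    others = [y for y in scope if y != x]
    for v in list(domains[x]):
        def extend(i, partial):           # search for a supporting tuple
            if i == len(others):
                return tuple(partial[y] for y in scope) in C
            y = others[i]
            return any(extend(i + 1, {**partial, y: w}) for w in domains[y])
        if not extend(0, {x: v}):
            domains[x].discard(v)
            changed = True
    return changed

def ac(constraints, domains):
    """constraints: list of (scope, set_of_tuples). Returns False on a domain wipe-out."""
    Q = deque((x, i) for i, (scope, _) in enumerate(constraints) for x in scope)
    while Q:
        x, i = Q.popleft()
        scope, C = constraints[i]
        if revise(x, scope, C, domains):
            if not domains[x]:
                return False
            # re-examine every other variable of every constraint mentioning x
            Q.extend((y, j) for j, (sc, _) in enumerate(constraints)
                     if x in sc for y in sc if y != x)
    return True

# The slide's example: x < y, y < z with domains {1, 2, 3}.
doms = {v: {1, 2, 3} for v in "xyz"}
lt = lambda sc: (sc, {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b})
print(ac([lt(("x", "y")), lt(("y", "z"))], doms), doms)
# expected: True, with x -> {1}, y -> {2}, z -> {3}
```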

62 4-queens: Is it arc consistent?
variables: x1, x2, x3, x4 domains: {1, 2, 3, 4} constraints: x1 ≠ x2, |x1 – x2| ≠ 1; x1 ≠ x3, |x1 – x3| ≠ 2; x1 ≠ x4, |x1 – x4| ≠ 3; x2 ≠ x3, |x2 – x3| ≠ 1; x2 ≠ x4, |x2 – x4| ≠ 2; x3 ≠ x4, |x3 – x4| ≠ 1

63 Improvements Much work on efficient algorithms for table constraints
Special purpose algorithms for global constraints (coming later)

64 Local consistency: bounds consistency
Given a constraint C, a value a ∈ dom(x) for a variable x ∈ vars(C) has: an interval support in C if there exists a t ∈ C such that t[x] = a and t[y] ∈ [min(dom(y)), max(dom(y))], for every y ∈ vars(C); i.e., there exist values for each of the other variables (from their respective domains treated as an interval) such that the constraint is satisfied. A constraint C is said to be: bounds consistent iff for each x ∈ vars(C), each of the values min(dom(x)) and max(dom(x)) has an interval support in C. A CSP is: bounds consistent if every constraint is bounds consistent. A CSP can be made bounds consistent by repeatedly removing unsupported values from the domains of its variables.

65 Example of bounds consistency: instruction scheduling
variables A, B, C, D, E domains {1, …, m} constraints D ≥ A + 3, D ≥ B + 3, E ≥ C + 3, E ≥ D + 1, cardinality(A, B, C, D, E, 0, width) (dependency DAG as before)

66 Constraint propagation: Bounds consistency
variables A, B, C, D, E; domains narrowed by successive propagation steps: A: [1, 6] → [1, 3] → [1, 2]; B: [1, 3] → [1, 2]; C: [1, 3] → [3, 3]; D: [4, 6] → [4, 5]; E: [4, 6] → [5, 6] → [6, 6]. constraints D ≥ A + 3, D ≥ B + 3, E ≥ C + 3, E ≥ D + 1, cardinality(A, B, C, D, E, 1)

67 Singleton consistency
Let t be a set of assignments to some of the variables of a CSP P, e.g., {x = 1}. The CSP induced by t, denoted P|t, is the same as P except that the domain of each variable x in vars(t) contains only the one value t[x], the value that has been assigned to x by t; e.g., for P|{x = 1}, the domain of x is just {1}; everything else is the same. A CSP P is: singleton arc consistent iff for all variables x, for all a ∈ dom(x), P|{x=a} is not arc inconsistent; singleton bounds consistent iff for all variables x, for all a ∈ dom(x), P|{x=a} is not bounds inconsistent.

68 Constraint propagation: Singleton arc consistency
Consider {x1 = 1} variables x1, x2, x3, x4 domains {1, 2, 3, 4} constraints x1 ≠ x2, |x1 – x2| ≠ 1; x1 ≠ x3, |x1 – x3| ≠ 2; x1 ≠ x4, |x1 – x4| ≠ 3; x2 ≠ x3, |x2 – x3| ≠ 1; x2 ≠ x4, |x2 – x4| ≠ 2; x3 ≠ x4, |x3 – x4| ≠ 1

69 Constraint propagation: Singleton arc consistency
Consider {x1 = 2} variables x1, x2, x3, x4 domains {1, 2, 3, 4} constraints x1 ≠ x2, |x1 – x2| ≠ 1; x1 ≠ x3, |x1 – x3| ≠ 2; x1 ≠ x4, |x1 – x4| ≠ 3; x2 ≠ x3, |x2 – x3| ≠ 1; x2 ≠ x4, |x2 – x4| ≠ 2; x3 ≠ x4, |x3 – x4| ≠ 1

70 Outline Introduction Constraint propagation Backtracking search
Global constraints Symmetry Modeling

71 Constraint programming methodology
Model problem Solve model specify in terms of constraints on acceptable solutions define/choose constraint model: variables, domains, constraints define/choose search algorithm define/choose heuristics

72 Backtracking search CSPs often solved using backtracking search
Many techniques for improving efficiency of a backtracking search algorithm branching strategies, constraint propagation, nogood recording, non-chronological backtracking (backjumping), heuristics for variable and value ordering, portfolios and restart strategies techniques are not always orthogonal; combining can give a multiplicative effect or a degradation effect Best combinations of these techniques give robust backtracking algorithms that can routinely solve large, hard instances that are of practical importance

73 Outline Introduction Constraint propagation Backtracking search
branching strategies constraint propagation non-chronological backtracking nogood recording heuristics for variable and value ordering portfolios and restart strategies Global constraints Symmetry Modeling

74 Backtracking search A backtracking search is a depth-first traversal of a search tree search tree is generated as the search progresses search tree represents alternative choices that may have to be examined in order to find a solution method of extending a node in the search tree is often called a branching strategy

75 Generic backtracking algorithm
treeSearch( i : integer ) : integer
  if all variables assigned a value then return 0   // solution found
  x ← getNextVariable( )
  backtrackLevel ← i
  for each branching constraint bi do
    post( bi )
    if propagate( bi ) then
      backtrackLevel ← treeSearch( i + 1 )
    undo( bi )
    if backtrackLevel < i then return backtrackLevel
  backtrackLevel ← getBacktrackLevel()
  setNogoods()
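A self-contained Python sketch in the spirit of the pseudocode above, specialized to binary constraints and using forward checking as the propagation step (the data layout and expected output are illustrative assumptions, not the slide's own code):

```python
# Backtracking search with propagation (forward checking) on binary constraints;
# constraints[(a, b)] is a predicate over the values of variables a and b.
def solve(variables, domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    # variable ordering: smallest remaining domain first ("fail first")
    x = min((v for v in variables if v not in assignment),
            key=lambda v: len(domains[v]))
    for value in sorted(domains[x]):
        assignment[x] = value
        pruned, ok = [], True
        # propagate: filter constraints with exactly one uninstantiated variable
        for (a, b), pred in constraints.items():
            y, check = (b, lambda w: pred(value, w)) if a == x else \
                       (a, lambda w: pred(w, value)) if b == x else (None, None)
            if y is None or y in assignment:
                continue
            for w in list(domains[y]):
                if not check(w):
                    domains[y].discard(w)
                    pruned.append((y, w))
            if not domains[y]:          # domain wipe-out: this branch is a deadend
                ok = False
                break
        if ok:
            result = solve(variables, domains, constraints, assignment)
            if result:
                return result
        for y, w in pruned:             # undo propagation on backtrack
            domains[y].add(w)
        del assignment[x]
    return None

# 4-queens: xi = row of the queen in column i (0-indexed columns).
n = 4
variables = list(range(n))
domains = {i: set(range(1, n + 1)) for i in variables}
constraints = {(i, j): (lambda i, j: lambda a, b: a != b and abs(a - b) != abs(i - j))(i, j)
               for i in variables for j in variables if i < j}
print(solve(variables, domains, constraints))   # e.g. {0: 2, 1: 4, 2: 1, 3: 3}
```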

76 Branching strategies …
A node p = {b1, …, bj} in the search tree is a set of branching constraints, where bi, 1 ≤ i ≤ j, is the branching constraint posted at level i in the search tree. A node p is extended by posting a branching constraint; to ensure completeness, the constraints posted on all the branches from a node must be mutually exclusive and exhaustive. The node p = {b1, …, bj} is extended to p ∪ {bj+1} along each of the branches 1, …, k.

77 Popular branching strategies
Running example: let x be the variable branched on, let dom(x) = {1, …, 6} Enumeration (or d-way branching) variable x is instantiated in turn to each value in its domain e.g., x = 1 is posted along the first branch, x = 2 along the second branch, … Binary choice points (or 2-way branching) variable x is instantiated to some value in its domain e.g., x = 1 is posted along the first branch, x ≠ 1 along the second branch Domain splitting constraint posted splits the domain of the variable e.g., x ≤ 3 is posted along the first branch, x > 3 along the second branch
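The three strategies can be pictured as generators of branching constraints; a minimal Python sketch, with each constraint represented as an illustrative (variable, operator, value) triple:

```python
# Branching strategies for a variable x with the running example domain {1..6}.
def enumeration(x, dom):
    """d-way branching: one branch per value."""
    for v in sorted(dom):
        yield (x, "=", v)

def binary_choice(x, dom):
    """2-way branching: x = v on one branch, x != v on the other."""
    v = min(dom)
    yield (x, "=", v)
    yield (x, "!=", v)

def domain_split(x, dom):
    """Domain splitting: x <= mid on one branch, x > mid on the other."""
    mid = sorted(dom)[len(dom) // 2 - 1]
    yield (x, "<=", mid)
    yield (x, ">", mid)

d = {1, 2, 3, 4, 5, 6}
print(list(enumeration("x", d)))    # x=1, x=2, ..., x=6
print(list(binary_choice("x", d)))  # x=1, x!=1
print(list(domain_split("x", d)))   # x<=3, x>3
```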

78 Other branching strategies
Posting non-unary branching constraints, branching strategies that are specific to a class of problems Example: job shop scheduling must schedule a set of tasks t1, …, tk on a set of resources let xi be a variable representing the starting time of task ti let di be the fixed duration of task ti idea: serialize the tasks that share a resource consider two tasks t1 and t2 which share a resource post the constraint x1 + d1 ≤ x2 along one branch post the constraint x2 + d2 ≤ x1 along the other branch

79 Outline Introduction Constraint propagation Backtracking search
branching strategies constraint propagation non-chronological backtracking nogood recording heuristics for variable and value ordering portfolios and restart strategies Global constraints Symmetry Modeling

80 Constraint propagation
Effective backtracking algorithms for constraint programming maintain a level of local consistency during the search; i.e., perform constraint propagation A generic scheme to maintain a level of local consistency in a backtracking search is to perform constraint propagation at each node in the search tree if any domain of a variable becomes empty, inconsistent so backtrack

81 Constraint propagation
Backtracking search integrated with constraint propagation has two important benefits 1. removing inconsistencies during search can dramatically prune the search tree by removing deadends and by simplifying the remaining sub-problem 2. some of the most important variable ordering heuristics make use of the information gathered by constraint propagation

82 Maintaining a level of local consistency
Definitions of local consistency can be categorized by whether: only unary constraints need to be posted during constraint propagation; sometimes called domain filtering higher arity constraints may need to be posted In implementations of backtracking domains represented extensionally posting and retracting unary constraints can be done very efficiently important that algorithms for enforcing a level of local consistency be incremental

83 Some backtracking algorithms
Chronological backtracking (BT) naïve backtracking: performs no constraint propagation, only checks a constraint if all of its variables have been instantiated; chronologically backtracks Forward checking (FC) maintains arc consistency on all constraints with exactly one uninstantiated variable; chronologically backtracks Maintaining arc consistency (MAC) maintains arc consistency on all constraints with at least one uninstantiated variable; chronologically backtracks Conflict-directed backjumping (CBJ) backjumps; no constraint propagation

84 Constraint model for 4-queens
variables: x1, x2, x3, x4 domains: {1, 2, 3, 4} constraints: x1 ≠ x2, |x1 – x2| ≠ 1; x1 ≠ x3, |x1 – x3| ≠ 2; x1 ≠ x4, |x1 – x4| ≠ 3; x2 ≠ x3, |x2 – x3| ≠ 1; x2 ≠ x4, |x2 – x4| ≠ 2; x3 ≠ x4, |x3 – x4| ≠ 1

85 Search tree for 4-queens
The root branches on x1 = 1, …, x1 = 4, then on x2, x3, and x4; the leaves range from (1,1,1,1) to (4,4,4,4) and include the two solutions (2,4,1,3) and (3,1,4,2).

86 Chronological backtracking (BT)

87 Forward checking (FC)
Enforce arc consistency on constraints with exactly one variable uninstantiated. {x1 = 1} constraints: x1 ≠ x2, |x1 – x2| ≠ 1; x1 ≠ x3, |x1 – x3| ≠ 2; x1 ≠ x4, |x1 – x4| ≠ 3

88 Forward checking (FC) on 4-queens

89 Maintaining arc consistency (MAC)
Enforce arc consistency on constraints with at least one variable uninstantiated. {x1 = 1} constraints: x1 ≠ x2, |x1 – x2| ≠ 1; x1 ≠ x3, |x1 – x3| ≠ 2; x1 ≠ x4, |x1 – x4| ≠ 3; x2 ≠ x3, |x2 – x3| ≠ 1; x2 ≠ x4, |x2 – x4| ≠ 2; x3 ≠ x4, |x3 – x4| ≠ 1

90 Maintaining arc consistency (MAC) on 4-queens

91 Outline Introduction Constraint propagation Backtracking search
branching strategies constraint propagation non-chronological backtracking nogood recording heuristics for variable and value ordering portfolios and restart strategies Global constraints Symmetry Modeling

92 Non-chronological backtracking
Upon discovering a deadend, a backtracking algorithm must retract some previously posted branching constraint chronological backtracking: only the most recently posted branching constraint is retracted non-chronological backtracking: algorithm backtracks to and retracts the closest branching constraint which bears some responsibility for the deadend

93 Conflict-directed backjumping (CBJ)
Boards for the successive assignments {x1 = 1}, {x1 = 1, x2 = 3}, and {x1 = 1, x2 = 3, x4 = 2}.

94 Outline Introduction Constraint propagation Backtracking search
branching strategies constraint propagation non-chronological backtracking nogood recording heuristics for variable and value ordering portfolios and restart strategies Global constraints Symmetry Modeling

95 Nogood recording One of the most effective techniques known for improving the performance of backtracking search on a CSP is to add redundant (implied) constraints a constraint is redundant if the set of solutions does not change when the constraint is added Three methods: add hand-crafted constraints during modeling apply a consistency algorithm before solving learn constraints while solving → nogood recording

96 Nogood recording A nogood is a set of assignments and branching constraints that is not consistent with any solution i.e., there does not exist a solution (an assignment of a value to each variable that satisfies all the constraints) that also satisfies all the assignments and branching constraints in the nogood

97 Example nogoods: 4-queens
The set of assignments {x1 = 1, x2 = 3} is a nogood. To rule out the nogood, the redundant constraint ¬(x1 = 1 ∧ x2 = 3) could be recorded, which is just x1 ≠ 1 ∨ x2 ≠ 3. Recorded constraints can be checked and propagated just like the original constraints. But {x1 = 1, x2 = 3} is not a minimal nogood.
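A tiny Python sketch of recording a nogood as a checkable redundant constraint (the representation of assignments as a dict is an assumption for illustration):

```python
# Record the nogood {x1 = 1, x2 = 3} as the constraint x1 != 1 or x2 != 3.
def nogood_to_constraint(nogood):
    """nogood: dict variable -> value. Returns a predicate over (partial) assignments."""
    return lambda assignment: any(assignment.get(x) != v for x, v in nogood.items())

c = nogood_to_constraint({"x1": 1, "x2": 3})
print(c({"x1": 1, "x2": 3}))   # False: the recorded constraint is violated
print(c({"x1": 1, "x2": 4}))   # True
```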

98 Nogood recording If the CSP had included the nogood as a constraint, deadend would not have been visited Idea: record nogoods that might be useful later in the search

99 Discovering nogoods Discover nogoods when: Tricky when:
during backtracking search when current set of assignments and branching constraints fails during backtracking search when nogoods have been discovered for every branch set of assignments {x1 = 1, x2 = 1} is a nogood set of assignments {x1 = 1, x2 = 2} is a nogood set of assignments {x1 = 1, x2 = 3} is a nogood set of assignments {x1 = 1, x2 = 4} is a nogood ⇒ {x1 = 1} is a nogood Tricky when: backtracking algorithm maintains a level of local consistency in the presence of global constraints Standard in SAT solvers Currently not yet widely used for solving general CSPs

100 Outline Introduction Constraint propagation Backtracking search
branching strategies constraint propagation non-chronological backtracking nogood recording heuristics for variable and value ordering portfolios and restart strategies Global constraints Symmetry Modeling

101 Heuristics for backtracking algorithms
Variable ordering (very important) what variable to branch on next Value ordering (can be important) given a choice of variable, what order to try values

102 Variable ordering Domain dependent heuristics
Domain independent heuristics Static variable ordering fixed before search starts Dynamic variable ordering chosen during search

103 Variable ordering: Possible goals
Minimize the underlying search space static ordering example: suppose x1, x2, x3, x4 with domain sizes 2, 4, 8, 16 compare static ordering x1, x2, x3, x4 vs x4, x3, x2, x1 Minimize expected depth of any branch Minimize expected number of branches Minimize size of search space explored by backtracking algorithm intractable to find “best” variable

104 Variable ordering: Basic idea
Assign a heuristic value to a variable that estimates how difficult it is to find a satisfying value for that variable Principle: most likely to fail first or don’t postpone the hard part

105 Some dynamic variable ordering heuristics
Let rem( x | p ) be the number of values that remain in the domain of variable x after constraint propagation, given a set of branching constraints p. dom: choose the variable x that minimizes rem( x | p ) dom / deg: divide the domain size of a variable by the degree of the variable dom / wdeg: divide the domain size of a variable by the weighted degree of the variable
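A minimal Python sketch of the dom and dom/deg heuristics, assuming domains[x] holds the current domain of x and neighbours[x] the variables sharing a constraint with x (both names are illustrative):

```python
# Dynamic variable ordering: pick the variable minimizing the heuristic value.
def dom(x, domains, neighbours):
    return len(domains[x])

def dom_deg(x, domains, neighbours):
    return len(domains[x]) / max(1, len(neighbours[x]))

def choose_variable(unassigned, domains, neighbours, heuristic=dom_deg):
    # "most likely to fail first": smallest heuristic value wins
    return min(unassigned, key=lambda x: heuristic(x, domains, neighbours))

domains = {"a": {1, 2}, "b": {1, 2, 3}, "c": {1, 2, 3, 4}}
neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["a", "b"]}
print(choose_variable(domains.keys(), domains, neighbours, dom))      # a  (2 values)
print(choose_variable(domains.keys(), domains, neighbours, dom_deg))  # b  (3/2 beats 2/1 and 4/2)
```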

106 Some dynamic variable ordering heuristics
Let rem( x | p ) be the number of values that remain in the domain of variable x after constraint propagation, given a set of branching constraints p. More elaborate schemes, e.g., choose the variable x that minimizes Σ_{a ∈ dom(x)} Σ_y rem( y | p ∪ {x = a} ), where y ranges over some or all unassigned variables. Impact-based variable ordering: count up the effect (domain reductions) and pick the variable with the largest effect; works well in conjunction with singleton consistency.

107 Value ordering: Basic idea
Principle: given that we have already chosen the next variable to instantiate, choose first the values that are most likely to succeed Choose the next value for variable x: estimate the number of solutions for each value a for x estimate the probability of a solution for each value a for x Example: choose the value a ∈ dom(x) that maximizes Σ_y rem( y | p ∪ {x = a} ), where y ranges over some or all unassigned variables

108 Outline Introduction Constraint propagation Backtracking search
branching strategies constraint propagation non-chronological backtracking nogood recording heuristics for variable and value ordering portfolios and restart strategies Global constraints Symmetry Modeling

109 Portfolios Observation: Backtracking algorithms can be quite brittle, performing well on some instances but poorly on other similar instances Portfolios of algorithms have been proposed and shown to improve performance We are the last Dodos on the planet, so I’ve put all of our eggs safely into this basket…

110 Portfolios: Definitions
Given a set of backtracking algorithms and a time deadline d, a portfolio P for a single processor is a sequence of pairs (A1, t1), …, (Am, tm), where each Ai is a backtracking algorithm, ti is a time cutoff, and the ti sum to the deadline d. An algorithm selection portfolio is a portfolio where a single algorithm is selected and run to the deadline. A restart strategy portfolio is a portfolio where each Ai is the same randomized algorithm.

111 Portfolios Instance-based Class-based
intended to be used on an instance of a problem which portfolio determined online Class-based intended to be used on all instances in a class of problems which portfolio determined offline

112 Examples of general portfolios
Increasing levels of constraint propagation Phase 1 bounds consistency Phase 2 singleton bounds consistency Phase 3 singleton singleton bounds consistency Alternative search heuristics

113 Restart strategy portfolio
Randomize the backtracking algorithm randomize selection of a value randomize selection of a variable A restart strategy (t1, t2, t3, …, tm) is a sequence of cutoffs idea: the randomized backtracking algorithm is run for t1 steps. If no solution is found within that cutoff, the algorithm is restarted and run for t2 steps, and so on

114 Restart strategies Let f(t) be the probability a randomized backtracking algorithm A on instance x stops after taking exactly t steps f(t) is called the runtime distribution of algorithm A on instance x Given the runtime distribution of an instance, the optimal restart strategy for that instance is given by (t*, t*, t*, …), for some fixed cutoff t* A fixed cutoff strategy is an example of a non-universal strategy: designed to work on a particular instance

115 Universal restart strategies
Non-universal strategies are open to catastrophic failure In contrast to non-universal strategies, universal strategies are designed to be used on any instance Luby strategy: (1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, 1, …), which grows linearly Walsh strategy: (1, r, r², r³, …), r > 1, which grows exponentially
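The two universal sequences are easy to generate; a short Python sketch (the recursive formulation of the Luby sequence is one standard way to compute it):

```python
# Universal restart strategies: the Luby sequence and a geometric sequence.
def luby(i):
    """i-th term (1-based) of 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ..."""
    k = 1
    while (1 << k) - 1 < i:       # smallest k with i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

def walsh(i, r=2):
    """i-th term (1-based) of the geometric sequence 1, r, r^2, r^3, ..., r > 1."""
    return r ** (i - 1)

print([luby(i) for i in range(1, 16)])    # [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]
print([walsh(i) for i in range(1, 6)])    # [1, 2, 4, 8, 16]
```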

116 Summary: backtracking search
CSPs often solved using backtracking search Many techniques for improving efficiency of a backtracking search algorithm branching strategies, constraint propagation, nogood recording, non-chronological backtracking (backjumping), heuristics for variable and value ordering, portfolios and restart strategies Best combinations of these techniques give robust backtracking algorithms that can routinely solve large, hard instances that are of practical importance

117 Outline Introduction Constraint propagation Backtracking search
Global constraints Symmetry Modeling

118 Global constraints A global constraint is a constraint that can be specified over an arbitrary number of variables Advantages: captures common constraint patterns efficient, special purpose constraint propagation algorithms can be designed

119 Alldifferent constraint
Consists of: set of variables {x1, …, xn} Satisfied iff: each of the variables is assigned a different value

120 Alldifferent: example of arc consistency
Suppose alldifferent(x1, x2, x3, x4) where: dom(x1) = {b, c, d, e} dom(x2) = {b, d} dom(x3) = {a, b, c, d} dom(x4) = {b, d} Enforcing domain consistency yields dom(x1) = {c, e} dom(x3) = {a, c}

121 Alldifferent: algorithm for arc consistency
General idea: based on matching theory applied to variable-value graph Suppose alldifferent(x1, x2, x3, x4) where: dom(x1) = {b, c, d, e} dom(x2) = {b, d} dom(x3) = {a, b, c, d} dom(x4) = {b, d} Construct variable-value graph x1 x2 x3 x4 a b c d e

122 Alldifferent: algorithm for arc consistency
A matching is a subset of the edges such that no two edges share a vertex. A matching covers a set of vertices if each vertex participates in an edge. A matching that covers the variables is a solution to the constraint x1 x2 x3 x4 a b c d e

123 Alldifferent: algorithm for arc consistency
An alldifferent constraint is arc consistent if every edge in the variable-value graph belongs to some matching that covers the variables Remove edges/values that do not belong to some covering matching Example: the edge (x1, b)

124 Alldifferent: algorithm for arc consistency
An alldifferent constraint is arc consistent if every edge in the variable-value graph belongs to some matching that covers the variables Remove edges/values that do not belong to some covering matching Example: the edge (x1, b) belongs to no covering matching, so remove b from dom(x1)
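A simple (quadratic-time, not Régin's algorithm) Python sketch of this filtering: a value v stays in dom(x) only if forcing x = v still leaves a matching that covers all the variables, tested with a basic augmenting-path matching routine:

```python
# Arc consistency for alldifferent via repeated bipartite matching checks.
def max_matching(domains):
    """Kuhn's augmenting-path algorithm; returns {variable: matched value}."""
    match_val = {}                          # value -> variable
    def try_assign(x, seen):
        for v in domains[x]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_val or try_assign(match_val[v], seen):
                match_val[v] = x
                return True
        return False
    for x in domains:
        try_assign(x, set())
    return {x: v for v, x in match_val.items()}

def alldifferent_ac(domains):
    for x in list(domains):
        for v in list(domains[x]):
            forced = {y: (d if y != x else {v}) for y, d in domains.items()}
            if len(max_matching(forced)) < len(domains):
                domains[x].discard(v)       # no covering matching uses x = v
    return domains

doms = {"x1": {"b", "c", "d", "e"}, "x2": {"b", "d"},
        "x3": {"a", "b", "c", "d"}, "x4": {"b", "d"}}
print(alldifferent_ac(doms))
# expected: x1 -> {c, e}, x2 -> {b, d}, x3 -> {a, c}, x4 -> {b, d}
```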

125 Alldifferent: example of bounds consistency
Suppose alldifferent(A, B, C, D, E, F) where: dom(A) = {3, 4, 5, 6} dom(B) = {3, 4} dom(C) = {2, 3, 4, 5} dom(D) = {2, 3, 4} dom(E) = {3, 4} dom(F) = {1, 2, 3, 4, 5, 6} Enforcing bounds consistency yields: dom(A) = {6} dom(B) = {3, 4} dom(C) = {5} dom(D) = {2} dom(E) = {3, 4} dom(F) = {1}

126 Alldifferent: algorithm for bounds consistency
General idea: based on Hall intervals Let I be an interval and let vars(I) be the set of variables whose domains are contained in I; i.e., vars(I) = { xi | dom(xi) ⊆ I } A Hall interval is an interval I such that | vars(I) | = | I | Example: dom(A) = {3, 4, 5, 6} dom(B) = {3, 4} dom(C) = {2, 3, 4, 5} dom(D) = {2, 3, 4} dom(E) = {3, 4} dom(F) = {1, 2, 3, 4, 5, 6} Hall intervals [3, 4] vars([3,4]) = {B, E} [2, 4] vars([2,4]) = {B, D, E} [2, 5] vars([2,5]) = {B, C, D, E} [2, 6] vars([2,6]) = {A, B, C, D, E} [1, 6] vars([1,6]) = {A, B, C, D, E, F}

127 Alldifferent: algorithm for bounds consistency
If there exists a Hall interval I then any assignment of values to the variables in vars(I) will use all of the values in I So, remove these values from the domains of the other variables Example: dom(A) = {3, 4, 5, 6} dom(B) = {3, 4} dom(C) = {2, 3, 4, 5} dom(D) = {2, 3, 4} dom(E) = {3, 4} dom(F) = {1, 2, 3, 4, 5, 6} Hall intervals [3, 4] vars([3,4]) = {B, E} [2, 4] vars([2,4]) = {B, D, E} [2, 5] vars([2,5]) = {B, C, D, E} [2, 6] vars([2,6]) = {A, B, C, D, E} [1, 6] vars([1,6]) = {A, B, C, D, E, F}
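A naive Python sketch of Hall-interval pruning for bounds consistency on alldifferent, with each domain kept as a [lo, hi] interval (a real propagator does this far more efficiently):

```python
# Bounds consistency for alldifferent via explicit Hall intervals.
def alldiff_bounds(intervals):
    changed = True
    while changed:
        changed = False
        values = sorted({b for lo, hi in intervals.values() for b in (lo, hi)})
        for l in values:
            for u in (v for v in values if v >= l):
                inside = [x for x, (lo, hi) in intervals.items() if l <= lo and hi <= u]
                if len(inside) > u - l + 1:
                    raise ValueError("infeasible: too many variables in [%d, %d]" % (l, u))
                if len(inside) < u - l + 1:
                    continue                  # not a Hall interval
                for x, (lo, hi) in intervals.items():
                    if x in inside:
                        continue
                    new_lo = u + 1 if l <= lo <= u else lo
                    new_hi = l - 1 if l <= hi <= u else hi
                    if (new_lo, new_hi) != (lo, hi):
                        intervals[x] = (new_lo, new_hi)
                        changed = True
    return intervals

doms = {"A": (3, 6), "B": (3, 4), "C": (2, 5), "D": (2, 4), "E": (3, 4), "F": (1, 6)}
print(alldiff_bounds(doms))
# expected: A (6,6), B (3,4), C (5,5), D (2,2), E (3,4), F (1,1)
```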

128 Alldifferent example: Sudoku
Each Sudoku has a unique solution that can be reached logically without guessing. Enter digits from 1 to 9 into the blank spaces. Every row must contain one of each digit. So must every column, as must every 3x3 square. 5 3 7 6 1 9 8 4 2

129 Sudoku The 81 cells are numbered x1, x2, x3, … in row-major order: x1 through x9 across the first row, x10 through x18 across the second row, and so on.

130 Sudoku 5 3 7 6 1 9 8 4 2 dom(xi) = {1, …, 9}, for all i = 1, …, 81
alldifferent(x1, x2, x3, x4, x5, x6, x7, x8, x9) alldifferent(x1, x10, x19, x28, x37, x46, x55, x64, x73) alldifferent(x1, x2, x3, x10, x11, x12, x19, x20, x21) x1 = 5, x2 = 3, x5 = 7, …, x81 = 9

131 Global cardinality constraint (cardinality)
Consists of: set of variables {x1, …, xn} a domain D = dom(x1) ∪ ⋯ ∪ dom(xn) for each v ∈ D, a pair [lv, uv] Satisfied iff: the number of times a value v is assigned to a variable is at least lv and at most uv Special cases include: lv = 0, uv = 1, for all v ∈ D (the alldifferent constraint) lv = 1, uv = 1, for all v ∈ D (the permutation constraint) lv = 1, uv ≥ 1, for all v ∈ D

132 Cardinality: example of bounds consistency
Suppose cardinality(A, B, C, D, E, F, G) with value bounds 1: [1, 2], 2: [1, 2], 3: [1, 1], 4: [0, 2], 5: [0, 2], where: dom(A) = {1, 2} dom(B) = {1, 2} dom(C) = {1, 2} dom(D) = {1, 2} dom(E) = {1, 2, 3} dom(F) = {2, 3, 4, 5} dom(G) = {3, 5} Enforcing bounds consistency yields: dom(A) = {1, 2} dom(B) = {1, 2} dom(C) = {1, 2} dom(D) = {1, 2} dom(E) = {3} dom(F) = {4, 5} dom(G) = {5}

133 Sudoku 5 3 7 6 1 9 8 4 2 D = {1, 2, 3, 4, 5, 6, 7, 8, 9} lv = 1, uv = 1, for all v  D dom(xi) = {1, …, 9}, for all i = 1, …, 81 card(x1, x2, x3, x4, x5, x6, x7, x8, x9) card(x1, x10, x19, x28, x37, x46, x55, x64, x73) card(x1, x2, x3, x10, x11, x12, x19, x20, x21) x1 = 5, x2 = 3, x5 = 7, …, x81 = 9

134 One of the Sudoku “17s” (instances with only 17 given digits) Using cardinality constraints, all of these most difficult instances can be solved with just constraint propagation; i.e., no backtracking

135 Knapsack constraint Consists of: Satisfied iff:
set of variables {x1, …, xn} a scalar value ci for each xi two scalar values L and U Satisfied iff: L ≤ c1x1 + c2x2 + ⋯ + cnxn ≤ U

136 Knapsack: example of domain consistency
Suppose the knapsack constraint 80 ≤ 27x1 + 37x2 + 45x3 + 53x4 ≤ 82 where: dom(x1) = {0, 1, 2, 3} dom(x2) = {0, 1, 2, 3} dom(x3) = {0, 1, 2, 3} dom(x4) = {0, 1, 2, 3} Enforcing domain consistency yields dom(x1) = {0, 1, 3} dom(x2) = {0, 1} dom(x3) = {0, 1} dom(x4) = {0, 1} Example of a propagator that solves an NP-complete problem using a pseudo-polynomial algorithm based on dynamic programming
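A Python sketch of the dynamic-programming idea: reachable partial sums are computed forwards and backwards over the variables, and a value survives only if some forward sum plus its own contribution plus some backward sum lands in [L, U]:

```python
# Pseudo-polynomial domain consistency for L <= sum(c[i] * x[i]) <= U.
def knapsack_dc(coeffs, domains, L, U):
    n = len(coeffs)
    forward = [{0}]                         # forward[i]: sums over x[0..i-1]
    for i in range(n):
        forward.append({s + coeffs[i] * v for s in forward[i] for v in domains[i]})
    backward = [set() for _ in range(n + 1)]
    backward[n] = {0}                       # backward[i]: sums over x[i..n-1]
    for i in reversed(range(n)):
        backward[i] = {s + coeffs[i] * v for s in backward[i + 1] for v in domains[i]}
    for i in range(n):
        domains[i] = {v for v in domains[i]
                      if any(L <= f + coeffs[i] * v + b <= U
                             for f in forward[i] for b in backward[i + 1])}
    return domains

doms = [set(range(4)) for _ in range(4)]    # each x[i] in {0, 1, 2, 3}
print(knapsack_dc([27, 37, 45, 53], doms, 80, 82))
# expected: [{0, 1, 3}, {0, 1}, {0, 1}, {0, 1}]
```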

137 Element constraint Consists of: Satisfied iff:
an array of variables x = [x1, …, xn] an integer variable i a variable y with arbitrary finite domain Satisfied iff: xi = y

138 Element: example of domain consistency
Suppose xi = y where: x = [d, e, h, g]; i.e., dom(x1) = {d}, …, dom(x4) = {g} dom(i) = {2, 3, 4} dom(y) = {a, b, c, d, e, f, g} Enforcing domain consistency yields: dom(i) = {2, 4} dom(y) = {e, g}
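A small Python sketch of domain consistency for element, where each array entry is itself a variable with a domain (here the entries happen to be fixed, as in the example above):

```python
# Domain consistency for element(x, i, y), i.e. x[i] = y.
def element_dc(x_domains, i_dom, y_dom):
    new_i = {k for k in i_dom if x_domains[k] & y_dom}
    new_y = {v for k in new_i for v in x_domains[k] & y_dom}
    new_x = dict(x_domains)
    if len(new_i) == 1:                     # index fixed: x[i] must equal y
        k = next(iter(new_i))
        new_x[k] = x_domains[k] & new_y
    return new_x, new_i, new_y

x = {1: {"d"}, 2: {"e"}, 3: {"h"}, 4: {"g"}}    # x = [d, e, h, g]
print(element_dc(x, {2, 3, 4}, set("abcdefg")))
# expected: i -> {2, 4}, y -> {e, g}
```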

139 Lexicographic constraint (lex)
Consists of: an array of variables x = [x1, …, xn] an array of variables y = [y1, …, yn] x ≤lex y satisfied iff: x is lexicographically less than or equal to y (x1 < y1) or (x1 = y1 and x2 < y2) or (x1 = y1 and x2 = y2 and x3 < y3) or … or (x1 = y1 and x2 = y2 and … and xn = yn) Use: especially useful for symmetry breaking

140 Lexicographic: example of domain consistency
Suppose x = [x1, x2, x3, x4, x5] and y = [y1, y2, y3, y4, y5] where: x: {2}, {1,3,4}, {1,2,3,4,5}, {1,2}, {3,4,5} and y: {0,1,2}, {1}, {0,1,2,3,4}, {0,1}, … Enforcing domain consistency on x ≤lex y prunes these domains.

141 Other global constraints
Regular constraint sequence of variables values taken by these variables belongs to a given regular language applications: rostering and sequencing problems Cumulative constraint collection of tasks: release time, processing time, deadline, resource consumption resource capacities applications: scheduling Stretch constraint, nvalue constraint, … Optimization global constraints

142 Outline Introduction Constraint propagation Backtracking search
Global constraints Symmetry Modeling

143 Symmetry in constraint models
Many constraint models contain symmetry variables are “interchangeable” values are “interchangeable” variable-value symmetry As a result, when searching for a solution: search tree contains many equivalent subtrees if a subtree does not contain a solution, neither will equivalent subtrees elsewhere in the tree failing to recognize equivalent subtrees results in needless search

144 Example of variable symmetry: block scheduling
A, B, C, D, E domains {1, …, m} constraints D ≥ A + 3, D ≥ B + 3, E ≥ C + 3, E ≥ D + 1, cardinality(A, B, C, D, E, 1) (dependency DAG as before) Variables A and B are symmetric

145 Example of variable symmetry: block scheduling
dependency DAG

146 Example of value symmetry: 3-coloring
variables: v1, v2, v3, v4, v5 domains: {1, 2, 3} constraints: v1 ≠ v2, v1 ≠ v3, v2 ≠ v4, v3 ≠ v4, v3 ≠ v5, v4 ≠ v5

147 Example of value symmetry: 3-coloring
A solution v1 = 1, v2 = 2, v3 = 2, v4 = 1, v5 = 3, together with a mapping that permutes the colors

148 Example of value symmetry: 3-coloring
Another solution, obtained by applying the color mapping to the first solution

149 Example of value symmetry: 3-coloring
A partial non-solution v1 = 1, v2 = 2, v3 = 3 Another partial non-solution, obtained by applying the color mapping And so on …

150 Example of variable-value symmetry: 4-queens
variables: x1, x2, x3, x4 domains: {1, 2, 3, 4} constraints: x1 ≠ x2, |x1 – x2| ≠ 1; x1 ≠ x3, |x1 – x3| ≠ 2; x1 ≠ x4, |x1 – x4| ≠ 3; x2 ≠ x3, |x2 – x3| ≠ 1; x2 ≠ x4, |x2 – x4| ≠ 2; x3 ≠ x4, |x3 – x4| ≠ 1

151 Symmetries for 4-queens
The eight symmetries of the 4 × 4 board (cells numbered 1–16): identity, rotate 90, rotate 180, rotate 270, horizontal axis, vertical axis, diagonal 1, diagonal 2

152 Example of variable-value symmetry: 4-queens
A partial non-solution x1 = 1 Another partial non-solution x4 = 4 (search-tree branches: x1 = 1 / x1 ≠ 1 and x4 = 4 / x4 ≠ 4)

153 A formal definition of symmetry
Let P be a CSP where V = {x1, …, xn} is the set of variables D = dom(x1) ∪ ⋯ ∪ dom(xn) is the set of values A (solution) symmetry of P is a permutation of the set V × D that preserves the set of solutions of P special cases: value symmetry, variable symmetry

154 Symmetries and permutations
The identity and the horizontal-axis reflection written as permutations of the 16 cells; the horizontal-axis reflection exchanges rows, mapping cells 1–4 to 13–16 and cells 5–8 to 9–12 (and vice versa)

155 Symmetries and permutations
identity: x1 = 1 → x1 = 1, x1 = 2 → x1 = 2; rotate 180: x1 = 1 → x4 = 4, x1 = 2 → x4 = 3; horizontal axis: x1 = 1 → x1 = 4, x1 = 2 → x1 = 3

156 Mitigating symmetry in constraint models
Reformulate the constraint model to reduce or eliminate symmetry e.g., use set variables Break symmetry by adding constraints to model leave at least one solution eliminate some/all symmetric solutions and non-solutions Break symmetry during backtracking search algorithm recognize and ignore some/all symmetric parts of the search tree dynamically while searching

157 Breaking symmetry by adding constraints to model: block scheduling
variables A, B, C, D, E domains {1, …, m} constraints D ≥ A + 3, D ≥ B + 3, E ≥ C + 3, E ≥ D + 1, cardinality(A, B, C, D, E, 0, 1) symmetry-breaking constraint: B ≥ A + 1

158 Breaking symmetry by adding constraints to model: 3-coloring
variables: v1, v2, v3, v4, v5 domains: {1, 2, 3} constraints: vi ≠ vj if vi and vj are adjacent; break symmetry by fixing the colors in a single clique

159 Breaking symmetry by adding constraints to model: 4-queens
variables: x1, x2, x3, x4 domains: {1, 2, 3, 4} constraints: xi ≠ xj, |xi – xj| ≠ |i – j| break horizontal symmetry by adding x1 ≤ 2 break vertical symmetry by adding x2 ≤ x3 but …

160 Danger of adding symmetry breaking constraints
The two solutions of 4-queens: adding x2 ≤ x3 removes the solution (2, 4, 1, 3), and adding x1 ≤ 2 removes the solution (3, 1, 4, 2); adding both leaves no solution at all

161 Breaking symmetry during backtracking search
Let g() be a permutation Let p be a node in the search tree, i.e., a set of assignments and branching constraints Suppose node p is to be extended by x = v Post the constraint: (p ∧ g(p) ∧ x ≠ v) → g(x ≠ v)

162 Breaking symmetry during backtracking search: 4-queens
General form: (p ∧ g(p) ∧ x ≠ v) → g(x ≠ v) Here (p is empty): (x ≠ v) → g(x ≠ v) So, post: (x1 ≠ 1) → (x1 ≠ 4 ∧ x4 ≠ 1 ∧ x4 ≠ 4)

163 Outline Introduction Constraint propagation Backtracking search
Global constraints Symmetry Modeling

164 Constraint programming
Model problem specify in terms of constraints on acceptable solutions define variables (denotations) and domains define constraints in some language Solve model define algorithm design heuristics CSP

165 Example constraint systems/languages

166 Unfortunately… Often easy to state a model
Much harder to design an efficient model given a solver; even harder still to design an efficient model + solver + heuristics

167 Importance of the constraint model
“In integer programming, formulating a ‘good’ model is of crucial importance to solving the model.” G. L. Nemhauser and L. A. Wolsey Handbook in OR & MS, 1989 “Same for constraint programming.” Helmut Simonis, expert CP modeler

168 Measures for comparing models
How easy is it to write down, understand, modify, debug, communicate? How computationally difficult is it to solve?

169 Computational difficulty?
What is a good model depends on algorithm Choice of variables defines search space Choice of constraints defines how search space can be reduced how search can be guided

170 Computational complexity
How hard is it to solve a CSP instance? The class of all CSP instances is NP-hard if CSPs could be solved efficiently (polynomially), then so could Boolean satisfiability, set covering, partition, traveling salesperson problem, graph coloring, … so it is unlikely that efficient general-purpose algorithms exist

171 Should we give up? “While a method for computing the solutions to NP-complete problems using a reasonable amount of time remains undiscovered, computer scientists and programmers still frequently encounter NP-complete problems. An expert programmer should be able to recognize an NP-complete problem so that he or she does not unknowingly waste time trying to solve a problem which so far has eluded generations of computer scientists.” Wikipedia, 2009

172 An exponential curve (time in seconds vs. size of instance)

173 Another exponential curve
The curve against an acceptable solving time and the instances that arise in practice

174 Improving model efficiency
Reformulate the model change the denotation of the variables Given a model: add redundant variables add redundant constraints add a redundant model redundant (or implied): do not change the set of solutions and hence are logically redundant add symmetry-breaking constraints add dominance constraints symmetry-breaking and dominance constraints do change the set of solutions must leave at least one solution (satisfaction) or an optimal solution (optimization)

175 Reformulate the model Change the denotation of the variables
i.e., what does assigning a value to a variable mean in terms of the original problem? Example: 4-queens xi = j means place the queen in column i, row j xi = j means place the queen in row i, column j x = [i, j] means place the queen in row i, column j xij = 0 means there is no queen in row i, column j

176 Example: crossword puzzles
A crossword grid with cells numbered 1–23 and a dictionary of allowed words: aardvark, aback, abacus, abaft, abalone, abandon, …, monarch, monarchy, monarda, …, zymurgy, zyrian, zythum

177 Adding redundant (auxiliary) variables
Variables that are abstractions of other variables e.g., decision variables suppose x has domain {1, …, 10} add a Boolean variable to represent the decisions (x < 5), (x ≥ 5) Variables that represent constraints (reified constraints) e.g., associate a decision variable x with a constraint so that x takes the value 1 if the constraint is satisfied and 0 otherwise suppose there is the constraint: (y1 + d1 ≤ y2) ∨ (y2 + d2 ≤ y1) add a Boolean variable to represent (y1 + d1 ≤ y2)

178 Adding redundant constraints
Improve computational efficiency of model by adding “right” constraints dead-ends encountered earlier in search process Three methods: apply a local consistency enforcing algorithm before solving learn constraints (nogoods) while solving add hand-crafted constraints during modeling Can often be explained as projections of conjunctions of a subset of the existing constraints

179 Adding redundant models
Consider two alternative constraint models for the same problem example: 4-queens Combine into one constraint model channeling constraints xi = j ↔ yj = i for all i, j
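A small Python check of what the channeling constraints say for 4-queens, using 0-indexed illustrative arrays for the two models:

```python
# Channeling between the column model (x[i] = row of the queen in column i)
# and the row model (y[j] = column of the queen in row j): x[i] = j iff y[j] = i.
def channel_ok(x, y):
    n = len(x)
    return all((x[i] == j) == (y[j] == i) for i in range(n) for j in range(n))

x = [1, 3, 0, 2]            # column i holds a queen in row x[i]
y = [2, 0, 3, 1]            # row j holds a queen in column y[j]
print(channel_ok(x, y))     # True: the two models describe the same placement
```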

180 Outline: putting it all together
Constraint propagation Backtracking search Global constraints Symmetry Modeling

181 Case Study: An Application of Constraint Programming to Superblock Instruction Scheduling

182 Computer architecture: Performing instructions in parallel
Multiple-issue multiple functional units; e.g., ALUs, FPUs, load/store units, branch units multiple instructions can be issued (begin execution) each clock cycle issue width max number of instructions that can be issued each clock cycle on most architectures issue width less than number of functional units

183 Analogy: vehicle assembly line
Computer architecture: Performing instructions in parallel Pipelining overlap execution of instructions on a single functional unit latency of an instruction: number of cycles before the result is available execution time of an instruction: number of cycles before the next instruction can be issued on the same functional unit serializing instruction: an instruction that requires exclusive use of the entire processor in the cycle in which it is issued

184 Superblock instruction scheduling
assignment of a clock cycle to each instruction needed to take advantage of complex features of architecture sometimes necessary for correctness (VLIW) Basic block straight-line sequence of code with single entry, single exit Superblock collection of basic blocks with a unique entrance but multiple exits Given a target architecture, find schedule with minimum expected completion time

185 Example superblock
dependency DAG: nodes, one for each instruction, labeled with execution time (A:1, B:3, C:1, D:1, E:1, F:1, G:1); nodes F and G are branch instructions, labeled with the probability the exit is taken (40% and 60%); arcs represent precedence, labeled with latencies (1, 2, 5)

186 Example superblock
optimal cost schedule for a 2-issue processor: cycle 1: A, cycle 2: B, cycle 5: C, cycle 7: D, cycle 8: E, cycle 9: F, cycle 10: G (cycles 3, 4, and 6 are idle)
187 Approaches Superblock instruction scheduling is NP-complete
Heuristic approaches in all commercial and open-source research compilers greedy list scheduling algorithm coupled with a priority heuristic e.g., dependency height and speculative yield (DHASY) heuristic Here: Optimal approach useful when longer compile times are tolerable e.g., compiling for software libraries, digital signal processing, embedded applications, final production build

188 Assumptions Much previous work assumes an idealized architectural model processor is fully pipelined: every instruction has an execution time of 1 issue width of processor is equal to number of functional units processor contains no serializing instructions However, the compiler needs an accurate architectural model to schedule code in the best possible manner An architectural model is said to be realistic if it does not make any of these simplifying assumptions

189 Optimal approaches: State-of-the-art for basic blocks
Idealized architectures Wilken et al. (2000) van Beek & Wilken (2001) Malik et al. (2006) Scale up to largest basic blocks that arise in practice Realistic architectures Ertl & Krall (1991) Kästner & Winkel (2001) Liu & Chow (2002) Do not scale beyond instructions (largest that arise in practice have instructions) Here: Builds on Malik et al. (2006) Scales up to largest basic blocks that arise in practice Applies to realistic architectures

190 Optimal approaches: State-of-the-art for superblocks
Idealized architectures Shobaki & Wilken (2004) Scales up to large superblocks Realistic architectures No work Here: Scales up to larger and more difficult superblocks Applies to realistic architectures

191 Basic constraint model
variables A, B, C, D, E, F, G domains {1, …, m} constraints B ≥ A + 1, C ≥ A + 1, D ≥ B + 5, …, G ≥ F card(A, B, C, F, G, nALU) card(D, E, nFPU) card(A, …, G, issue width) cost function 40F + 60G

192 Basic constraint model (con’t)
non-fully pipelined instructions (e.g., B:3) introduce auxiliary variables PB,1, PB,2 introduce additional constraints PB,1 = B + 1, PB,2 = B + 2, card(A, B, PB,1, PB,2, C, F, G, nALU) serializing instructions: similar technique

193 Improving the model Add constraints to increase constraint propagation
implied constraints: do not change set of solutions dominance constraints: preserve an optimal solution Here: many constraints added to constraint model in extensive preprocessing stage that occurs once extensive preprocessing effort pays off as model is solved many times

194 Improving the model: Implied constraints
Let nodes i and j define a region; i.e., there is more than one path from i to j

195 Improving the model: Implied constraints
Implied (lower-bound) constraint added to model: xj ≥ xi + di,j. If the region is small enough, di,j is exactly determined by solving the region in isolation; if the region is larger, di,j is a lower bound estimate. Implied (upper-bound) constraint added to model: xj ≤ xi + di,j, only if i and j are articulation nodes and the region is small enough to be solved quickly and exactly in isolation, giving a tight upper bound on the distance between i and j in any optimal schedule.

196 Implied constraints: xj ≥ xi + di,j

197 Implied constraints: xj ≥ xi + di,j
Add: F ≥ A + 5

198 Implied constraints: xj ≥ xi + di,j
Add: H ≥ E + 5

199 Implied constraints: xj ≥ xi + di,j
Add: H ≥ A + 9

200 Improving the model: Dominance constraints
Dominance constraints added to model: xj ≥ xi requires identifying pairs of disjoint, isomorphic subgraphs the mapping must preserve instruction types, edges, latencies fast heuristic approach to finding pairs Example: {D} and {E} are subgraphs that satisfy the conditions can add an edge from D to E

201 Improving the solver: From optimization to satisfaction
Find bounds on cost function upper bound found using list scheduling algorithm Enumerate solutions to cost function (knapsack constraint) lower bound ≤ 40F + 60G ≤ upper bound Step through in increasing order of cost until one is found that can be extended to a solution to entire constraint model testing whether a solution to cost function can be extended is done using a backtracking search algorithm

202 Improving the solver: Portfolios
Use portfolio to improve performance of backtracking search algorithm Increasing levels of constraint propagation Phase 1 bounds consistency Phase 2 singleton bounds consistency Phase 3 singleton singleton bounds consistency Restart and move to next phase if solution not found within time limit

203 Improving the solver: Impact-based variable ordering
During Phases 2 & 3, use impact-based variable ordering heuristic to improve performance of backtracking search algorithm measure the importance of a variable for reducing search space very effective and essentially free as a side-effect of enforcing singleton consistency

204 Case study: Value of constraint programming
Ease of adding constraints to model realistic architectures not clear how to similarly extend previously proposed enumeration and integer programming approaches global constraints Allows and facilitates programming in the computer science sense of the word allowed us to incorporate and fine-tune ideas such as portfolios and impact-based variable ordering heuristics into our solver

205 Coming next … Combining CP and Operations Research, John Hooker, CMU
Modeling in CP, Helmut Simonis, Cork Constraint Computation Centre CP languages, systems, and examples OPL studio, Paul Shaw, ILOG Comet, Laurent Michel, U. of Connecticut

