1 Restart search techniques for Employee Timetabling Problems Amnon Meisels and Eliezer Kaplansky Ben-Gurion University
2 Employee Timetabling Problems. ETPs as Constraint Networks. Finding solutions to CNs: complete search algorithms and local search. Impact of randomization: the IRA algorithm. A dramatic impact. Applying learning techniques.
3 Employee Timetabling Problems Different types of employees. Assigned to many different shifts per day and week. Shifts can have many start and end times. Shifts are composed of different tasks. Employees have to be assigned to shifts over the week so that all the required tasks are fulfilled. Assignments satisfy employees' personal preferences (rather than perfectly predictable long-term cyclic rosters).
4 Example ETP Timetabling nurses in a ward of a large hospital. Each nurse is assigned 3-5 shifts per week. Wards have ~30 nurses. Typical daily shifts: morning, evening, night. Weekends have some special shifts. Each nurse has a list of preferred shifts for each week and a list of forbidden shifts, due to her personal constraints. Problem size: ~100 weekly assignments.
5 Constraint satisfaction problem (CSP) A CSP is defined over a constraint network (CN). A CN consists of a finite set of variables, each associated with a domain of values, and a set of constraints. A solution is an assignment of a value to each variable from its domain such that all the constraints are satisfied. A well-known example of a constraint satisfaction problem is graph k-colorability.
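The definition above can be made concrete with a tiny sketch (all names here are illustrative, not from the slides), using graph 3-colorability as the constraint network:

```python
# A minimal CSP sketch: variables, domains, and binary constraints,
# illustrated with 3-colorability of a triangle graph.
variables = ["A", "B", "C"]
domains = {v: {"red", "green", "blue"} for v in variables}
# Constraints: adjacent nodes must receive different colours.
edges = [("A", "B"), ("B", "C"), ("A", "C")]

def is_solution(assignment):
    """An assignment is a solution if every variable gets a value from
    its domain and every constraint (edge) is satisfied."""
    return (all(assignment[v] in domains[v] for v in variables)
            and all(assignment[u] != assignment[w] for u, w in edges))

print(is_solution({"A": "red", "B": "green", "C": "blue"}))  # True
print(is_solution({"A": "red", "B": "red", "C": "blue"}))    # False
```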
6 ETPs as Constraint Networks Variables are shift-task pairs. Employees are the assigned values Ei. Unavailabilities remove values from domains. Conflicts are binary constraints. Limits on number of assignments, either general or specific, are cumulative constraints (i.e. non-binary). Large real-world ETPs have hundreds of variables and hundreds of binary constraints and limits. Domain sizes can be large (tens of employees).
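As a hedged sketch of this modelling (the shift, task, and employee names are invented for illustration), the domain of each shift-task variable is the set of employees who are able to do the task and available for the shift:

```python
from itertools import product

# Hypothetical ETP-as-CN sketch: each (shift, task) pair is a variable;
# ability and availability constraints prune employees from its domain.
shifts = ["Mon_Morning", "Mon_Night"]
tasks = ["Head_Nurse", "Regular"]
able = {"Head_Nurse": {"E1"}, "Regular": {"E1", "E2", "E3"}}  # ability
unavailable = {"E3": {"Mon_Night"}}                           # availability

domains = {
    (s, t): {e for e in able[t] if s not in unavailable.get(e, set())}
    for s, t in product(shifts, tasks)
}
print(sorted(domains[("Mon_Night", "Regular")]))  # ['E1', 'E2']
```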
7 ETPs - Constraints Requirements: shifts need a required number of employees assigned to each of their tasks -- Two Senior_Nurses in Morning_Shifts. Ability: employees are assigned to tasks according to their abilities -- Certified nurses can't be assigned to Head_Nurse. Availability: personal preferences of employees restrict their assignment to only a subset of shifts -- Senior doctors are not assigned to Friday_Shifts.
8 ETPs - Constraints (II) Conflicts: employees cannot be assigned to two conflicting shifts -- No Morning_Shift following a Night_Shift. Workload: there are bounds on the number of tasks assigned to each employee -- Maximum 10 hours a day, 40 hours per week. Regulations: certain limits are imposed on the number of specific tasks assigned to each employee -- At most 3 Night_Shifts in 2 weeks.
9 An assignment table Rows for employees. Columns for shifts. Assigned values are tasks
10 Finding solutions to CNs Complete search algorithm - backtracking. Local search.
11 Complete Search on CNs Complete search algorithm – backtracking. Problem difficulty varies widely. Intelligent backtracking methods perform well. Heuristics (e.g. variable ordering) have a strong impact. Even small problems can be practically unsolvable. Meaningful parameters for discerning problem difficulty are hard to find.
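The slides' baseline is FC-CBJ; as a simpler stand-in, plain chronological backtracking with a static variable ordering can be sketched as follows (a sketch only, with the consistency check passed in as a callback):

```python
def backtrack(assignment, variables, domains, consistent):
    """Chronological backtracking: extend the partial assignment one
    variable at a time, undoing a choice when it leads to a dead end.
    (No forward checking or conflict-directed backjumping here.)"""
    if len(assignment) == len(variables):
        return dict(assignment)           # all variables assigned
    var = variables[len(assignment)]      # static variable ordering
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]               # undo and try the next value
    return None                           # dead end: backtrack

# Usage: 3-colour a triangle graph.
edges = [("A", "B"), ("B", "C"), ("A", "C")]
ok = lambda a: all(a[u] != a[w] for u, w in edges if u in a and w in a)
solution = backtrack({}, ["A", "B", "C"],
                     {v: ["r", "g", "b"] for v in "ABC"}, ok)
```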
12 Local search Form a search space from all possible assignment states. Move on this space, guided by a cost function, attempting to improve the current state. Local search algorithms move locally, in a limited neighborhood (limited moves). Stop if a goal state has been reached or if some criterion on iterations/improvements holds. For pure search problems the cost function can be the number of constraint violations and the goal has cost 0.
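A minimal hill-climbing sketch under these definitions (cost = number of violated constraints, goal cost 0). The repair step shown, re-assigning one variable to its least-conflicting value, is a min-conflicts-style assumption, not necessarily the exact move used in the talk:

```python
import random

def cost(state, constraints):
    """Cost function for pure search problems: count violated constraints."""
    return sum(1 for c in constraints if not c(state))

def hill_climb(state, domains, constraints, max_steps=1000, rng=random):
    """Move in a limited neighbourhood: re-assign one variable per step
    to its cheapest value; stop at cost 0 or after max_steps moves."""
    for _ in range(max_steps):
        if cost(state, constraints) == 0:
            return state                    # goal state reached
        var = rng.choice(sorted(state))     # pick a variable to repair
        state[var] = min(domains[var],
                         key=lambda v: cost({**state, var: v}, constraints))
    return None                             # no solution within budget

# Usage: repair an all-red colouring of a triangle graph.
edges = [("A", "B"), ("B", "C"), ("A", "C")]
constraints = [lambda s, e=e: s[e[0]] != s[e[1]] for e in edges]
doms = {v: ["r", "g", "b"] for v in "ABC"}
result = hill_climb({v: "r" for v in "ABC"}, doms, constraints,
                    rng=random.Random(0))
```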
13 Boosting Combinatorial Search Through Randomization Carla P. Gomes Cornell University Running the search up to a certain cutoff point and then restarting at the root of the tree. It can be advantageous to terminate searches which appear to be "stuck", exploring a part of the space far from a solution.
14 Cutoff value ??? The best available strategy is a trial-and-error process, where one experiments with various cutoff values (Gomes). Start at relatively low values, since the optimal cutoff for these problems tends to lie below the median of the distribution (Gomes). Iterative cutoff improvement: while guaranteed to find a solution, if one exists, it utilizes randomization to climb out of local minima. By searching systematically through cutoff space, the randomized technique becomes a complete search.
15 Iterative Restart Algorithm - IRA Set an initial value for the cutoff parameter C. Repeat N times: run A for a fixed number of C backtracks. If A finds a solution or proves that none exists, stop. Otherwise, restart A from the beginning, using an independent random seed, for another C backtracks. Then multiply C by F and repeat.
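The loop above can be sketched directly. The `solve` interface here is an assumption: it is taken to return a solution, report unsatisfiability, or return `None` when the backtrack budget C is exhausted:

```python
import random

def ira(solve, cutoff, tries, factor=2):
    """Iterative Restart Algorithm sketch: run the solver with a
    backtrack budget of `cutoff`, restarting up to `tries` times with
    fresh random seeds, then multiply the cutoff by `factor` (C <- C*F)."""
    while True:
        for _ in range(tries):                    # N restarts at this cutoff
            result = solve(seed=random.randrange(2**32),
                           max_backtracks=cutoff)
            if result is not None:                # solved, or proved unsat
                return result
        cutoff *= factor

# Usage with a stub solver that needs a budget of at least 16 backtracks.
stub = lambda seed, max_backtracks: "solved" if max_backtracks >= 16 else None
answer = ira(stub, cutoff=4, tries=3)  # cutoff grows 4 -> 8 -> 16
```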
16 Preliminary experiments Our baseline was the best-performing backtracking algorithm: Forward Checking with Conflict-Directed Backjumping (FC-CBJ). In our preliminary experiments, we compare the number of constraint checks the solver executes for different cutoff parameters.
17 The test problems We use a real-world ETP: timetabling of nurses in a large Israeli hospital. We randomly generate test problems by adding or removing personal constraints. We make sure that all our test problems are solvable.
18 Difficulty vs. Personal constraints (FC – CBJ)
19 Systematic randomization Introducing systematic randomization into complete search algorithms, by selecting a cutoff value and running restart search. Low enough cutoff values dramatically decrease the running time of the complete search. The effect is consistent across a large range of constraint densities. The same set of test problems is now solved faster than pure FC-CBJ, by 4 orders of magnitude.
21 Applying Learning Techniques A learning paradigm monitors the search pattern of the solver and records specific parameters of the search. Main parameters of IRA: the number of backtracks performed before restarting the search (the cutoff parameter C); how many times to restart before increasing the cutoff parameter (the N parameter); by what factor to increase the cutoff parameter (the F parameter).
22 The IRA Matrix Form a matrix of two of these parameters -- the cutoff starting value and the number of tries before increasing it -- and record the search effort for a given problem. Build matrices for problem-class "representatives". The third parameter, the multiplication factor F of the cutoff, is fixed at 2.
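The matrix can be sketched as a sweep over the two free parameters, with F held at 2. The `measure_effort` callback is hypothetical: in real runs it would return, say, the number of constraint checks for one IRA run on a representative problem:

```python
def ira_matrix(measure_effort, start_cutoffs, restart_counts):
    """Record search effort for every (starting cutoff C, restarts N)
    pair, with the multiplication factor F fixed at 2."""
    return {(c, n): measure_effort(cutoff=c, tries=n, factor=2)
            for c in start_cutoffs
            for n in restart_counts}

# Usage with a stub effort measure (a real one would run the solver).
stub = lambda cutoff, tries, factor: cutoff * tries * factor
matrix = ira_matrix(stub, start_cutoffs=[2, 4, 8], restart_counts=[1, 5])
```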
25 Adaptive Solver Find classifying parameters of problems: domain size; density of constraints; tightness of limits; number of employees; abilities. Accumulate data on search parameters (the IRA matrix). Search for repeating patterns of success. Tune up the solver by setting the search parameters to fit the problem class. Recognize the correlations of problem-classifying parameters with search parameters.
26 Preliminary Conclusions The IRA matrix looks promising for small values, for both rows and columns. For some problem classes, restart search is better than hill-climbing. Restart (complete) search is very effective on ETPs. Very low cutoff values seem to perform best (Gomes).