Chapter 7 - Local Search, Part 1. Ryan Kinworthy, CSCE 990-06 Advanced Constraint Processing, 2/26/2003.


1 Chapter 7 - Local Search, Part 1
Ryan Kinworthy
CSCE 990-06 Advanced Constraint Processing
2/26/2003

2 Outline
- Chapter Introduction
- Greedy Local Search (SLS)
- Random Walk
- Properties of Local Search
- Empirical Evaluation
- Hybrids of Local Search and Inference
  - Effects of Constraint Propagation on SLS
  - Local Search on Cycle-Cutset
- Chapter Summary

3 Introduction
- Techniques for solving CSPs fall into two main categories:
  - Search (e.g., conditioning)
  - Inference
- The two can be combined (Chapter 10)
- CSPs are NP-complete: the best algorithms remain exponential in time, which is prohibitively expensive for large problems
- We therefore need to approximate to conserve time

4 Methods of Approximation
Greedy search:
- Requires little memory
- Very fast (often linear)
- Can solve some previously insoluble problems (e.g., the million-queens problem)
- Suffers from plateaus, ridges, and local minima, so it is insufficient by itself
- Combined with randomization and heuristics, it works extremely well
- Suitability depends on the problem domain: typically large problems with plenty of solutions
Greedy + stochastic gives Stochastic Local Search (SLS)

5 Advantages of SLS
- Works very well in some domains
- Can be many orders of magnitude faster than traditional search in those domains
- Example: the n-queens problem with 1,000,000 queens. SLS can solve it in less than a minute, while traditional backtracking search can only handle a few hundred queens

6 How Greedy Search Works
Informal description:
- Randomly instantiate all variables
- If any inconsistencies arise, resolve them with local repairs
Examples:
- 8-queens:
  - Place 8 queens on the board randomly, ignoring diagonal constraints
  - Resolve inconsistencies by moving a queen within its row to a column that removes as many conflicts as possible
- Traveling Salesperson:
  - Randomly select an order of cities
  - Improve the route by swapping the positions of two cities
  - Continue making these swaps until the order cannot be improved
  - The procedure can be run several times, keeping track of the best order found
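The 8-queens repair described above can be sketched as a min-conflicts loop. This is a minimal illustration, assuming one queen per row; the function names and the random tie-breaking among equally good columns are my own choices, not the slide's.

```python
import random

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col);
    cols[r] is the column of the queen in row r."""
    return sum(
        1 for r, c in enumerate(cols)
        if r != row and (c == col or abs(c - col) == abs(r - row))
    )

def min_conflicts_queens(n=8, max_steps=10000, seed=0):
    """Local-repair search for n-queens: start from a random placement
    (one queen per row, diagonal constraints ignored), then repeatedly
    move a conflicted queen within its row to a least-conflicted column."""
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]  # random initial placement
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols  # no attacks: a solution
        row = rng.choice(conflicted)
        counts = [conflicts(cols, row, c) for c in range(n)]
        best = min(counts)
        # random tie-breaking among equally good columns
        cols[row] = rng.choice([c for c in range(n) if counts[c] == best])
    return None  # step budget exhausted without finding a solution
```

For n = 8 this almost always succeeds within the step budget; it is the same repair idea that scales to very large n.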

7 Basic Properties of Local Search
- Not complete, not sound
- Anytime algorithm
- Uses hill-climbing techniques and heuristics
- The search space consists of complete states, both consistent and inconsistent
- Compare with systematic search: partial states, always consistent

8 How Greedy Local Search Works
- Start by randomly instantiating all variables
- The algorithm moves between states of the network, heading toward a solution
- At each step:
  - The current state is evaluated by a cost function (e.g., the number of violated constraints)
  - Change the value of the single variable that resolves the most violated constraints (the greatest reduction in the cost function)
- The algorithm terminates when cost == 0 (a solution, i.e., a global minimum) or when the current assignment cannot be improved (a local minimum)
- Warning: greedy local search may get stuck in a local minimum, in which case it must be restarted with a different instantiation (random restart)
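The single greedy step can be sketched for a generic CSP. This is a minimal sketch, assuming constraints are represented as Boolean predicates over a dict assignment; that representation is my own, not the book's.

```python
def cost(assignment, constraints):
    """Cost = number of violated constraints. Each constraint is a
    predicate over the full assignment (illustrative representation)."""
    return sum(1 for c in constraints if not c(assignment))

def greedy_step(assignment, domains, constraints):
    """Try every single-variable change; return the variable-value pair
    giving the largest cost reduction, or None at a local minimum."""
    current = cost(assignment, constraints)
    best, best_cost = None, current
    for var, dom in domains.items():
        for val in dom:
            if val == assignment[var]:
                continue
            trial = dict(assignment, **{var: val})
            c = cost(trial, constraints)
            if c < best_cost:
                best, best_cost = (var, val), c
    return best  # None means no single flip improves the cost
```

For example, with variables x and y, domains {0, 1}, and the single constraint x != y, the assignment {x: 0, y: 0} has cost 1 and `greedy_step` proposes flipping one variable to reach cost 0.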

9 Stochastic Local Search (SLS)
Stochastic: a number of random restarts (MAX_TRIES)
Input: a constraint network R, the number of tries MAX_TRIES, and a cost function
Output: a solution if one is found, otherwise false (the algorithm is incomplete, so false does not prove inconsistency)
SLS algorithm: repeat MAX_TRIES times:
- Initialize R with a random assignment a to all variables
- Repeat until the current assignment cannot be improved:
  - If a is consistent, return a as the solution
  - Else build a list of all the variable-value pairs (VVPs) that, when the variable is assigned the value, give a maximum improvement in the cost of the assignment
  - Pick one of these VVPs and make the assignment
Example 7.1.1 (Dechter, page 199)
Problem: SLS still gets stuck in local minima
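The restart loop above can be put together in one sketch, again assuming predicate constraints over a dict assignment (my representation, not the book's pseudocode):

```python
import random

def sls(domains, constraints, max_tries=10, seed=0):
    """Stochastic local search sketch: random restarts around a greedy
    improving loop; returns a consistent assignment or None."""
    rng = random.Random(seed)
    variables = list(domains)

    def cost(a):
        return sum(1 for c in constraints if not c(a))

    for _ in range(max_tries):
        a = {v: rng.choice(domains[v]) for v in variables}  # random restart
        improved = True
        while improved:
            if cost(a) == 0:
                return a  # consistent assignment found
            improved = False
            current = cost(a)
            # collect the variable-value pairs giving maximum improvement
            best_cost, best_pairs = current, []
            for v in variables:
                for val in domains[v]:
                    if val == a[v]:
                        continue
                    c = cost(dict(a, **{v: val}))
                    if c < best_cost:
                        best_cost, best_pairs = c, [(v, val)]
                    elif c == best_cost and c < current:
                        best_pairs.append((v, val))
            if best_pairs:  # otherwise: local minimum, trigger a restart
                v, val = rng.choice(best_pairs)
                a[v] = val
                improved = True
    return None
```

On a small chain CSP such as x != y, y != z with 0/1 domains, every non-solution has an improving flip, so a single try already finds a solution.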

10 Improving SLS
- Improve the selection of the initial assignment
- Improve the nature of the local changes considered
- Find some way to escape local minima
We can do this with heuristics; by using different combinations of heuristics we get a whole family of SLS algorithms.

11 Heuristics for Improving SLS
Plateau search:
- When a local minimum is reached, continue the search with non-improving sideways moves
Constraint weighting:
- The cost function is a weighted sum of the violated constraints, F(a) = sum_i w_i * C_i(a), where w_i is the current weight of constraint C_i, and C_i(a) = 1 if a violates constraint C_i and 0 otherwise
- At each step, the algorithm selects a VVP giving the largest reduction in F
- At a local minimum, the weights are adjusted by increasing by 1 the weight of each violated constraint, so the current assignment is no longer a local minimum relative to the new cost function
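The weighted cost F and the weight bump at a local minimum fit in a few lines; this sketch again assumes constraints are satisfaction predicates, so "a violates C_i" is `not c(a)`:

```python
def weighted_cost(assignment, constraints, weights):
    """F(a) = sum of w_i over the constraints C_i that a violates."""
    return sum(w for c, w in zip(constraints, weights) if not c(assignment))

def bump_weights(assignment, constraints, weights):
    """At a local minimum, increase by 1 the weight of each violated
    constraint; the current assignment then stops being a minimum of F."""
    return [w + (0 if c(assignment) else 1)
            for c, w in zip(constraints, weights)]
```

After a bump, a previously tied sideways move that fixes a repeatedly violated constraint becomes strictly improving under the new F.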

12 More Heuristics for Improving SLS
Tabu search:
- The same technique we learned last semester
- Keep a list of the last n variable-value assignments; when making new assignments, those on the tabu list are forbidden
Tie-breaking rules:
- Rules that decide between equally good flips (two or more values that yield the same improvement in the cost function)
- Use historical information (e.g., flip the value least recently modified)
Value propagation:
- Whenever a local minimum is reached, use some form of value propagation, such as unit resolution or arc-consistency, over the unsatisfied constraints
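The tabu list itself is just a fixed-length memory of recent variable-value assignments; a minimal sketch (class and method names are mine):

```python
from collections import deque

class TabuList:
    """Fixed-length memory of the last n variable-value assignments;
    a move on the list is forbidden until it ages out."""
    def __init__(self, n):
        self.moves = deque(maxlen=n)  # oldest entry drops automatically

    def forbid(self, var, val):
        self.moves.append((var, val))

    def allowed(self, var, val):
        return (var, val) not in self.moves
```

With n = 2, forbidding a third move automatically re-allows the first one, which is exactly the short-term-memory behavior tabu search relies on.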

13 More Heuristics for Improving SLS
Automating Max-Flips:
- How do we decide the value of MAX_TRIES, or how many steps to take during each try?
- Continue the search as long as there is progress, where progress means finding an assignment that satisfies more constraints than any found so far in that particular run
- For example: every time an improved assignment is found, allow the algorithm an additional amount of running time equal to the time it has already spent from the beginning of the try
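That "extend the budget on every improvement" rule can be sketched abstractly, counting steps instead of wall-clock time; the function names and the step-counting framing are my own illustration:

```python
def run_with_adaptive_budget(step, current_cost, max_total=10**6):
    """Sketch of the progress rule: whenever a new best cost is found at
    step t, extend the deadline to 2*t (extra budget equal to the budget
    already spent). step() performs one local-search move; current_cost()
    reports the cost of the current assignment."""
    best = current_cost()
    deadline = 1
    t = 0
    while t < deadline and t < max_total:
        t += 1
        step()
        c = current_cost()
        if c < best:
            best = c
            deadline = 2 * t  # improvement found: double the time spent so far
    return best
```

A run that keeps improving keeps extending its own deadline; a run that stalls is cut off once it has gone as long again without progress.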

14 Random Walk Strategy
Initially developed for solving SAT.
- Random walk search is another technique for escaping local minima
- It replaces a greedy move with a random walk step: a random selection among neighboring states
- The random walk strategy combines random walk search with a greedy bias toward assignments that satisfy more constraints or clauses
Examples:
- WalkSAT [Selman & Kautz], a successful algorithm introduced specifically for SAT clauses
- Simulated annealing

15 Example (1): WalkSAT [Selman & Kautz]
Input: a constraint network R, the number of flips MAX_FLIPS, the number of tries MAX_TRIES, and a probability p
Output: true if a solution is found, otherwise false
WalkSAT algorithm: repeat MAX_TRIES times:
- Create a random instantiation of all variables, called a
- Repeat MAX_FLIPS times:
  - If a is a solution, return true and a
  - Else randomly pick a violated constraint C and choose a variable of C to flip:
    - With probability p, make a random choice
    - With probability 1 - p, make the greedy choice (the one that minimizes the number of newly broken constraints)
  - Flip the value
- Compare the best assignment found so far with a and retain the better one
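The loop above can be sketched for CNF formulas. This is an illustrative implementation, not the authors' code; the DIMACS-like clause encoding (positive literal v means variable v is true, negative means false) is my assumption.

```python
import random

def walksat(clauses, n_vars, max_tries=10, max_flips=10000, p=0.5, seed=0):
    """WalkSAT sketch: clauses are lists of nonzero ints; returns a
    satisfying assignment (dict var -> bool) or None."""
    rng = random.Random(seed)

    def satisfied(clause, a):
        return any((lit > 0) == a[abs(lit)] for lit in clause)

    def break_count(a, var):
        """Clauses currently satisfied that flipping var would break."""
        before = [satisfied(cl, a) for cl in clauses]
        a[var] = not a[var]
        broken = sum(1 for cl, ok in zip(clauses, before)
                     if ok and not satisfied(cl, a))
        a[var] = not a[var]  # undo the trial flip
        return broken

    for _ in range(max_tries):
        a = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            unsat = [cl for cl in clauses if not satisfied(cl, a)]
            if not unsat:
                return a  # every clause satisfied
            clause = rng.choice(unsat)          # pick a violated clause
            if rng.random() < p:
                var = abs(rng.choice(clause))   # random walk step
            else:                               # greedy step
                var = min((abs(lit) for lit in clause),
                          key=lambda v: break_count(a, v))
            a[var] = not a[var]
    return None
```

Because the flipped variable always comes from a violated clause, even the random step is focused repair rather than blind noise, which is what distinguishes WalkSAT from plain random walk.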

