
1 Local Search

2 Systematic versus local search
• Systematic search
  – Breadth-first, depth-first, IDDFS, A*, IDA*, etc.
  – Keep one or more paths in memory and record which alternatives have been explored
  – The path from the initial state to the goal state is the solution
• Local search
  – Does not maintain history information
  – Does not need the path to the solution

3 Example on next slide

4 Neighbor = move any queen to another position in the same column. For 4-queens, # of neighbors = ?
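As a sketch (code mine, not from the slides), the neighborhood function can be written as follows, assuming a state is a tuple giving the row of the queen in each column:

```python
def neighbors(state):
    """All states reachable by moving one queen to another row in its column."""
    n = len(state)
    return [state[:c] + (r,) + state[c + 1:]
            for c in range(n)
            for r in range(n) if r != state[c]]

# Each of the 4 queens can move to 3 other rows, so 4 * 3 = 12 neighbors.
print(len(neighbors((0, 1, 2, 3))))  # 12
```

This answers the question on the slide: each of the 4 queens has 3 alternative rows in its column, giving 12 neighbors.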

5 What is needed:
• A neighborhood function
• A "goodness" function, which must assign a value to non-solution configurations too
  – For 8-queens: the number of pair-wise conflicts
  – Maximize value = minimize cost
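A minimal sketch of such a goodness function (my own code; the name `conflicts` is a hypothetical choice), counting attacking pairs so that minimizing this cost is equivalent to maximizing the number of non-attacking pairs:

```python
def conflicts(state):
    """Number of attacking queen pairs (same row or same diagonal).

    A state is a tuple: state[c] is the row of the queen in column c,
    so two queens can never share a column.
    """
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j]               # same row
               or abs(state[i] - state[j]) == j - i)  # same diagonal

print(conflicts((0, 1, 2, 3)))  # all four queens on one diagonal: 6
print(conflicts((1, 3, 0, 2)))  # a 4-queens solution: 0
```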

6 Hill-climbing and systematic search
• Hill-climbing has a lot of freedom in deciding which node to expand next, but it is incomplete even for finite search spaces.
  – It may re-visit the same state multiple times.
  – Good for problems that have solutions.
• Systematic search is complete (its search tree keeps track of the parts of the space that have been visited).
  – Good for problems where solutions may not exist, or where the whole point is to show that there are no solutions,
  – or where the state space is densely connected (making repeated exploration of states a big issue).
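Greedy hill-climbing on n-queens can be sketched as below (code and helper names `conflicts`/`neighbors` are mine, following the neighborhood and goodness functions described on the earlier slides):

```python
def conflicts(state):
    """Number of attacking queen pairs (same row or same diagonal)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def neighbors(state):
    """States reachable by moving one queen to another row in its column."""
    n = len(state)
    return [state[:c] + (r,) + state[c + 1:]
            for c in range(n) for r in range(n) if r != state[c]]

def hill_climb(state):
    """Always move to the best neighbor; stop at a local minimum.

    Note the incompleteness: the returned state may still have
    conflicts if the search gets stuck in a local minimum.
    """
    while True:
        best = min(neighbors(state), key=conflicts)
        if conflicts(best) >= conflicts(state):
            return state  # no neighbor improves: local minimum reached
        state = best
```

The stopping condition makes the incompleteness concrete: once no neighbor improves the goodness, the search halts even if the state is not a solution.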

7 When the state-space landscape has local minima, any search that moves only in the greedy direction cannot be complete.

8 Making hill-climbing asymptotically complete
• Random-restart hill-climbing
  – Keep some bound B. When you have made more than B moves, reset the search with a new random initial seed and start again.
• "Biased random walk": avoid being greedy when choosing the seed for the next iteration
  – With probability p, choose the best child; with probability (1 − p), choose one of the children randomly.
• Simulated annealing
  – Similar to the previous idea, except the probability p itself is increased asymptotically to one (so you are more likely to tolerate a non-greedy move in the beginning than towards the end).
With the random-restart or biased-random-walk strategies, we can solve very large problems (million-queen problems) in minutes!
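Random-restart hill-climbing can be sketched as follows (my own code; `conflicts` and `neighbors` are the hypothetical n-queens helpers from the earlier slides, and the bound B is a per-restart move limit):

```python
import random

def conflicts(state):
    """Number of attacking queen pairs (same row or same diagonal)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def neighbors(state):
    """States reachable by moving one queen to another row in its column."""
    n = len(state)
    return [state[:c] + (r,) + state[c + 1:]
            for c in range(n) for r in range(n) if r != state[c]]

def random_restart(n, bound=100):
    """Greedy descent from random seeds until a conflict-free state is found."""
    while True:
        state = tuple(random.randrange(n) for _ in range(n))  # new random seed
        for _ in range(bound):  # at most B moves per restart
            if conflicts(state) == 0:
                return state
            best = min(neighbors(state), key=conflicts)
            if conflicts(best) >= conflicts(state):
                break  # local minimum: abandon this run and restart
            state = best
```

Each individual descent can still get stuck, but restarting from fresh random seeds makes eventually finding a solution (when one exists) probability one.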

9 Simulated annealing
http://www.freepatentsonline.com/6725437.html
• For p increasing from 1 − ε to 1:
  – With probability p, choose the best child; with probability (1 − p), choose one of the children randomly.
• You are more likely to tolerate a non-greedy move in the beginning than towards the end.
• Minimize cost.
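The textbook formulation of simulated annealing phrases the same idea with a temperature T instead of the probability p: a random neighbor that worsens the cost by ΔE is accepted with probability e^(−ΔE/T), and T is cooled towards zero, so non-greedy moves become ever less likely. A self-contained sketch (function names, the cooling schedule, and the toy demo below are my choices, not from the slides):

```python
import math
import random

def simulated_annealing(state, cost, neighbors, schedule, steps=10_000):
    """Accept a worsening move of size dE with probability exp(-dE / T)."""
    for t in range(1, steps):
        T = schedule(t)
        if T <= 0:
            break
        nxt = random.choice(neighbors(state))
        dE = cost(nxt) - cost(state)
        if dE < 0 or random.random() < math.exp(-dE / T):
            state = nxt  # always take improvements; sometimes tolerate worse
    return state

# Toy demo: minimize f(x) = x^2 over the integers, neighbors are x - 1 and x + 1.
result = simulated_annealing(
    40,
    cost=lambda x: x * x,
    neighbors=lambda x: [x - 1, x + 1],
    schedule=lambda t: 100 * 0.95 ** t,  # geometric cooling (an arbitrary choice)
)
```

Early on, T is large and uphill moves are often tolerated; as T shrinks, the walk becomes effectively greedy and settles near the minimum at x = 0.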

10 Local beam search
• Keep track of k states rather than just one.
• Start with k randomly generated states.
• At each iteration, all the successors of all k states are generated.
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
• Local beam search = running k random restarts in parallel? No: useful information is passed among the parallel search threads.
• Beam search with biased random selection of the k survivors = stochastic beam search.
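A sketch of local beam search for n-queens (my own code; `conflicts` and `neighbors` are the hypothetical helpers from the earlier slides). The key line is pooling the successors of all k states before selecting survivors, which is what passes information between the parallel searches:

```python
import random

def conflicts(state):
    """Number of attacking queen pairs (same row or same diagonal)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def neighbors(state):
    """States reachable by moving one queen to another row in its column."""
    n = len(state)
    return [state[:c] + (r,) + state[c + 1:]
            for c in range(n) for r in range(n) if r != state[c]]

def beam_search(n, k=10, iters=200):
    """Keep the k best states among all successors of the current k states."""
    beam = [tuple(random.randrange(n) for _ in range(n)) for _ in range(k)]
    for _ in range(iters):
        for s in beam:
            if conflicts(s) == 0:
                return s  # goal state found
        # Pool every successor of every beam state, then keep the k best.
        pool = {nb for s in beam for nb in neighbors(s)}
        beam = sorted(pool, key=conflicts)[:k]
    return min(beam, key=conflicts)

solution = beam_search(8, k=20)
```

Replacing the deterministic `sorted(...)[:k]` selection with a fitness-weighted random choice of k survivors would give stochastic beam search.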

11 Genetic algorithm
• Stochastic beam search in which a successor state is generated by combining two parent states rather than by modifying a single state.
• Motivated by evolutionary biology, i.e., sexual reproduction.
• Start with k randomly generated states (the population).
• A state is represented as a string over a finite alphabet (often a string of 0s and 1s).
• An evaluation function (fitness function) assigns higher values to better states.
• Produce the next generation of states by selection, crossover, and mutation.
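The selection/crossover/mutation loop can be sketched for n-queens as below (design choices mine: fitness-proportional selection, one-point crossover, and point mutation; the fitness is the number of non-attacking pairs, as on the next slide):

```python
import random

def conflicts(state):
    """Number of attacking queen pairs (same row or same diagonal)."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def fitness(state):
    """Number of non-attacking pairs: higher is better."""
    n = len(state)
    return n * (n - 1) // 2 - conflicts(state)

def genetic_algorithm(n, k=50, generations=500, p_mut=0.1):
    best_pairs = n * (n - 1) // 2  # fitness of a solution
    pop = [tuple(random.randrange(n) for _ in range(n)) for _ in range(k)]
    for _ in range(generations):
        for s in pop:
            if fitness(s) == best_pairs:
                return s
        weights = [fitness(s) for s in pop]  # fitness-proportional selection
        nxt = []
        for _ in range(k):
            a, b = random.choices(pop, weights=weights, k=2)
            cut = random.randrange(1, n)
            child = list(a[:cut] + b[cut:])        # one-point crossover
            if random.random() < p_mut:            # point mutation
                child[random.randrange(n)] = random.randrange(n)
            nxt.append(tuple(child))
        pop = nxt
    return max(pop, key=fitness)

solution = genetic_algorithm(8)
```

The row-per-column state encoding is exactly the "string over a finite alphabet" the slide mentions, with the alphabet being the row indices 0..n−1.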

12 Fitness function: number of non-attacking pairs of queens
Normalized fitness: each state's fitness divided by the population's total fitness, used as its selection probability.

13 Summary
• Local search avoids the memory problem of systematic search.
• It can be improved by introducing randomness:
  – Random restart
  – Biased random walk
  – Simulated annealing
  – (Stochastic) beam search
  – Genetic algorithm

