Lecture 6 – Local Search, Dr. Muhammad Adnan Hashmi, 24 February 2016

1 Lecture 6 – Local Search, Dr. Muhammad Adnan Hashmi, 24 February 2016

2  In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
 State space = set of configurations.
 Find a configuration satisfying your constraints, e.g., n-queens.
 In such cases, we can use local search algorithms:
 Keep a single "current" state and shift between states, without keeping track of paths.
 Use very limited memory.
 Find reasonable solutions in large state spaces.
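The n-queens formulation above can be made concrete. Below is a minimal sketch, assuming a complete-state formulation in which `state[i]` gives the row of the queen in column `i`; the names `conflicts` and `neighbors` are illustrative, not from the lecture:

```python
import itertools

def conflicts(state):
    """Number of attacking queen pairs; 0 means the constraints hold.

    state[i] is the row of the queen in column i, so every state is a
    complete configuration of n queens (one queen per column).
    """
    return sum(
        1
        for (c1, r1), (c2, r2) in itertools.combinations(enumerate(state), 2)
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
    )

def neighbors(state):
    """All configurations reachable by moving one queen within its column."""
    n = len(state)
    for col in range(n):
        for row in range(n):
            if row != state[col]:
                yield state[:col] + (row,) + state[col + 1:]

# (0, 1, 2, 3) puts all four queens on one diagonal: 6 attacking pairs,
# while (1, 3, 0, 2) is a 4-queens solution: 0 attacking pairs.
```

A local search over this space only ever stores one such tuple at a time, which is why the memory use stays small.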

3  Local search algorithms are also useful for solving optimization problems:
 Find the best possible state according to a given objective function.
 Example: optimize the number of products purchased by an e-commerce user.
 State: action taken by the user plus the resulting page view.
 No track is kept of the path costs between states.
 All that matters is whether the user is buying more products (or not).


5  Hill climbing: "Like climbing Everest in thick fog with amnesia."
 A loop that continually moves in the direction of increasing value, i.e., uphill.
 Terminates when it reaches a peak where no neighbor has a higher value.
 Fog with amnesia: it doesn't look ahead beyond the immediate neighbors of the current state.


7  Hill-climbing algorithm:
1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
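The five steps above can be sketched in Python. This is a minimal illustration, not the lecture's code: `hill_climb`, `random_state`, `neighbors`, and `value` are illustrative names, and the one-dimensional toy objective is an assumption for demonstration:

```python
import random

def hill_climb(random_state, neighbors, value):
    """Steepest-ascent hill climbing following the five steps above.

    random_state() samples a starting point (step 1), neighbors(s)
    lists successor states (step 2), and value(s) is the objective
    to maximize; all three names are illustrative placeholders.
    """
    current = random_state()                       # step 1
    while True:
        best = max(neighbors(current), key=value)  # steps 2-3
        if value(best) <= value(current):          # step 4 exit test
            return current                         # step 5
        current = best

# Toy unimodal objective: maximize -(x - 3)^2 over the integers,
# with neighbors x - 1 and x + 1; every run converges to x = 3.
result = hill_climb(
    random_state=lambda: random.randint(-10, 10),
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 3) ** 2,
)
```

Because the toy landscape has a single peak, the greedy loop always finds the global maximum; the next slide explains why that stops being true on harder landscapes.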

8  Greedy local search: grabs a good neighbor state without thinking ahead about where to go next.
 However, greedy algorithms generally make good progress towards the solution.
 Unfortunately, hill climbing:
 Can get stuck in local maxima.
 Can be stuck on ridges (a series of local maxima that occur close together).
 Can be stuck on plateaux (a flat area in the state-space landscape).
 Shoulder: a flat area that rises uphill later on.
 Flat local maximum: no uphill rise exists.
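The local-maximum failure mode can be demonstrated on a tiny assumed two-peak integer landscape (the landscape and all names are illustrative, not from the lecture):

```python
def climb(start, neighbors, value):
    """Steepest-ascent hill climbing from a fixed starting state."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current
        current = best

# Assumed two-peak integer landscape: local maximum at x = -5
# (value 0) and global maximum at x = +5 (value 1).
value = lambda x: -abs(abs(x) - 5) + (1 if x > 0 else 0)
neighbors = lambda x: [x - 1, x + 1]

stuck = climb(-12, neighbors, value)
# stuck == -5: starting on the left slope, the climb stops at the
# local maximum and never sees the higher peak at x = +5.
```

Every uphill step from x = -12 leads to the lesser peak, and once there no neighbor is better, so the greedy loop terminates with a suboptimal answer.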

9  Stochastic hill climbing: chooses at random from among the uphill moves, based on a probability distribution.
 First-choice hill climbing: implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state.
 Random-restart hill climbing: runs hill climbing from a series of randomly selected initial states until a solution is found.
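Random-restart hill climbing can be sketched on the same kind of two-peak landscape: with enough random starts, at least one lands in the global peak's basin. The names, restart count, and landscape are illustrative assumptions:

```python
import random

def climb(start, neighbors, value):
    """One steepest-ascent run (inner loop of the restarts below)."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current
        current = best

def random_restart_climb(random_state, neighbors, value, restarts=50):
    """Random-restart hill climbing: climb from several random
    initial states and keep the best peak found across all runs."""
    return max(
        (climb(random_state(), neighbors, value) for _ in range(restarts)),
        key=value,
    )

# Two-peak landscape: local maximum at x = -5, global maximum at x = +5.
value = lambda x: -abs(abs(x) - 5) + (1 if x > 0 else 0)
result = random_restart_climb(
    random_state=lambda: random.randint(-20, 20),
    neighbors=lambda x: [x - 1, x + 1],
    value=value,
)
# Some start lands in the basin of x = +5, so the restarts find the
# global maximum even though single runs can stop at x = -5.
```

If each restart reaches the global basin with probability p, the chance that all k restarts miss it is (1 - p)^k, which shrinks exponentially in k.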

10  Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency.

11 Simulated annealing search
 A downhill move (ΔE = Value[Next] − Value[Current] < 0) is accepted with probability e^(ΔE/T), where T is the current temperature.
 If Value[Next] is close to Value[Current], the move is more likely to be accepted.
 If the temperature is high, the exponent will be close to zero, and the probability will be close to 1.
 As the temperature approaches zero, the exponent approaches −∞, and the probability approaches zero.
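These acceptance rules can be sketched as follows. The geometric cooling schedule and all parameter values are assumptions (the lecture does not specify a schedule), and the toy objective is illustrative:

```python
import math
import random

def simulated_annealing(random_state, random_neighbor, value,
                        t0=10.0, cooling=0.995, t_min=1e-3):
    """Simulated annealing sketch; the geometric cooling schedule and
    the parameter values are assumptions, not from the lecture.

    A downhill move with delta = Value[Next] - Value[Current] < 0 is
    accepted with probability exp(delta / T): close to 1 when T is
    high or delta is small, approaching 0 as T approaches 0.
    """
    current = random_state()
    t = t0
    while t > t_min:
        nxt = random_neighbor(current)
        delta = value(nxt) - value(current)
        # Always accept uphill moves; accept downhill with prob e^(delta/T).
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling  # gradually lower the temperature
    return current

# Toy run: maximize -(x - 3)^2, proposing a random neighbor x - 1 or x + 1.
result = simulated_annealing(
    random_state=lambda: random.randint(-10, 10),
    random_neighbor=lambda x: x + random.choice([-1, 1]),
    value=lambda x: -(x - 3) ** 2,
)
```

Early on (large T) the walk wanders almost freely; by the end (small T) it behaves like hill climbing, which is exactly the "allow bad moves, but decrease their frequency" idea from the previous slide.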

12  One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1.
 Widely used in VLSI layout, airline scheduling, etc.


