
1 Escaping Local Optima

2 Where are we?
Optimization methods
• Complete solutions: Exhaustive search, Hill climbing, Random restart, General Model of Stochastic Local Search, Simulated Annealing, Tabu search
• Partial solutions: Exhaustive search, Branch and bound, Greedy, Best first, A*, Divide and Conquer, Dynamic programming, Constraint Propagation

3 Escaping local optima
Stochastic local search
• many important algorithms address the problem of avoiding the trap of local optima (a possible source of project topics)
• Michalewicz & Fogel focus on only two:
  - simulated annealing
  - tabu search

4 Formal model of Stochastic Local Search (SLS): Hoos and Stützle
goal:
• abstract the simple search subroutines from the high-level control structure
• experiment with various search methods systematically

5 Formal model of Stochastic Local Search (SLS): Hoos and Stützle
Generalized Local Search Machine (GLSM) M = (Z, z0, M, m0, Δ, σZ, σΔ, ΤZ, ΤΔ)
• Z: set of states (basic search strategies)
• z0 ∈ Z: start state
• M: memory space
• m0 ∈ M: start memory state
• Δ ⊆ Z×Z: transition relation (when to switch to another type of search)
• σZ: set of state types; σΔ: set of transition types
• ΤZ: Z → σZ associates states with types
• ΤΔ: Δ → σΔ associates transitions with types
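To make the formalism concrete, here is a minimal Python sketch of a GLSM interpreter (our own illustration, not Hoos and Stützle's code): each state is a basic search step, and the typed transition relation decides when control switches between steps.

import random

# Minimal GLSM interpreter sketch (illustrative names, not Hoos & Stutzle's code).
class GLSM:
    def __init__(self, step_fns, transitions, start):
        self.step_fns = step_fns        # T_Z: state name -> basic search step
        self.transitions = transitions  # Delta with types: name -> [(kind, p, next)]
        self.state = start              # start state z0

    def step(self, s):
        s = self.step_fns[self.state](s)          # run one basic search step
        for kind, p, nxt in self.transitions[self.state]:
            # Det fires unconditionally; Prob(p) fires with probability p
            if kind == "Det" or (kind == "Prob" and random.random() < p):
                self.state = nxt
                break
        return s

The machines on the following slides are then just different wirings of step_fns and transitions.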

6 (0) Basic hill climbing

determine initial solution s
while s is not a local optimum
    choose s' in N(s) such that f(s') > f(s)
    s = s'
return s

M = (Z, z0, M, m0, Δ, σZ, σΔ, ΤZ, ΤΔ)
• Z: {z0, z1}
• M: {m0}  // not used in this model
• Δ ⊆ Z×Z: {(z0, z1), (z1, z1)}
• σZ: {random choice, select better neighbour}
• σΔ: {Det}
• ΤZ: ΤZ(z0) = random choice, ΤZ(z1) = select better neighbour
• ΤΔ: ΤΔ((z0, z1)) = Det, ΤΔ((z1, z1)) = Det

[diagram: z0 → z1 with a Det transition, and a Det self-loop on z1]
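A runnable version of this pseudocode on a toy "one-max" problem (the problem and all names are our illustration, not from the book):

import random

def f(s):                        # fitness: number of 1-bits
    return sum(s)

def neighbours(s):               # flip each bit once
    return [s[:i] + [1 - s[i]] + s[i + 1:] for i in range(len(s))]

def hill_climb(n=20):
    s = [random.randint(0, 1) for _ in range(n)]   # initial solution (random choice)
    while True:
        better = [t for t in neighbours(s) if f(t) > f(s)]
        if not better:                             # s is a local optimum
            return s
        s = max(better, key=f)                     # choose an improving s'

print(f(hill_climb()))           # one-max has no local optima, so this prints 20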

7 (1) Randomized hill climbing

determine initial solution s; bestS = s
while termination condition not satisfied
    with probability p
        choose neighbour s' at random
    else  // climb if possible
        choose s' with f(s') > f(s)
    s = s'
    if (f(s) > f(bestS)) bestS = s
return bestS
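A direct translation of this loop into Python (a sketch; f, neighbours, p and the iteration bound are assumptions supplied by the caller):

import random

def randomized_hill_climb(f, neighbours, s, p=0.3, max_iters=10_000):
    bestS = s
    for _ in range(max_iters):                 # termination condition
        if random.random() < p:
            s2 = random.choice(neighbours(s))  # random neighbour
        else:                                  # climb if possible
            better = [t for t in neighbours(s) if f(t) > f(s)]
            s2 = max(better, key=f) if better else s
        s = s2
        if f(s) > f(bestS):
            bestS = s
    return bestS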

8 (1) Randomized hill climbing
M = (Z, z0, M, m0, Δ, σZ, σΔ, ΤZ, ΤΔ)
• Z: {z0, z1, z2}
• M: {m0}
• Δ ⊆ Z×Z: {(z0, z1), (z1, z1), (z0, z2), (z1, z2), (z2, z1), (z2, z2)}
• σZ: {random choice, select better neighbour, select any neighbour}
• σΔ: {Prob(p), Prob(1-p)}
• ΤZ: ΤZ(z0) = random choice, ΤZ(z1) = select better neighbour, ΤZ(z2) = select any neighbour
• ΤΔ: ΤΔ((z0, z1)) = Prob(p), ΤΔ((z1, z1)) = Prob(p), ΤΔ((z0, z2)) = Prob(1-p), ΤΔ((z1, z2)) = Prob(1-p), ΤΔ((z2, z1)) = Prob(p), ΤΔ((z2, z2)) = Prob(1-p)

[diagram: z0, z1, z2 with Prob(p) transitions into z1 and Prob(1-p) transitions into z2]

9 (2) Variable neighbourhood*

determine initial solution s
i = 1
repeat
    choose neighbour s' in Ni(s) with max f(s')
    if (f(s') > f(s))
        s = s'
        i = 1      // restart in first neighbourhood
    else
        i = i + 1  // go to next neighbourhood
until i > iMax
return s

*example using memory to track the neighbourhood definition
[diagram: z0 → z1 with Prob(1), i = 1; self-loops on z1: NewBest(T): i = 1, NewBest(F): i++]
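The same loop in Python (a sketch; Ns is a list of neighbourhood functions N1..NiMax, each assumed to return a non-empty list of candidate solutions):

def variable_neighbourhood(f, Ns, s):
    i = 0
    while i < len(Ns):
        s2 = max(Ns[i](s), key=f)    # best neighbour in N_{i+1}(s)
        if f(s2) > f(s):
            s, i = s2, 0             # improvement: restart in first neighbourhood
        else:
            i += 1                   # no improvement: go to next neighbourhood
    return s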

10 Hoos and Stützle terminology
transitions:
• Det: deterministic
• CDet(R), CDet(not R): conditional deterministic (on condition R)
• Prob(p), Prob(1-p): probabilistic
• CProb(not R, p), CProb(not R, 1-p): conditional probabilistic

11 Hoos and Stützle terminology
search subroutines (z states):
• RP: random pick (usual start)
• RW: random walk (any neighbour)
• BI: best in neighbourhood

12 Some examples
[state diagrams]
1. RP → BI, with transitions Det, CDet(not R) and CDet(R)
2. RP, BI and RW, with transitions Prob(p), Prob(1-p), CProb(not R, p), CProb(not R, 1-p) and CDet(R)

13 Simulated annealing
• metaphor: slow cooling of liquid metals to allow the crystal structure to align properly
• "temperature" T is slowly lowered to reduce random movement of solution s in the solution space

14 Simulated annealing

determine initial solution s; bestS = s
T = T0
while termination condition not satisfied
    choose s' in N(s) probabilistically
    if (s' is "acceptable")  // function of T and f(s')
        s = s'
    if (f(s) > f(bestS)) bestS = s
    update T
return bestS

[diagram: RP → SA(T), with Det: T = T0 and a Det: update(T) self-loop]
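The loop in runnable form (a sketch: the exponential acceptance rule and geometric cooling are standard choices we assume here, not necessarily the book's):

import math, random

def simulated_annealing(f, neighbours, s, T0=10.0, alpha=0.95, steps=10_000):
    bestS, T = s, T0
    for _ in range(steps):
        s2 = random.choice(neighbours(s))     # choose s' in N(s) probabilistically
        delta = f(s2) - f(s)
        # "acceptable": always if better, else with probability exp(delta/T)
        if delta > 0 or random.random() < math.exp(delta / T):
            s = s2
        if f(s) > f(bestS):
            bestS = s
        T = max(alpha * T, 1e-9)              # update T (slow cooling)
    return bestS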

15 Accepting a new solution
• acceptance is more likely if f(s') > f(s)
• as execution proceeds, the probability of accepting an s' with f(s') < f(s) decreases (the search becomes more like hill climbing)
(this slide repeats the pseudocode from slide 14)

16 The acceptance function
[plot: acceptance probability curves as T evolves]
As T decreases, a worsening move (f(s') − f(s) < 0) is accepted with shrinking probability, typically p = exp((f(s') − f(s))/T).
*sometimes p = 1 when f(s') − f(s) > 0
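A few concrete numbers (assuming the exponential rule just mentioned) show how a worsening move with f(s') − f(s) = −2 fares as T drops:

import math

for T in (10.0, 1.0, 0.1):
    print(T, math.exp(-2 / T))
# T = 10.0 -> p ≈ 0.82   (random movement is tolerated)
# T = 1.0  -> p ≈ 0.14
# T = 0.1  -> p ≈ 2e-9   (essentially hill climbing)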

17 Simulated annealing with SAT (algorithm SA-SAT, p. 123)
propositions: P1, …, Pn
expression: F = D1 ∧ D2 ∧ … ∧ Dk
recall CNF: each clause Di is a disjunction of propositions and negated propositions, e.g., Px ∨ ~Py ∨ Pz ∨ ~Pw
fitness function: number of true clauses
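The fitness function is easy to state in code (a sketch with our own clause encoding: a literal +i means Pi, −i means ~Pi):

def fitness(cnf, truth):
    # number of clauses with at least one true literal
    return sum(
        any(truth[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# F = (P1 v ~P2) ^ (P3 v ~P4) with P1..P4 = T, F, F, F:
cnf = [[1, -2], [3, -4]]
truth = {1: True, 2: False, 3: False, 4: False}
print(fitness(cnf, truth))   # 2: both clauses are satisfied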

18 Inner iteration — SA(T) in GLSM

assign random truth set (e.g., T F F T)
repeat for i = 1 to 4
    flip truth of prop i
    evaluate
    decide to keep the changed value (or not)
reduce T

example trace from the slide: TFFT → FFFT (flip P1, kept) → FTFT (flip P2, rejected, back to FFFT) → FFTT (flip P3, kept) → FFTF (flip P4, rejected, final FFTT)

[diagram: RP → SA(T), with Det: T = T0 and a Det: update(T) self-loop]

19 Tabu search (taboo)
• always moves to the best available solution, but some choices (neighbours) are ineligible (tabu)
• ineligibility is based on recent moves: once an edge is introduced by a move, removing it is tabu for a few iterations
• the search does not stop at a local optimum

20 Symmetric TSP example
• set of 9 cities {A,B,C,D,E,F,G,H,I}
• neighbour definition based on 2-opt* (27 neighbours)
• current sequence: B - E - F - H - I - A - D - C - G - B
• move to 2-opt neighbour: B - D - A - I - H - F - E - C - G - B
• edges B-D and E-C are now tabu, i.e., the next 2-opt swaps cannot involve these edges

*the example in the book uses 2-swap, p. 131
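The 2-opt move itself is a segment reversal; this sketch (indexing is ours) reproduces the slide's move:

def two_opt(tour, i, j):
    # remove edges (tour[i], tour[i+1]) and (tour[j], tour[j+1]);
    # reversing tour[i+1..j] reconnects the tour with new edges
    # (tour[i], tour[j]) and (tour[i+1], tour[j+1])
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

t = list("BEFHIADCG")                    # B-E-F-H-I-A-D-C-G(-B)
print("-".join(two_opt(t, 0, 6)))        # B-D-A-I-H-F-E-C-G, as on the slide
                                         # new edges: B-D and E-C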

21 TSP example, algorithm p. 133
how long will an edge be tabu? 3 iterations
how to track and restore eligibility? a data structure storing the tabu status of the 9·8/2 = 36 edges

B - D - A - I - H - F - E - C - G - B

recency-based memory (iterations remaining for each edge):

      A  B  C  D  E  F  G  H
  I   0  0  0  0  0  0  0  2
  H   0  0  0  0  0  1  0
  G   0  1  0  0  0  0
  F   0  0  0  0  0
  E   0  0  3  0
  D   2  3  0
  C   0  0
  B   0
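One natural data structure (our sketch) is a dict from an undirected edge to the number of iterations it remains tabu:

TABU_TENURE = 3
tabu = {}                                    # e.g. frozenset({'B','D'}) -> 3

def make_tabu(a, b):
    tabu[frozenset((a, b))] = TABU_TENURE

def is_tabu(a, b):
    return tabu.get(frozenset((a, b)), 0) > 0

def tick():                                  # call once per iteration
    for e in list(tabu):
        tabu[e] -= 1
        if tabu[e] == 0:
            del tabu[e]                      # the edge becomes eligible again

make_tabu("B", "D"); make_tabu("E", "C")
print(is_tabu("D", "B"))                     # True: the lookup is symmetric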

22 procedure tabu search in TSP

begin
    repeat until a condition satisfied
        generate a tour
        repeat until a (different) condition satisfied
            identify a set T of 2-opt moves
            select the best admissible (not tabu) move from T
            make the move
            update the tabu list and other variables
            if the new tour is best-so-far, update the best tour information
end

This algorithm repeatedly starts from a random tour of the cities. From each starting tour it repeatedly moves to the best admissible neighbour; it does not stop at a hilltop but continues to move. A sketch of this loop in code follows below.
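Here is a self-contained sketch of the whole procedure for the symmetric TSP (loop bounds, tenure handling and the expiry-step bookkeeping are our illustrative choices, not the book's; dist maps each frozenset edge to its length):

import random

def tour_length(dist, t):
    return sum(dist[frozenset((t[k - 1], t[k]))] for k in range(len(t)))

def tabu_search_tsp(dist, cities, restarts=5, iters=100, tenure=3):
    best = None
    for _ in range(restarts):                    # outer repeat: generate a tour
        tour, tabu = random.sample(cities, len(cities)), {}
        for step in range(iters):                # inner repeat: keep moving
            n, moves = len(tour), []
            for i in range(n - 1):
                for j in range(i + 2, n - (1 if i == 0 else 0)):
                    # the move removes edges (tour[i],tour[i+1]) and (tour[j],tour[(j+1)%n])
                    old1 = frozenset((tour[i], tour[i + 1]))
                    old2 = frozenset((tour[j], tour[(j + 1) % n]))
                    if tabu.get(old1, 0) > step or tabu.get(old2, 0) > step:
                        continue                 # would remove a tabu edge: not admissible
                    t2 = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                    new1 = frozenset((tour[i], tour[j]))
                    new2 = frozenset((tour[i + 1], tour[(j + 1) % n]))
                    moves.append((tour_length(dist, t2), t2, new1, new2))
            if not moves:
                break
            _, tour, new1, new2 = min(moves, key=lambda m: m[0])
            tabu[new1] = tabu[new2] = step + tenure      # new edges stay tabu until this step
            if best is None or tour_length(dist, tour) < tour_length(dist, best):
                best = tour                      # best-so-far bookkeeping
    return best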

23 applying 2-opt with tabu
• from the table, some edges of the current tour B - D - A - I - H - F - E - C - G - B are tabu
• 2-opt can only consider removing:
  - A-I and F-E
  - A-I and C-G
  - F-E and C-G

      A  B  C  D  E  F  G  H
  I   0  0  0  0  0  0  0  2
  H   0  0  0  0  0  1  0
  G   0  1  0  0  0  0
  F   0  0  0  0  0
  E   0  0  3  0
  D   2  3  0
  C   0  0
  B   0

24 importance of parameters
• once an algorithm is designed, it must be "tuned" to the problem:
  - selecting the fitness function and neighbourhood definition
  - setting values for parameters
• this is usually done experimentally

25 procedure tabu search in TSP

begin
    repeat until a condition satisfied
        generate a tour
        repeat until a (different) condition satisfied
            identify a set T of 2-opt moves
            select the best admissible (not tabu) move from T
            make the move
            update the tabu list and other variables
            if the new tour is best-so-far, update the best tour information
end

Choices in "tuning" the algorithm:
• what conditions control the repeated executions: counted loops, fitness threshold, stagnancy (no improvement)
• how to generate the first tour (random, greedy, "informed")
• how to define the neighbourhood
• how long to keep edges on the tabu list
• other variables: e.g., for detecting stagnancy

