Presentation on theme: "CS Fall 2016 (Shavlik©), Lecture 9, Week 5"— Presentation transcript:

1 CS 540 - Fall 2016 (Shavlik©), Lecture 9, Week 5
Today’s Topics
- Tradeoffs in BFS, DFS, and BEST
- Dealing with Large OPEN and CLOSED
- A Clever Combo: Iterative Deepening
- Beam Search (BEAM)
- Hill Climbing (HC)
- HC with Multiple Restarts
- Simulated Annealing (SA)
10/4/16 CS 540 Fall 2016 (Shavlik©), Lecture 9, Week 5

2 CS 540 - Fall 2016 (Shavlik©), Lecture 9, Week 5
DFS - remember we fill out line n+1 while working on line n

Step#  OPEN              CLOSED              X    CHILDREN         Remaining CHILDREN
1      { S }             { }                 S    { SS, BS, CS }   { BS, CS }
2      { BS, CS }        { S }               BS   { DB }           { DB }
3      { DB, CS }        { S, BS }           DB   { CD, ED }       { ED }
4      { ED, CS }        { S, BS, DB }       ED   { GE }           { GE }
5      { GE, CS }        { S, BS, DB, ED }   GE   DONE

Notice we did not get the shortest path here (BFS did, though).
We might want to also record the PARENT of each node reached, so we can easily extract the path from START to GOAL. This was done above using superscripts (each node's superscript is its parent).
NOTE: THIS SLIDE WAS ADDED TO LECTURE 8
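The trace above can be sketched in code. This is a minimal illustration, not the course's code; the graph below is an assumption reconstructed from the superscripts in the trace (e.g., DB means D was generated as a child of B).

```python
# Assumed graph, reconstructed from the slide's trace.
GRAPH = {'S': ['S', 'B', 'C'], 'B': ['D'], 'C': [],
         'D': ['C', 'E'], 'E': ['G'], 'G': []}

def dfs(start, goal):
    open_list = [(start, None)]          # (node, parent) pairs; front = top of stack
    closed, parent = [], {}
    while open_list:
        node, par = open_list.pop(0)     # DFS removes from the FRONT of OPEN
        parent[node] = par
        if node == goal:                 # extract the path via the recorded parents
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        closed.append(node)
        seen = closed + [n for n, _ in open_list]
        children = [c for c in GRAPH[node] if c not in seen]   # drop already-seen children
        open_list = [(c, node) for c in children] + open_list  # push children on the front
    return None
```

Running `dfs('S', 'G')` reproduces the trace, returning the (non-shortest) path S, B, D, E, G.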

3 Tradeoffs (d = tree depth, b = branching factor)
Method   Positives                                               Negatives
Breadth  Guaranteed to find soln if one exists (all possible     OPEN can become big, O(b^d);
         solutions generated for each depth before increasing    can be slow
         depth); finds shortest path (in #arcs traversed)
Depth    OPEN grows slower, O(b × d); might find a long          Might not get shortest solution path;
         solution quickly                                        can get stuck in infinite spaces
Best     Provides means of using domain knowledge                Requires a good heuristic function

4 Memory Needs for OPEN (d = tree depth, b = branching factor)
Breadth: level i holds b^i nodes (1, b, b², b³, …, b^d), so OPEN can reach O(b^d) (the yellow nodes in the figure are in OPEN).
Depth: each level has b − 1 nodes in OPEN (the last level has b), so O(b × d).

5 DFS with Iterative Deepening
Combines strengths of BFS and DFS
Algo (Fig 3.18):
  let k = 0
  loop
    let OPEN = { startNode }        // Don’t use CLOSED (the depth limit handles infinite loops)
    do DFS but limit depth to k     // If depth = k, don’t generate children
    if goal node found, return solution
    else if we never reached the depth bound of k, return FAIL   // Searched finite space fully
    else increment k
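The loop above can be sketched as follows. This is an illustrative rendering of Fig 3.18's idea, not the text's code; the `children` callable and the (found, hit_bound) bookkeeping are choices made here.

```python
def depth_limited_dfs(node, goal, limit, children):
    """DFS to depth `limit`; returns (found_goal, hit_depth_bound)."""
    if node == goal:
        return True, False
    if limit == 0:
        # We truncated the search here only if this node actually has children.
        return False, bool(children(node))
    found = hit = False
    for c in children(node):
        f, h = depth_limited_dfs(c, goal, limit - 1, children)
        found, hit = found or f, hit or h
        if found:
            return True, hit
    return False, hit

def iterative_deepening(start, goal, children):
    k = 0
    while True:                      # no information saved between iterations
        found, hit_bound = depth_limited_dfs(start, goal, k, children)
        if found:
            return k                 # depth at which the goal was found
        if not hit_bound:
            return None              # searched a finite space fully: FAIL
        k += 1
```

On a small tree such as `{'S': ['A', 'B'], 'A': ['G'], 'B': [], 'G': []}`, searching for G succeeds at k = 2, and searching for a missing node terminates with FAIL once no iteration hits the depth bound.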

6 Iterative Deepening Visualized
See Figure 3.19 of text (use blackboard) WE SAVE NO INFORMATION BETWEEN ITERATIONS OF THE LOOP! RECOMPUTE, rather than STORE (a space-time tradeoff; common in CS) At first glance, seems stupid, but … 10/4/16 CS Fall 2016 (Shavlik©), Lecture 9, Week 5

7 Computing the Excess Work
Number of nodes generated in a depth-limited search to depth d with branching factor b:
  totalWork(d, b) = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
Number of nodes generated in an iterative deepening search:
  iterDeep_totalWork(d, b) = Σ totalWork(i, b), summed for i from 0 to d

8 Example: Computing Excess Work
totalWork(5, 10) = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
iterDeep_totalWork(5, 10) = totalWork(0, 10) + totalWork(1, 10) + totalWork(2, 10) + totalWork(3, 10) + totalWork(4, 10) + totalWork(5, 10)
  = 1 + 11 + 111 + 1,111 + 11,111 + 111,111 = 123,456
Excess work = (123,456 − 111,111) / 111,111 = 11% - not bad!
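The arithmetic above is easy to check directly; the two helpers below just transcribe the formulas from the previous slide.

```python
def total_work(d, b):
    # Nodes generated by one depth-limited search to depth d: b^0 + b^1 + ... + b^d
    return sum(b ** i for i in range(d + 1))

def iter_deep_total_work(d, b):
    # Iterative deepening repeats every shallower search: sum totalWork(i, b) for i = 0..d
    return sum(total_work(i, b) for i in range(d + 1))

print(total_work(5, 10))            # 111111
print(iter_deep_total_work(5, 10))  # 123456
```

The excess, (123,456 − 111,111) / 111,111, is about 11%: the repeated work is dominated by the final, deepest iteration.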

9 BEAM Search: Another Way to Deal with Large Spaces
Simple idea: never let OPEN get larger than some constant, called the ‘beam width’
Insert children in OPEN, then reduce OPEN to size ‘beam width’ (only need 1 new line in our basic code: open ← discardFromBackEndIfTooLong(open, beamWidth))
Makes most sense with BEST-first search, since the most promising nodes are at the front of OPEN
The above is a variation of what the text calls “local beam search” (pg )
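A minimal sketch of the idea, assuming a best-first search ordered by a heuristic `h` (lower = more promising); the trimming line plays the role of discardFromBackEndIfTooLong, and all names here are illustrative.

```python
import heapq

def beam_search(start, goal, children, h, beam_width):
    # Best-first search whose OPEN list is trimmed to `beam_width` after each expansion.
    open_list = [(h(start), start)]
    closed = set()
    while open_list:
        _, node = heapq.heappop(open_list)      # most promising node is at the front
        if node == goal:
            return node
        closed.add(node)
        for c in children(node):
            if c not in closed:
                heapq.heappush(open_list, (h(c), c))
        # Discard from the back end if too long: keep only the beam_width best nodes.
        open_list = heapq.nsmallest(beam_width, open_list)
        heapq.heapify(open_list)
    return None
```

Note the risk named in the summary slide: if the only path to the goal runs through a node trimmed off the back of OPEN, BEAM never finds it.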

10 What if No Explicit Goal Test?
Sometimes we don’t have an explicit description of the GOAL
If so, we simply aim to maximize (or minimize) the scoring function
E.g., design a factory with maximal expected profit, or with cheapest assembly cost
When there is no explicit goal, assume goal?(X) always returns FALSE

11 CS 540 - Fall 2016 (Shavlik©), Lecture 9, Week 5
Hill Climbing (HC)
Only keep ONE child in OPEN, and ONLY IF that child has a better score than the current node (recall the ‘greedy’ d-tree pruning algo)
Like BEAM with beam-width = 1, but don’t keep nodes worse than the current one
Will stop at LOCAL maxima rather than GLOBAL
Sometimes we do ‘valley [gradient] descending’ if lower scores are better, but the ideas are identical
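A minimal sketch of hill climbing, assuming callables `neighbors` and `score` (both names chosen here for illustration):

```python
def hill_climb(start, neighbors, score):
    # Repeatedly move to the best-scoring neighbor, but only if it strictly
    # beats the current node; otherwise we are at a local maximum and stop.
    current = start
    while True:
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current           # no strictly better child: local maximum
        current = best
```

For example, climbing the score −(x − 3)² over the integers from start 0 walks 0 → 1 → 2 → 3 and stops at the maximum x = 3.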

12 CS 540 - Fall 2016 (Shavlik©), Lecture 9, Week 5
HC with Multiple Restarts (simple but effective, plus runs in parallel)
For some tasks, we can start in various initial states (eg, slide the 8-puzzle pieces randomly for 100 moves)
  Repeat N times
    Choose a random initial state
    Do HC; record score and final state if best so far
  Return best state found
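The restart loop can be sketched as follows; the embedded hill climber, the `random_start` callable, and the seeding are illustrative choices, not prescribed by the slide.

```python
import random

def hill_climb(state, neighbors, score):
    # Plain hill climbing: stop at the first local maximum.
    while True:
        best = max(neighbors(state), key=score, default=state)
        if score(best) <= score(state):
            return state
        state = best

def hc_with_restarts(random_start, neighbors, score, n=10, seed=0):
    # N times: choose a random initial state, hill climb, keep the best final state.
    rng = random.Random(seed)
    best = None
    for _ in range(n):
        final = hill_climb(random_start(rng), neighbors, score)
        if best is None or score(final) > score(best):
            best = final
    return best
```

On a landscape with two peaks, each restart ends on whichever peak its basin contains, and the loop returns the better of the peaks it happened to reach.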

13 Simulated Annealing (SA)
HC but sometimes allow downhill moves; over time, the prob of allowing downhill moves gets smaller
  let Temperature = 100
  X = StartNode                       // Call the current node X for short
  LOOP
    if X is a goal node or Temperature = 0, return X
    randomly choose a neighbor, Y
    if score(Y) > score(X), move to Y  // Accept since uphill
    else with prob e^((score(Y) – score(X)) / Temperature), go to Y
    reduce Temperature                 // Need to choose a ‘cooling schedule’
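A sketch of the loop above; the geometric cooling schedule (`temp *= cooling`), the stopping threshold, and the seeding are illustrative choices, since the slide leaves the schedule open.

```python
import math
import random

def simulated_annealing(start, neighbors, score, temp=100.0, cooling=0.95, seed=0):
    # Uphill moves are always accepted; downhill moves are accepted with
    # probability e^((score(Y) - score(X)) / Temperature), which shrinks as we cool.
    rng = random.Random(seed)
    x = start
    while temp > 1e-3:                 # stand-in for "Temperature = 0"
        y = rng.choice(neighbors(x))
        delta = score(y) - score(x)
        if delta > 0 or rng.random() < math.exp(delta / temp):
            x = y
        temp *= cooling                # one (illustrative) cooling schedule
    return x
```

Early on, temp is large relative to |delta|, so almost any move is accepted (a random walk); near the end, the walk behaves like plain hill climbing.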

14 CS 540 - Fall 2016 (Shavlik©), Lecture 9, Week 5
SA Example (let Temp = 10; scores NEGATED since originally lower was better)
  Start: score = −9    B: score = −11    C: score = −8    D: score = −4
Assume at Start and randomly choose B. What is the prob we will move to B?
  Prob = e^((−11 − (−9)) / 10) = e^(−0.2) = 0.82
Assume at Start and randomly choose C. What is the prob we will move to C?
  Prob = 1.0 since an UPHILL (ie, good) move
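The two worked probabilities can be checked with a one-line helper (the function name is ours):

```python
import math

def accept_prob(score_x, score_y, temp):
    # Probability SA moves from X to Y: 1 for an uphill move,
    # e^((score(Y) - score(X)) / Temp) for a downhill one.
    return 1.0 if score_y > score_x else math.exp((score_y - score_x) / temp)

print(round(accept_prob(-9, -11, 10), 2))  # Start -> B: 0.82
print(accept_prob(-9, -8, 10))             # Start -> C: 1.0
```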

15 CS 540 - Fall 2016 (Shavlik©), Lecture 9, Week 5
Case Analysis of SA
Temp >> | score(Y) − score(X) |:  prob ≈ e^0 = 1, so most moves accepted when temp is high
Temp ≈ | score(Y) − score(X) |:   prob ≈ e^(−1) ≈ 0.37   // Since score(Y) < score(X)
Temp << | score(Y) − score(X) |:  prob ≈ e^(−∞) = 0, so few moves accepted when temp is low

16 Dealing with Large OPEN Lists
Iterative Deepening
- Keep OPEN small by doing repeated work
- Still get shortest solution (in #arcs)
BEAM
- Limit OPEN to a max size (the beam-width)
- Might discard the best (or only!) solution
HC (Hill Climbing)
- Only go to better-scoring nodes
- Good choice when no explicit GOAL test
- Stops at local (rather than global) optimum
HC with Random Restarts
- K times start in a random state, then go uphill; keep the best
- Might not find the global optimum, but works well in practice
SA (Simulated Annealing)
- Always accept good moves; with some prob make a bad move
- In the theoretical limit, finds the global optimum

17 If OPEN Can Get Too Large, What about CLOSED?
If the branching factor is b, we add b items to OPEN whenever one item is moved from OPEN to CLOSED - so OPEN grows faster
Items in CLOSED can be hashed, approximated, etc, while items in OPEN need to store more info
But CLOSED growing too large can still be a problem, so often it isn’t used and we live with some repeated work and the risk of infinite loops
CLOSED not needed in Iter. Deepening and HC; WHY?

18 CS 540 - Fall 2016 (Shavlik©), Lecture 9, Week 5
(Partially) Wrap Up
- Search spaces can grow rapidly
- Sometimes we give up on optimality and seek satisfactory solutions
- But sometimes we’re unaware of more powerful search methods and are too simplistic!
- Various ‘engineering tradeoffs’ exist, and the best design is problem specific
- As technology changes, choices change

