1 CSM6120 Introduction to Intelligent Systems
Informed search

2 Quick review
Problem definition: initial state, goal state, state space, actions, goal test, path cost function
Factors: branching factor, depth of the shallowest solution, maximum depth of any path in the search space, complexity, etc.
Uninformed techniques: BFS, DFS, depth-limited search, UCS, IDS

3 Informed search
What we'll look at:
Heuristics
Hill-climbing
Best-first search
Greedy search
A* search
By 'informed search' we mean search that uses heuristics. Some algorithms use information to guide the search without being 'informed' in this sense: uniform-cost search, for example, uses only path-cost information, which is exact and known. Heuristics are inexact: rules of thumb that guide the search but may not be correct all the time.

4 Heuristics
A heuristic is a rule or principle used to guide a search. It provides a way of giving additional knowledge of the problem to the search algorithm, and must provide a reasonably reliable estimate of how far a state is from a goal, or of the cost of reaching the goal via that state. A heuristic evaluation function is a way of calculating or estimating such distances/costs.
Why do we need heuristics?
Large state spaces of possible solutions
Problems of exponential complexity
The need for practical ways of finding a solution

5 Heuristics and algorithms
A correct algorithm will find you the best solution, given good data and enough time; it is precisely specified.
A heuristic gives you a workable solution in a reasonable time; it gives a guided or directed solution.

6 Evaluation function
There is an infinite number of possible heuristics. The criterion is that a heuristic returns an assessment of a point in the search. If an evaluation function were perfectly accurate, it would lead directly to the goal; more realistically, this usually ends up as "seemingly-best search". Traditionally the lowest value after evaluation is chosen, as we usually want the lowest cost or the nearest state.

7 Heuristic evaluation functions
An estimate of the expected utility value from a current position, e.g. a value for the pieces left in chess: a way of judging the value of a position. Humans have to do this, as we do not evaluate all possible alternatives. These heuristics usually come from years of human experience. The performance of a game-playing program is heavily dependent on the quality of the function.

8 Heuristics? A 'largest-first' heuristic works very well for this kind of problem. The smaller pieces are better left until later, as they fit more easily into the remaining space. A search that tried putting the smallest pieces in first would take a long time to reach a solution.

9 Heuristics?

10 Heuristic evaluation functions
Must agree with the ordering a utility function would give at terminal states (leaf nodes)
Computation must not take long
For non-terminal states, the evaluation function must correlate strongly with the actual chance of 'winning'
The values can be learned using machine learning techniques

11 Heuristics for the 8-puzzle
Number of tiles out of place (h1)
Manhattan distance (h2): the sum of the distances of each tile from its goal position. Tiles can only move up/down or left/right, so distances are measured in 'city blocks'.
[figure: the 8-puzzle goal configuration, tiles 1-8]

12 The 8-puzzle
Using a heuristic evaluation function:
h2(n) = sum of the distances of each tile from its goal position
e.g. for the first configuration, tile '2' is 1 move away from its intended position, tile '8' is 2 moves away, tile '3' is in its intended position (distance 0), etc.
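
As a concrete illustration, here is a minimal Java sketch of both heuristics. The encoding is an assumption for illustration only (a 9-element int array read row by row, with 0 for the blank); it is not taken from the course code.

    class EightPuzzleHeuristics {
        // h1: number of tiles out of place (the blank is not counted).
        static int h1(int[] state, int[] goal) {
            int misplaced = 0;
            for (int i = 0; i < 9; i++)
                if (state[i] != 0 && state[i] != goal[i]) misplaced++;
            return misplaced;
        }

        // h2: total Manhattan (city-block) distance of each tile from its goal square.
        static int h2(int[] state, int[] goal) {
            int total = 0;
            for (int i = 0; i < 9; i++) {
                int tile = state[i];
                if (tile == 0) continue;           // skip the blank
                int j = indexOf(goal, tile);       // where this tile should be
                total += Math.abs(i / 3 - j / 3)   // row distance
                       + Math.abs(i % 3 - j % 3);  // column distance
            }
            return total;
        }

        static int indexOf(int[] a, int v) {
            for (int i = 0; i < a.length; i++) if (a[i] == v) return i;
            return -1;
        }
    }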

13 Goal state and current states
[diagram: the goal state (tiles 1-8 in order) and two current states; for the first current state h1 = 1 and h2 = 1, for the second h1 = 5 and h2 = 7 (the sum of the individual tile distances)]
What data structure would you use? What do the initial and goal states look like in this representation?
Which uninformed search strategy would be the most appropriate, and why?
How do h1 and h2 compare? Is one better than the other?
What other heuristic could be used? Compare this with h1 and h2.

14 Search algorithms
Hill climbing
Best-first search
Greedy best-first search
A*

15 Iterative improvement
Consider all states laid out on the surface of a landscape; the height at any point corresponds to the result of the evaluation function. The idea of iterative improvement algorithms is to move around this landscape trying to find the lowest points (or the highest, depending on how the problem is defined), which are the optimal solutions. These algorithms usually keep track only of the current state and do not look ahead beyond its immediate neighbours.

16 Iterative improvement
Paths are typically not retained, so very little memory is needed
Move around the landscape trying to find the lowest valleys (optimal solutions), or the highest peaks if trying to maximise
Useful for hard, practical problems where the state description itself holds all the information needed for a solution
Finds reasonable solutions in a large or infinite state space
Examples of iterative improvement algorithms: hill-climbing and simulated annealing

17 Hill-climbing (greedy local)
Start with current-state = initial-state.
Until current-state = goal-state, OR there is no change in current-state, do:
a) Get the children of current-state and apply the evaluation function to each child
b) If one of the children has a better score, set current-state to the child with the best score
This loop moves in the direction of decreasing (increasing) value and terminates when a "dip" (or "peak") is reached. If more than one direction is best, the algorithm can choose at random. Consider all possible successors as "one step" from the current state on the landscape. At each iteration, go to:
The best successor (steepest descent)
Any downhill move (first choice)
Any downhill move, with steeper moves made more probable (stochastic)
All variations get stuck at local minima (a code sketch of the first variant follows below).
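
A minimal Java sketch of the steepest-descent variant described above. The generic state type and the successors/eval/isGoal parameters are illustrative assumptions; lower scores are better, matching the slides' convention.

    import java.util.List;
    import java.util.function.Function;
    import java.util.function.Predicate;
    import java.util.function.ToDoubleFunction;

    class HillClimbing {
        // Repeatedly move to the best-scoring child while it improves on the
        // current state; stop at a goal or at a local minimum.
        static <S> S hillClimb(S start, Function<S, List<S>> successors,
                               ToDoubleFunction<S> eval, Predicate<S> isGoal) {
            S current = start;
            while (!isGoal.test(current)) {
                S best = null;
                for (S child : successors.apply(current))
                    if (best == null
                            || eval.applyAsDouble(child) < eval.applyAsDouble(best))
                        best = child;
                // No child improves on the current state: stuck at a local minimum.
                if (best == null
                        || eval.applyAsDouble(best) >= eval.applyAsDouble(current))
                    return current;
                current = best;
            }
            return current;
        }
    }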

18 Hill-climbing (gradient ascent)
From Wikipedia: These are examples of gradient ascent (continuous version of hill-climbing)

19 Hill-climbing drawbacks
Local minima (maxima): local, rather than global, minima (maxima)
Plateau: an area of the state space where the evaluation function is essentially flat; the search will conduct a random walk
Ridges: cause problems when states along the ridge are not directly connected; the only choice at each point on the ridge requires uphill (downhill) movement. Ridge states are a special type of local minimum (maximum): the surrounding area is 'unfriendly' and difficult to escape from in single steps, so the path peters out when surrounded by ridges.
There are ways of trying to escape from these problems, such as random-restart hill-climbing (below, with a sketch after this slide), but none of them can ensure success.
Random-restart hill-climbing:
Conducts a series of hill-climbing searches
Starts each search at a randomly generated initial state
Saves the best result from any of the searches
Can use a fixed number of iterations, or continue until the best result does not change
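
Random-restart hill-climbing is then just a loop around the basic search. A sketch, reusing the hillClimb method from the previous code (the Supplier of random states is an assumed parameter; add import java.util.function.Supplier):

    // Run several independent hill-climbing searches from random initial
    // states and keep the best local result found.
    static <S> S randomRestart(int restarts, Supplier<S> randomState,
                               Function<S, List<S>> successors,
                               ToDoubleFunction<S> eval, Predicate<S> isGoal) {
        S best = null;
        for (int i = 0; i < restarts; i++) {
            // Each search starts from a fresh randomly generated state.
            S result = hillClimb(randomState.get(), successors, eval, isGoal);
            if (best == null
                    || eval.applyAsDouble(result) < eval.applyAsDouble(best))
                best = result;  // keep the best result seen so far
        }
        return best;
    }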

20 Best-first search
Like hill-climbing, but eventually tries all paths, as it keeps a list of nodes yet to be explored.
Start with priority-queue = initial-state.
While the priority-queue is not empty do:
a) Remove the best node from the priority-queue
b) If it is the goal node, return success; otherwise find its successors
c) Apply the evaluation function to the successors and add them to the priority-queue
Not an accurate name... expanding the best node first would be a straight march to the goal (= hill-climbing). General best-first search can use a (local) cost function: the cost of moving to a node, not the total cost of reaching the node from the start position (as in A* and UCS).

Search(Start, Goal_test)
  Open: priority_queue; Closed: hash_table
  enqueue(Start, Open, heuristic(Start))
  repeat
    if (empty(Open)) return fail
    Node = dequeue(Open)
    if (Goal_test(Node)) return Node
    for each Child of Node do
      if (not find(Child, Closed))
        enqueue(Child, Open, heuristic(Child))
        insert(Child, Closed)
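
The same pseudocode, sketched in Java. The generic node type and the goalTest/children/heuristic parameters are assumptions for illustration; Open is a priority queue ordered by heuristic value, Closed records nodes already seen.

    import java.util.*;
    import java.util.function.Function;
    import java.util.function.Predicate;
    import java.util.function.ToDoubleFunction;

    class BestFirst {
        // Direct translation of the pseudocode above.
        static <N> N search(N start, Predicate<N> goalTest,
                            Function<N, List<N>> children,
                            ToDoubleFunction<N> heuristic) {
            PriorityQueue<N> open =
                new PriorityQueue<>(Comparator.comparingDouble(heuristic));
            Set<N> closed = new HashSet<>();
            open.add(start);
            closed.add(start);
            while (!open.isEmpty()) {
                N node = open.poll();            // best node on the frontier
                if (goalTest.test(node)) return node;
                for (N child : children.apply(node))
                    if (closed.add(child))       // true only if not seen before
                        open.add(child);
            }
            return null;                         // fail: frontier exhausted
        }
    }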

21 Best-first example The frontier list (priority queue) is on the left; the explored list is on the right

22 Best-first search
Different best-first strategies have different evaluation functions. Some use heuristics only; others also use cost functions: f(n) = g(n) + h(n).
For Greedy search and A*, our heuristic is the heuristic function h(n) = the estimated cost of the cheapest path from node n to a goal node.
For now, we introduce the constraint that if n is a goal node, then h(n) = 0.
For a greedy search, the evaluation function is the heuristic function alone.

23 Greedy best-first search
Greedy BFS tries to expand the node that is 'closest' to the goal, assuming this will lead to a solution quickly: f(n) = h(n). Also known simply as "greedy search".
It differs from hill-climbing in that it allows backtracking.
Implementation:
Expand the "most desirable" node into the frontier queue
Sort the queue so that the most promising node (the lowest h(n)) is at the front
Choose the most promising node using the heuristic, h(n)
Greedy best-first search expands the node that appears to be closest to the goal. This is different from hill-climbing: hill-climbing is a put-your-eggs-in-one-basket attempt at getting to a solution/goal state. It keeps no record of other nodes, so no backtracking can take place. GBFS, on the other hand, operates like hill-climbing but keeps a record of unexplored nodes to try if the current path does not lead to a solution, or looks unlikely to lead to one.
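
In terms of the BestFirst sketch above, greedy search is just best-first with h as the whole evaluation function. A usage sketch, assuming a hypothetical Node type with isGoal() and children() methods and hSLD as a straight-line-distance function:

    // Greedy best-first = the BestFirst sketch ordered by the heuristic alone.
    Node result = BestFirst.search(start, Node::isGoal, Node::children, hSLD);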

24 Route planning: heuristic??
A reminder of UCS (which does not use heuristics). We're at Arad and want to get to Bucharest. Using UCS, we would visit Zerind next as this has the least cost, which is not the best move ultimately. Call this transition (A -> Z).
Expand the nodes at Arad, with their associated total path costs: (A -> Z) = 75, (A -> T) = 118, (A -> S) = 140
Priority queue: (A -> Z) = 75, (A -> T) = 118, (A -> S) = 140
Choose (A -> Z) as this is nearest. Remove it from the queue and add its children (nodes expanded at Z)...
Expand the nodes at Z: (A -> Z -> O) = 75 + 71 = 146
Priority queue: (A -> T) = 118, (A -> S) = 140, (A -> Z -> O) = 146
Choose (A -> T) as this is now the best option. Remove it from the queue and add its children...
Expand the nodes at T: (A -> T -> L) = 229
Priority queue: (A -> S) = 140, (A -> Z -> O) = 146, (A -> T -> L) = 229
Choose (A -> S) as this is now the nearest option. Remove it from the queue and add its children...
Expand the nodes at S: (A -> S -> F) = 239, (A -> S -> RV) = 220
Priority queue: (A -> Z -> O) = 146, (A -> S -> RV) = 220, (A -> T -> L) = 229, (A -> S -> F) = 239
Choose (A -> Z -> O). Remove it from the queue and add its children...
Expand the nodes at O: (A -> Z -> O -> S) = 297
Priority queue: (A -> S -> RV) = 220, (A -> T -> L) = 229, (A -> S -> F) = 239, (A -> Z -> O -> S) = 297

25 Route planning - GBFS

26 Greedy best-first search

27 Route planning

28 Greedy best-first search

29 Greedy best-first search
This happens to be the same search path that hill-climbing would produce, as there's no backtracking involved (a solution is found by expanding the first-choice node only, each time).

30 Greedy best-first search
Complete? No: GBFS can get stuck in loops (e.g. bouncing back and forth between cities)
Time complexity: O(b^m), but a good heuristic can give dramatic improvement
Space complexity: O(b^m), as it keeps all the nodes in memory
Optimal? No! (A - S - F - B = 450; a shorter journey is possible)
(m is the maximum depth of the search space)

31 Practical 2 Implement greedy best-first search for pathfinding
Look at code for AStarPathFinder.java

32 A* search
A* (A star) is the most widely known form of best-first search. It evaluates nodes by combining g(n) and h(n):
f(n) = g(n) + h(n), where
g(n) = the cost from the initial state to the current node n
h(n) = the estimated cost of the cheapest path from node n to a goal node
f(n) = the estimated total cost of the path through n: the evaluation function used to select a node for expansion (usually the lowest-cost node)
[diagram: start --g(n)--> n --h(n)--> goal]
If we set h(n) = 0 for every node n, then f(n) = g(n), so nodes are considered on the cost of reaching the current node alone. This is UCS!
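
A compact Java sketch of A* over an explicit graph. This is an illustration under assumptions (string node names, an adjacency map of step costs, h defined for every node), not the practical's AStarPathFinder.java. The frontier is ordered by f(n) = g(n) + h(n), and g is tracked per node.

    import java.util.*;

    class AStar {
        // graph: node -> (neighbour -> step cost); h: heuristic per node.
        // Returns the path start..goal, or null if no path exists.
        static List<String> search(Map<String, Map<String, Double>> graph,
                                   Map<String, Double> h,
                                   String start, String goal) {
            Map<String, Double> g = new HashMap<>();   // best known cost so far
            Map<String, String> parent = new HashMap<>();
            PriorityQueue<String> open = new PriorityQueue<>(
                Comparator.comparingDouble((String n) -> g.get(n) + h.get(n)));
            g.put(start, 0.0);
            open.add(start);
            while (!open.isEmpty()) {
                String n = open.poll();                // lowest f(n) first
                if (n.equals(goal)) {                  // rebuild path via parents
                    LinkedList<String> path = new LinkedList<>();
                    for (String p = goal; p != null; p = parent.get(p))
                        path.addFirst(p);
                    return path;
                }
                Map<String, Double> edges =
                    graph.getOrDefault(n, Collections.emptyMap());
                for (Map.Entry<String, Double> e : edges.entrySet()) {
                    double cost = g.get(n) + e.getValue();
                    if (cost < g.getOrDefault(e.getKey(),
                                              Double.POSITIVE_INFINITY)) {
                        g.put(e.getKey(), cost);       // cheaper path found
                        parent.put(e.getKey(), n);
                        open.remove(e.getKey());       // re-queue with updated f
                        open.add(e.getKey());
                    }
                }
            }
            return null;                               // no path to the goal
        }
    }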

33 A* search
When h(n) = h*(n), where h*(n) is the actual cost to the goal:
Only nodes on the correct path are expanded; the optimal solution is found
When h(n) < h*(n):
Additional nodes are expanded, but the optimal solution is still found
When h(n) > h*(n):
The optimal solution can be overlooked

34 Route planning - A*

35 A* search

36 A* search

37 A* search

38 A* search

39 A* search

40 A* search
Complete and optimal if h(n) does not overestimate the true cost of a solution through n.
Time complexity: exponential in [relative error of h x length of solution]; the better the heuristic, the better the time:
Best case: h is perfect, O(d)
Worst case: h = 0, O(b^d), the same as BFS and UCS
Space complexity: keeps all nodes in memory, saving them in case states repeat; this is O(b^d) or worse. A* usually runs out of space before it runs out of time.
Pruning eliminates many possibilities if f(n) is non-decreasing, but time complexity is still exponential, as is space (which will cause the real problems). Not practical for large-scale problems.

41 A* exercise
Node  Coordinates  SL distance to K
A     (5,9)        8.0
B     (3,8)        7.3
F     (4,5)        4.1
G     (6,5)        4.1
H     (3,3)        2.8
I     (5,3)        2.0
J     (7,2)        2.2
K     (5,1)        0.0
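
For checking the table, the straight-line (Euclidean) distance from a node's coordinates to K at (5,1) is a one-liner; the transcript only gave the values for A and B, so the remaining entries in this table and in the GBFS table below were recomputed this way, rounded to one decimal place.

    // Straight-line (Euclidean) distance between a node at (x, y) and the
    // goal at (gx, gy); for these exercises the goal K is at (5, 1).
    static double slDistance(double x, double y, double gx, double gy) {
        return Math.hypot(x - gx, y - gy);
    }
    // e.g. slDistance(3, 8, 5, 1) = 7.28..., matching the 7.3 given for B.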

42 Solution to A* exercise

43 GBFS exercise
Node  Coordinates  Distance
A     (5,9)        8.0
B     (3,8)        7.3
H     (3,3)        2.8
I     (5,3)        2.0
J     (7,2)        2.2
K     (5,1)        0.0

44 Solution

45 To think about... f(n) = g(n) + h(n)
What algorithm does A* emulate if we set:
h(n) = -g(n) - depth(n)?
h(n) = 0?
Can you make A* behave like breadth-first search?
With h(n) = -g(n) - depth(n): f(n) = g(n) - g(n) - depth(n) = -depth(n). Since depth(n) is the depth from the start to node n, -depth(n) reverses the ordering, so we prefer deeper nodes: this is DFS.
With h(n) = 0, A* becomes uniform-cost search.
To make A* behave like breadth-first search, use g(n) = depth(n) and h(n) = 0. Then f(n) = depth(n), which means that all paths at a given depth are considered to have the same cost, so A* selects the shallowest node for expansion at each step. (See the comparator sketch after this slide.)
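
These special cases are easy to see in code: with a frontier ordered by f(n), each choice of h turns A* into a different classic search. The Node interface with g() and depth() accessors is a hypothetical stand-in for a real node type.

    import java.util.Comparator;

    class FVariants {
        interface Node { double g(); double depth(); }  // hypothetical accessors

        // h(n) = 0, so f(n) = g(n): uniform-cost search.
        static final Comparator<Node> ucs = Comparator.comparingDouble(Node::g);
        // h(n) = -g(n) - depth(n), so f(n) = -depth(n): deepest first, i.e. DFS.
        static final Comparator<Node> dfs =
            Comparator.comparingDouble(n -> -n.depth());
        // g(n) = depth(n) and h(n) = 0, so f(n) = depth(n): shallowest first, BFS.
        static final Comparator<Node> bfs =
            Comparator.comparingDouble(Node::depth);
    }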

46 A* search - Mario http://aigamedev.com/open/interviews/mario-ai/
Control of Super Mario by an A* search Source code available Various videos and explanations Written in Java

47 Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal.
Example: h_SLD(n), the straight-line distance heuristic (it never overestimates the actual road distance).
Theorem: if h(n) is admissible, A* is optimal (for tree search).
Admissibility definition: a heuristic is admissible with respect to a search method if it guarantees finding the optimal solution first, even when its value is only an estimate.
If h(n) is consistent (monotone), then the values of f(n) along any path are non-decreasing; in this case, A* is optimal for graph search. (Note that consistency implies admissibility.)

48 Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the frontier. Let n be an unexpanded node in the frontier such that n is on a shortest path to an optimal goal G.
f(G2) = g(G2), since h(G2) = 0 (true for any goal state)
g(G2) > g(G), since G2 is suboptimal
f(G) = g(G), since h(G) = 0
f(G2) > f(G), from the above

49 Optimality of A* (proof)
f(G2) > f(G) (from the previous slide)
h(n) ≤ h*(n), since h is admissible
g(n) + h(n) ≤ g(n) + h*(n)
f(n) ≤ f(G), since f(G) = g(G) = g(n) + h*(n)
Hence f(G2) > f(n), and A* will never select G2 for expansion.
The previous proof breaks down if we use GRAPH_SEARCH, because it can discard the optimal path to a repeated state if that path is not the first one generated. Solutions:
Extend GRAPH_SEARCH so that it discards the more expensive of any two paths found to the same node
Ensure that the optimal path to any repeated state is always the first followed; this requirement holds if we impose an extra requirement on h(n): consistency (monotonicity)
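
Putting the two slides together, the whole argument is one chain of comparisons (same assumptions: G optimal, G2 suboptimal, n an unexpanded frontier node on the optimal path, h admissible):

f(n) = g(n) + h(n) ≤ g(n) + h*(n) = g(G) = f(G) < g(G2) = f(G2)

So the frontier always contains a node n with f(n) < f(G2), and A*, which always expands the lowest-f node, reaches G before it would ever select G2.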

50 Heuristic functions
Admissible heuristic examples for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e. the number of squares each tile is from its desired location)
h1(S) = ?? h2(S) = ??
Both of these heuristics are admissible, in that they never overestimate the amount of work that needs to be done to reach the goal state.

51 Heuristic functions
Admissible heuristic examples for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e. the number of squares each tile is from its desired location)
h1(S) = 6, h2(S) = 14 (the sum of the individual tile distances)
However, even though the two heuristics are admissible, h2 is more useful than h1: it gives a better estimate of the amount of work to be done.

52 Heuristic functions: dominance/informedness
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for search.
Typical search costs (8-puzzle, d = solution length):
d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
d = 24: IDS ≈ 54,000,000,000 nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
For two A* heuristics h1 and h2, if h1(n) ≤ h2(n) for all states n in the search space, we say h2 dominates h1, or that heuristic h2 is more informed than h1.
Domination translates to efficiency: A* using h2 will never expand more nodes than A* using h1. Hence it is always better to use a heuristic function with higher values, provided it does not overestimate and the computation time for the heuristic is not too large.

53 Heuristic functions
Admissible heuristic examples for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e. the number of squares each tile is from its desired location)
h1(S) = 6, h2(S) = 14
But how do we come up with a heuristic?

54 Relaxed problems
Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem:
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution
If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution
Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.

55 Choosing a strategy
What sort of search problem is this?
How big is the search space?
What is known about the search space?
What methods are available for this kind of search?
How well does each method work for each kind of problem?

56 Which method?
Exhaustive search: for small finite spaces, when it is essential that the optimal solution is found
A*: for medium-sized spaces, if heuristic knowledge is available
Random search: for large, evenly distributed, homogeneous spaces
Hill climbing: for discrete spaces where a sub-optimal solution is acceptable
Tree search: when a lot is known about the search space (which is usually discrete), when a decision can be made at each step as to which direction to search, and when there is a distinct goal. Can be exhaustive, and therefore slow, except when the search space is relatively small.

57 Summary
What is search for?
How do we define/represent a problem?
How do we find a solution to a problem?
Are we doing this in the best way possible?
What if the search space is too large? We can use other approaches, e.g. GAs, ACO, PSO...

58 Finally
Try the A* exercise on the course website (solutions will be made available later)
Next seminar on Monday at 9:30am
See the algorithms in action: baur.de/cs.web.mashup.pathfinding.html

