1 Heuristics Some further elaborations of the art of heuristics and examples.

2 Goodness of heuristics
- If a heuristic is perfect, the search effort is proportional to the solution length: O(b*d), where b is the average branching factor and d the depth of the solution.
- If h1 and h2 are two admissible heuristics and h1(n) <= h2(n) everywhere, then A* with h2 will expand no more nodes than A* with h1.
- If a heuristic never overestimates by more than N % of the least cost, then the solution found is no more than N % over the optimal solution (e.g. if the optimal cost is 100 and the overestimate is at most 10, the returned solution costs at most 110).
- h(n) = 0 is a trivial admissible heuristic, and the worst of all.
- In theory we could always construct a perfect heuristic by performing a full breadth-first search from each node, but that would defeat the purpose.

3 Example of a good heuristic for the 8-puzzle
The transparency shows an A* search on the 8-puzzle using the evaluation function f(n) = g(n) + h(n), where h(n) = the number of misplaced tiles.
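
As an illustration, a minimal sketch of the misplaced-tiles count, assuming a board represented as an int[9] in row-major order with 0 for the blank (this representation is an assumption, not taken from the slides):

// Sketch: h(n) = number of misplaced tiles for the 8-puzzle.
// Assumes boards are int[9] in row-major order, 0 = blank (assumed representation).
public class MisplacedTiles {
    public static int h(int[] state, int[] goal) {
        int misplaced = 0;
        for (int i = 0; i < state.length; i++) {
            // The blank is usually not counted as a tile.
            if (state[i] != 0 && state[i] != goal[i]) {
                misplaced++;
            }
        }
        return misplaced;
    }
}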

4 [Transparency: A* search tree for the 8-puzzle with f(n) = g(n) + h(n), h(n) = number of misplaced tiles]

5 Graph Search %% Original version
function GRAPH-SEARCH(problem, fringe) returns a solution, or failure
  closed <- an empty set
  fringe <- INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node <- REMOVE-FIRST(fringe)
    if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
    if STATE[node] is not in closed then
      add STATE[node] to closed
      fringe <- INSERT-ALL(EXPAND(node, problem), fringe)

6 Heuristic Best First Search
%% f[problem] is the heuristic selection function of the problem
function BEST-FIRST-SEARCH(problem) returns a solution, or failure
  OPEN <- an empty set                                             // P1
  CLOSED <- an empty set                                           // P2
  OPEN <- INSERT(MAKE-NODE(INITIAL-STATE[problem]), OPEN)          // P3
  repeat
    if EMPTY?(OPEN) then return failure                            // P4
    best <- the lowest f-valued node on OPEN                       // P5
    remove best from OPEN                                          // P6
    if GOAL-TEST[problem](STATE[best]) then return SOLUTION(best)  // P7
    for all successors M of best
      if STATE[M] is not in CLOSED then                            // P8
        OPEN <- INSERT(M, OPEN)                                    // P9
    add STATE[best] to CLOSED                                      // P10

7 Heuristic Best First Search (A*) Java Pseudocode
// Instantiating OPEN, CLOSED
OPEN = new Vector<Node>();                          // P1
CLOSED = new Vector<Node>();                        // P2
// Placing initial node on OPEN
OPEN.add(0, initialnode);                           // P3

// After the initial phase, we enter the main loop of the A* algorithm
while (true) {
    // Check if OPEN is empty
    if (OPEN.size() == 0) {                         // P4
        System.out.println("Failure :");
        return;
    }
    // Locate next node on OPEN with the lowest f-value
    lowIndex = 0;                                   // P5
    low = OPEN.elementAt(0).f;
    for (int i = 0; i < OPEN.size(); i++) {
        number = OPEN.elementAt(i).f;
        if (number < low) {
            lowIndex = i;
            low = number;
        }
    }
    // Move selected node from OPEN to n            // P6
    n = OPEN.elementAt(lowIndex);
    OPEN.removeElement(n);
    // Successful exit if n is goal node            // P7
    if (n.equals(goalnode)) return;
    // Retrieve all possible successors of n
    M = n.successors();
    // Compute f-, g- and h-value for each successor
    for (int i = 0; i < M.size(); i++) {
        Node s = M.elementAt(i);
        s.g = n.g + s.cost;
        s.h = s.estimate(goalnode);
        s.f = s.g + s.h;
    }
    // Augmenting OPEN with suitable nodes from M
    for (int i = 0; i < M.size(); i++)
        // Insert node into OPEN if not on CLOSED   // P8, P9
        if (!CLOSED.contains(M.elementAt(i)))
            OPEN.add(0, M.elementAt(i));
    // Insert n into CLOSED
    CLOSED.add(0, n);                               // P10
}

8 AStar Java Code See exercise 7

9 Example Mouse King Problem
(1,5)               (5,5)
 ___________________
|   |   |   |   |   |
|   |   | X |   |   |
|   |   | X |   |   |
|   |   | X |   |   |
| M |   | X |   | C |
(1,1)               (5,1)

There is a 5x5 board. At (1,1) there is a mouse M, which can move like a king on a chess board. The target is a piece of cheese C at (5,1). There is, however, a barrier XXXX at (3,1)-(3,4) which the mouse cannot pass through, but the mouse's heuristic ignores this.

10 Heuristics for Mouse King
public class MouseKingState extends State {
    public int[] value;

    public MouseKingState(int[] v) { value = v; }

    public boolean equals(State state) { … }
    public String toString() { … }
    public Vector<State> successors() { … }

    public int estimate(State goal) {
        MouseKingState goalstate = (MouseKingState) goal;
        int[] goalarray = goalstate.value;
        int dx = Math.abs(goalarray[0] - value[0]);
        int dy = Math.abs(goalarray[1] - value[1]);
        return Math.max(dx, dy);   // king-move (Chebyshev) distance, ignores the barrier
    }
} // End class MouseKingState
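
The slide elides successors(); a possible sketch (my illustration, not from the slides) that generates the king moves inside the 5x5 board while respecting the barrier column could look like this:

// Hypothetical sketch of successors() for the 5x5 Mouse King board (not from the slides).
// King moves to the 8 neighbouring squares, staying on the board and off the barrier (3,1)-(3,4).
// Assumes java.util.Vector and the State/MouseKingState classes above.
public Vector<State> successors() {
    Vector<State> result = new Vector<State>();
    for (int dx = -1; dx <= 1; dx++) {
        for (int dy = -1; dy <= 1; dy++) {
            if (dx == 0 && dy == 0) continue;
            int x = value[0] + dx;
            int y = value[1] + dy;
            boolean onBoard = x >= 1 && x <= 5 && y >= 1 && y <= 5;
            boolean onBarrier = (x == 3 && y <= 4);
            if (onBoard && !onBarrier) {
                result.add(new MouseKingState(new int[] { x, y }));
            }
        }
    }
    return result;
}

With unit step cost for every king move, this is the search space in which the expansions on the next slide take place.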

11 Behaviour of Mouse King
The mouse will expand the following nodes:
(1,1) (2,1) (2,2) (2,3) (1,2) (1,3) (2,4) (1,4) (2,5) (3,5) (4,4) (4,3) (4,2) (5,1)

Solution path: state, f, g, h
(1,1) 4 0 4
(2,2) 4 1 3
(2,3) 5 2 3
(2,4) 6 3 3
(3,5) 8 4 4
(4,4) 8 5 3
(4,3) 8 6 2
(4,2) 8 7 1
(5,1) 8 8 0

Order of expansion:                 Solution path:
(1,5)               (5,5)           (1,5)               (5,5)
 ___________________                 ___________________
|   | 9 |10 |   |   |               |   |   | 5 |   |   |
| 8 | 7 |   |11 |   |               |   | 4 |   | 6 |   |
| 6 | 4 |   |12 |   |               |   | 3 |   | 7 |   |
| 5 | 3 |   |13 |   |               |   | 2 |   | 8 |   |
| 1 | 2 |   |   |14 |               | 1 |   |   |   | 9 |
(1,1)               (5,1)           (1,1)               (5,1)

12 Perfect Heuristics Behaviour
If the heuristic had been perfect, the nodes expanded would have been exactly the nodes on the solution path:
(1,1) (2,2) (2,3) (2,4) (3,5) (4,4) (4,3) (4,2) (5,1)
This means that a perfect heuristic "encodes" all the relevant knowledge of the problem space.

Solution path: state, f, g, h
(1,1) 8 0 8
(2,2) 8 1 7
(2,3) 8 2 6
(2,4) 8 3 5
(3,5) 8 4 4
(4,4) 8 5 3
(4,3) 8 6 2
(4,2) 8 7 1
(5,1) 8 8 0

Order of expansion = Solution path:
(1,5)               (5,5)
 ___________________
|   |   | 5 |   |   |
|   | 4 |   | 6 |   |
|   | 3 |   | 7 |   |
|   | 2 |   | 8 |   |
| 1 |   |   |   | 9 |
(1,1)               (5,1)

13 Monotone (consistent) heuristics
A heuristic is monotone (consistent) if the f-value is non-decreasing along any path from start to goal. This is fulfilled if, for every node n and every successor n' of n:

  f(n) <= f(n') = g(n') + h(n') = g(n) + cost(n,n') + h(n')
  f(n) = g(n) + h(n)

which gives the triangle inequality

  h(n) <= cost(n,n') + h(n')

[Figure: triangle with nodes n, n' and the goal; edges labelled cost(n,n'), h(n') and h(n).]
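
As a sketch (my illustration, not from the slides), consistency can be spot-checked by testing the triangle inequality for a node against each of its successors, assuming the Node fields (h, cost) and successors() method from the earlier A* Java pseudocode:

// Hypothetical consistency check for one expansion step (illustration only).
// Assumes java.util.Vector and a Node class with fields h (heuristic value) and
// cost (step cost from its parent), plus a successors() method, as in the A* pseudocode above.
static boolean consistentAt(Node n) {
    Vector<Node> succ = n.successors();
    for (int i = 0; i < succ.size(); i++) {
        Node s = succ.elementAt(i);
        // Triangle inequality: h(n) <= cost(n,n') + h(n')
        if (n.h > s.cost + s.h) {
            return false;
        }
    }
    return true;
}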

14 Properties of monotone heuristics
- Every monotone heuristic is admissible (provided h(G) = 0 for every goal node G).
- When A* with a monotone heuristic expands a node, it has already found the optimal route to that node.
- Therefore a node that has already been expanded (closed) never needs to be considered again; with this assumption and a monotone heuristic, the algorithm is still admissible.
- If the heuristic is not monotone, discarding closed nodes risks a non-optimal solution.
- In practice, however, most heuristics are monotone.

15 Some more notes on heuristics
If h1(n) and h2(n) are admissible heuristics, then the following are also admissible (see the sketch below):
- max(h1(n), h2(n))
- α*h1(n) + β*h2(n), where α + β = 1 and α, β >= 0
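
A minimal sketch of these two combinations, assuming the heuristics are given as plain functions from a state to an int (this interface is my assumption, not from the slides):

// Hypothetical combination of two admissible heuristics (illustration only).
interface Heuristic {
    int h(int[] state);   // assumed state representation: int[] coordinates
}

class CombinedHeuristics {
    // The max of two admissible heuristics is admissible (and at least as informed as either).
    static int maxOf(Heuristic h1, Heuristic h2, int[] state) {
        return Math.max(h1.h(state), h2.h(state));
    }

    // A convex combination (alpha + beta = 1, both >= 0) is also admissible.
    static double convex(Heuristic h1, Heuristic h2, double alpha, double beta, int[] state) {
        return alpha * h1.h(state) + beta * h2.h(state);
    }
}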

16 Monotone heuristic repair
Suppose a heuristic h is admissible but not consistent, i.e. for some successor n' of n we have h(n) > c(n,n') + h(n'), which means f(n) > f(n') (f is not monotone).
In that case f(n') can be raised to f(n), giving a better heuristic value (higher, but still an underestimate), i.e. use
  h'(n') = max(h(n'), h(n) - c(n,n'))
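
A small sketch of this repair (often called pathmax), applied when generating a successor; the Node fields g, h, f and cost from the earlier A* Java pseudocode are assumed:

// Hypothetical pathmax repair for an inconsistent but admissible heuristic (illustration only).
// Assumes Node fields g, h, f and cost as in the earlier A* Java pseudocode.
static void expandWithPathmax(Node n, Node s, Node goalnode) {
    s.g = n.g + s.cost;
    s.h = s.estimate(goalnode);
    // Repair: never let h drop by more than the step cost.
    s.h = Math.max(s.h, n.h - s.cost);   // h'(n') = max(h(n'), h(n) - c(n,n'))
    s.f = s.g + s.h;                     // equivalently f(n') = max(f(n'), f(n))
}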

17 An example of a heuristic
Consider a knight ("horse") on a chess board. It can move 2 squares in one direction and 1 square to the side. The task is to get from one square to another in the fewest possible steps (e.g. from A1 to H8).
A proposed heuristic could be ManhattanDistance / 2.
Is it admissible? Is it monotone?
(Actually, it is not straightforward to find a heuristic that is admissible but not monotone.)

8 |   |   |   |   |   |   |   | * |
7 |   |   |   |   |   |   |   |   |
6 |   |   |   |   |   |   |   |   |
5 |   |   |   |   |   |   |   |   |
4 |   |   |   |   |   |   |   |   |
3 |   |   |   |   |   |   |   |   |
2 |   |   |   |   |   |   |   |   |
1 | * |   |   |   |   |   |   |   |
    A   B   C   D   E   F   G   H
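
A minimal sketch of the proposed heuristic, assuming squares are given as (file, rank) pairs numbered 1-8 (the representation is my assumption, not from the slides):

// Hypothetical sketch of the proposed knight heuristic (illustration only).
// Squares are (file, rank) pairs in 1..8, e.g. A1 = (1,1), H8 = (8,8).
public class KnightHeuristic {
    // Proposed heuristic: Manhattan distance divided by 2.
    public static double h(int fileFrom, int rankFrom, int fileTo, int rankTo) {
        int manhattan = Math.abs(fileTo - fileFrom) + Math.abs(rankTo - rankFrom);
        return manhattan / 2.0;
    }

    public static void main(String[] args) {
        // A1 to H8: Manhattan distance 14, so the proposed h = 7.
        System.out.println(h(1, 1, 8, 8));
    }
}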

18 Relaxed problems
Many heuristics can be found by using a relaxed (easier, simpler) model of the problem. By definition, heuristics derived from relaxed models are underestimates of the cost in the original problem.
For example, the straight-line distance presumes that we can move in straight lines.
For the 8-puzzle, the heuristic W(n) = number of misplaced tiles would be exact if we could move tiles freely.
The less relaxed (and therefore better) heuristic P(n) = sum of the tiles' distances from their home squares (Manhattan distance) corresponds to allowing a tile to be moved to an adjacent square even though there may already be a tile there.
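
For comparison with the misplaced-tiles sketch earlier, a minimal sketch of the Manhattan-distance heuristic P(n), again assuming an int[9] row-major board with 0 as the blank (assumed representation):

// Sketch: P(n) = sum of Manhattan distances of the tiles from their home squares.
// Assumes boards are int[9] in row-major order, 0 = blank (assumed representation).
public class ManhattanDistance {
    public static int h(int[] state, int[] goal) {
        int sum = 0;
        for (int i = 0; i < state.length; i++) {
            int tile = state[i];
            if (tile == 0) continue;              // the blank is not counted
            // find the tile's home position in the goal
            for (int j = 0; j < goal.length; j++) {
                if (goal[j] == tile) {
                    sum += Math.abs(i / 3 - j / 3) + Math.abs(i % 3 - j % 3);
                    break;
                }
            }
        }
        return sum;
    }
}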

19 Generalized A*
f(n) = α*g(n) + β*h(n)

α = β = 1               A*
β = 0                   Uniform cost
α = 0                   Greedy search
α < 0, β = 0            Depth first
α > β                   Conservative (still admissible)
α < β                   Radical (not admissible in general)
α, β depending on g,h   Dynamic
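
A small sketch of the generalized evaluation function, with the special cases from the table as comments (the Node fields g and h from the earlier pseudocode are assumed):

// Hypothetical generalized evaluation function f(n) = alpha*g(n) + beta*h(n) (illustration only).
// alpha = beta = 1    -> A*
// beta  = 0           -> uniform-cost search
// alpha = 0           -> greedy best-first search
// alpha < 0, beta = 0 -> depth-first behaviour
static double f(double alpha, double beta, Node n) {
    return alpha * n.g + beta * n.h;
}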

20 Learning heuristics from experience
Where do heuristics come from? Heuristics can be learned as a computed (e.g. linear) combination of features of the state.
Example: the 8-puzzle
Features:
  x1(n): number of misplaced tiles
  x2(n): number of pairs of adjacent tiles that are also adjacent in the goal state
Procedure: run searches from 100 random start states and let h(ni) be the minimal cost found:

  node    h         x1         x2
  n1      h(n1)     x1(n1)     x2(n1)
  ...     ...       ...        ...
  n100    h(n100)   x1(n100)   x2(n100)

From these, use regression to estimate h(n) = c1*x1(n) + c2*x2(n). (See the sketch below.)
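
A minimal sketch of such a learned heuristic, assuming an int[9] board as before; c1 and c2 are whatever the regression over the 100 sample searches produces (none of these details come from the slides):

// Hypothetical sketch of a learned heuristic h(n) = c1*x1(n) + c2*x2(n) (illustration only).
// Boards are int[9] in row-major order, 0 = blank (assumed representation).
public class LearnedHeuristic {

    // x1(n): number of misplaced tiles
    static int x1(int[] state, int[] goal) {
        int count = 0;
        for (int i = 0; i < state.length; i++)
            if (state[i] != 0 && state[i] != goal[i]) count++;
        return count;
    }

    // x2(n): number of horizontally adjacent tile pairs that are also
    // horizontally adjacent, in the same order, somewhere in the goal
    // (a simplification of the feature on the slide, ignoring vertical adjacency).
    static int x2(int[] state, int[] goal) {
        int count = 0;
        for (int i = 0; i < 9; i++) {
            if (i % 3 == 2 || state[i] == 0 || state[i + 1] == 0) continue;
            for (int j = 0; j < 9; j++) {
                if (j % 3 == 2) continue;
                if (goal[j] == state[i] && goal[j + 1] == state[i + 1]) count++;
            }
        }
        return count;
    }

    // c1 and c2 are the coefficients estimated by the regression.
    static double h(int[] state, int[] goal, double c1, double c2) {
        return c1 * x1(state, goal) + c2 * x2(state, goal);
    }
}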

21 Learning heuristics from experience (II)
Suppose the problem is harder than the heuristic h1 indicates, but that the extra hardness is assumed to be uniform over the state space. Then an improved heuristic can be estimated as h2(x) = λ*h1(x).

| S |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   | n |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   | G |

Problem: move a piece from S to G using the chess-king heuristic h1(x) = number of horizontal/vertical/diagonal moves.
h1(S) = 7, h1(n) = 4.
Assume the problem is actually harder (in effect the Manhattan distance h2(x), but we don't know that), so the real cost of reaching n is g(n) = 6.
We then estimate λ = g(n) / (h1(S) - h1(n)), which gives
h2(n) = g(n) / (h1(S) - h1(n)) * h1(n) = 6 / (7 - 4) * 4 = 8  (correct)

22 Learning heuristics from experience (III)
Suppose the problem is easier than the heuristic h1 indicates, but that the easiness is assumed to be uniform over the state space. Then an improved heuristic can be estimated as h2(x) = λ*h1(x).

| S |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   | n |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   | G |

Problem: move a piece from S to G using the Manhattan heuristic h1(x) = number of horizontal/vertical moves.
h1(S) = 14, h1(n) = 8.
Assume the problem is actually easier (in effect the chess-king distance h2(x), but we don't know that), so the real cost of reaching n is g(n) = 3.
We then estimate λ = g(n) / (h1(S) - h1(n)), which gives
h2(n) = g(n) / (h1(S) - h1(n)) * h1(n) = 3 / (14 - 8) * 8 = 4  (correct)
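
A small sketch of this scaling idea, covering both of the preceding slides; passing the relevant values as plain numbers is my assumption, not the slides' formulation:

// Hypothetical sketch of uniform rescaling of a heuristic (illustration only).
// h1ofS, h1ofN: original heuristic at S and n; gToN: actual cost found from S to n.
// lambda = g(n) / (h1(S) - h1(n));  h2(x) = lambda * h1(x)
static double rescaled(double h1ofS, double h1ofN, double gToN, double h1ofX) {
    double lambda = gToN / (h1ofS - h1ofN);
    return lambda * h1ofX;
}

// Slide 21: rescaled(7, 4, 6, 4)  -> 2.0 * 4 = 8  (problem harder than h1 indicates)
// Slide 22: rescaled(14, 8, 3, 8) -> 0.5 * 8 = 4  (problem easier than h1 indicates)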

23 Practical example (Bus world scenario)
Find an optimal route from one place to another by public transport.
Nodes: bus-passing events (a bus on a given route passes a given station at a given time).
Actions: enter bus - leave bus - wait.
(A possible node representation is sketched below.)
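
A possible representation of such a node, purely as an illustration (the field names and time unit are my assumptions, not from the slides):

// Hypothetical bus-passing event node for the bus-route planner (illustration only).
// A node says: the bus on 'route' passes 'station' at 'time' (minutes after midnight, assumed unit).
public class BusEvent {
    public final String route;     // e.g. "Bus 3"
    public final String station;
    public final int time;

    public BusEvent(String route, String station, int time) {
        this.route = route;
        this.station = station;
        this.time = time;
    }
    // Successor events would correspond to the actions: enter bus, leave bus, wait.
}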

24 Bus scenario search space
The search space is space (2 dimensions) x time (1 dimension).
[Figure: space/time diagram with Time on one axis and Space on the other; bus routes (e.g. Bus 3, Bus 5) appear as trajectories through the diagram, and waiting at a stop is a move along the time axis only.]

25 Heuristics for the bus route planner - which route is best?
[Figure: a journey from departure A = T0 to arrival Z = T3, with driving legs K1, K2, K3 and transfer waits T1, T2 in between; two transfers with the same T and K are marked as an "equivalent transfer".]

Quantities considered:
  N - number of bus transfers
  A - waiting time before the first departure
  Z - waiting time before arrival
  T - sum of the transfer waiting times
  K - sum of the driving times
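
One way to make "which route is best?" concrete is a weighted cost over these quantities; the weights below are purely illustrative assumptions, not values from the slides:

// Hypothetical route cost for the bus planner (illustration only; all weights are assumptions).
// n = number of transfers, a/z = initial/final waiting time,
// t = total transfer waiting time, k = total driving time (all times in minutes, assumed).
static double routeCost(int n, double a, double z, double t, double k) {
    double wN = 5.0;   // each transfer counts like 5 minutes (assumed)
    double wA = 0.5;   // initial waiting may matter less (assumed)
    double wZ = 1.0;
    double wT = 2.0;   // transfer waiting is unpleasant, e.g. in the rain (assumed)
    double wK = 1.0;
    return wN * n + wA * a + wZ * z + wT * t + wK * k;
}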

26 Planner discussion
The total transfer waiting time T (= T1 + T2) is critical if it rains.
If Z is to be minimised, we must search backwards from the arrival time.
There are many equivalent transfers (same T and K), and the initial waiting time A may be unimportant, so in practice A* is problematic.
Solution: relaxation
- Find the route corridors ("traseer") independently of time
- Eliminate equivalent transfers
- For each corridor, find the best route plan
- Keep the best of these solutions

