
1
**CSC344: AI for Games Lecture 4: Informed search**

Patrick Olivier

2
**Uninformed search: summary**

- Breadth-first
- Depth-first
- Iterative deepening

3
**Informed search strategies**

- Best-first search
- Greedy best-first search
- A* search
- Heuristics
- Local search algorithms
- Hill-climbing search
- Simulated annealing search
- Local beam search
- Genetic algorithms

4
**Best-first search**

Important: a search strategy is defined by picking the order of node expansion.

- Uniform-cost search uses the cost so far: g(n)
- Best-first search uses an evaluation function f(n):
  - expand the most desirable unexpanded node
  - implementation: order the nodes in the fringe in increasing order of f(n), so the lowest-f node is expanded first
- Special cases: greedy best-first search, A* search
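
The slides contain no code, but the whole family can be sketched as one priority-queue loop with the evaluation function f plugged in. A minimal illustrative sketch (all names below are our own, not the lecture's):

```python
import heapq
from itertools import count

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: repeatedly expand the unexpanded node
    with the lowest value of the evaluation function f(state, g)."""
    tie = count()   # tie-breaker so heapq never has to compare states
    fringe = [(f(start, 0), next(tie), 0, start, [start])]
    expanded = set()
    while fringe:
        _, _, g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path, g
        if state in expanded:           # repeated-state check
            continue
        expanded.add(state)
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            heapq.heappush(fringe, (f(nxt, g2), next(tie), g2, nxt, path + [nxt]))
    return None

# Plugging in different f gives the named strategies:
#   uniform-cost search:      f = lambda n, g: g
#   greedy best-first search: f = lambda n, g: h(n)
#   A* search:                f = lambda n, g: g + h(n)
```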

5
**Greedy best-first search**

- Evaluation function: f(n) = h(n)
- Heuristics are rules of thumb that are likely (but not guaranteed) to help in problem solving
- For example: hSLD(n) = straight-line distance from n to Bucharest
- Greedy best-first search expands the node that appears to be closest to the goal
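
As an illustration (not from the slides), here is a sketch of greedy best-first search on a small fragment of the Romania map; the road costs and straight-line distances are the standard AIMA figures:

```python
import heapq

# A fragment of the Romania map (road costs and straight-line
# distances to Bucharest, per AIMA).
ROADS = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80), ("Oradea", 151)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Timisoara": [("Arad", 118)],
    "Zerind": [("Arad", 75), ("Oradea", 71)],
    "Oradea": [("Zerind", 71), ("Sibiu", 151)],
    "Bucharest": [("Fagaras", 211), ("Pitesti", 101)],
}
H_SLD = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0, "Timisoara": 329, "Zerind": 374,
         "Oradea": 380}

def greedy_best_first(start, goal):
    """Expand the node that *appears* closest to the goal: f(n) = h(n)."""
    fringe = [(H_SLD[start], start, [start])]      # (h, city, path)
    visited = set()
    while fringe:
        _, city, path = heapq.heappop(fringe)
        if city == goal:
            return path
        if city in visited:
            continue
        visited.add(city)
        for nxt, _cost in ROADS[city]:             # step costs are ignored!
            if nxt not in visited:
                heapq.heappush(fringe, (H_SLD[nxt], nxt, path + [nxt]))
    return None

print(greedy_best_first("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- total road cost 450 km,
# while the optimal route (via Rimnicu Vilcea and Pitesti) costs 418 km.
```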

6-10
**Greedy search: Arad → Bucharest**

[Figure slides: step-by-step greedy expansion on the Romania map, each node labelled with its straight-line distance to Bucharest]

11
**Properties of greedy search**

- Complete? Only in finite spaces, and only if modified to detect repeated states; otherwise it can oscillate, e.g. Iasi → Neamt → Iasi → Neamt → …
- Time? O(b^m), where b is the branching factor and m the maximum depth; a good heuristic gives dramatic improvement
- Space? O(b^m), keeps all nodes in memory
- Optimal? No

12
**A* search**

Idea: don't just use an estimate of the distance to the goal, but also the cost of the path so far.

- Evaluation function: f(n) = g(n) + h(n)
- g(n) = cost so far to reach n
- h(n) = estimated cost from n to the goal
- f(n) = estimated cost of the path through n to the goal

Class exercise: Arad → Bucharest using A* and the straight-line distance as hSLD(n)
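
For comparison with the greedy sketch above, here is A* on the same illustrative map fragment (again our own code, not the lecture's):

```python
import heapq

# Same map fragment and straight-line distances as in the greedy sketch.
ROADS = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80), ("Oradea", 151)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Timisoara": [("Arad", 118)],
    "Zerind": [("Arad", 75), ("Oradea", 71)],
    "Oradea": [("Zerind", 71), ("Sibiu", 151)],
    "Bucharest": [("Fagaras", 211), ("Pitesti", 101)],
}
H_SLD = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0, "Timisoara": 329, "Zerind": 374,
         "Oradea": 380}

def a_star(start, goal):
    """A*: expand the fringe node with the lowest f(n) = g(n) + h(n)."""
    fringe = [(H_SLD[start], 0, start, [start])]   # (f, g, city, path)
    best_g = {}                                    # cheapest g seen per city
    while fringe:
        _, g, city, path = heapq.heappop(fringe)
        if city == goal:
            return path, g
        if g >= best_g.get(city, float("inf")):
            continue                               # a cheaper route got here first
        best_g[city] = g
        for nxt, cost in ROADS[city]:
            heapq.heappush(fringe, (g + cost + H_SLD[nxt], g + cost,
                                    nxt, path + [nxt]))
    return None

print(a_star("Arad", "Bucharest"))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
# A* pays the extra bookkeeping of g(n) and recovers the optimal path,
# where greedy settled for the 450 km route.
```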

13-18
**A* search: Arad → Bucharest**

[Figure slides: step-by-step A* expansion on the Romania map, each node labelled with f(n) = g(n) + h(n)]

19
**Admissible heuristics**

- A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n
- An admissible heuristic never overestimates the cost to reach the goal, i.e. it is optimistic
- Example: hSLD(n) never overestimates the actual road distance
- Theorem: if h(n) is admissible, A* using tree search is optimal
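
On a finite graph, admissibility can be sanity-checked directly: run Dijkstra's algorithm from the goal to obtain the true costs h*(n), then compare node by node. A sketch (our own helpers, assuming the undirected adjacency-list map format used in the earlier sketches):

```python
import heapq

def true_costs(roads, goal):
    """Dijkstra from the goal: h*(n), the true cheapest cost from n to goal.
    Assumes the adjacency lists are symmetric (undirected roads)."""
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, city = heapq.heappop(pq)
        if d > dist.get(city, float("inf")):
            continue
        for nxt, cost in roads[city]:
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                heapq.heappush(pq, (d + cost, nxt))
    return dist

def is_admissible(h, roads, goal):
    """True iff h(n) <= h*(n) for every node that can reach the goal."""
    h_star = true_costs(roads, goal)
    return all(h[n] <= h_star[n] for n in h_star)

# With the Romania fragment above: is_admissible(H_SLD, ROADS, "Bucharest") -> True
```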

20
**Proof of A* optimality (1)**

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

- f(G2) = g(G2), since h(G2) = 0
- g(G2) > g(G), since G2 is suboptimal
- f(G) = g(G), since h(G) = 0
- f(G2) > f(G), from the above

21
**Proof of A* optimality (2)**

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

- f(G2) > f(G), from part (1)
- h(n) ≤ h*(n), since h is admissible
- g(n) + h(n) ≤ g(n) + h*(n) = g(G) = f(G), since n lies on an optimal path to G
- so f(n) ≤ f(G)
- Hence f(G2) > f(n), and A* will never select G2 for expansion

22
**Properties of A* search**

- Complete? Yes, unless there are infinitely many nodes with f(n) ≤ f(G)
- Time? Exponential, unless the error in the heuristic grows no faster than the logarithm of the actual path cost. A* is optimally efficient: no other algorithm is guaranteed to expand fewer nodes for the same heuristic
- Space? Keeps all nodes in memory (exponential, like the time complexity)
- Optimal? Yes

23
**Heuristic functions: sample heuristics for the 8-puzzle**

- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (sum of the distances of the tiles from their goal positions)
- h1(S) = ?
- h2(S) = ?

24
**Heuristic functions: sample heuristics for the 8-puzzle (answers)**

- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance
- h1(S) = 8
- h2(S) = 3+1+2+2+2+3+3+2 = 18
- Dominance: h2(n) ≥ h1(n) for all n (both are admissible)
- h2 is better for search (closer to the perfect heuristic): fewer nodes need to be expanded
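
Both heuristics are short functions. A sketch, representing a state as a 9-tuple read row by row with 0 for the blank; the start state below is AIMA's standard example, which we assume matches the slide's figure:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of every tile from its goal square."""
    total = 0
    for tile in range(1, 9):
        r, c = divmod(state.index(tile), 3)      # current square
        gr, gc = divmod(goal.index(tile), 3)     # goal square
        total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)      # read row by row, 0 = blank
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(start, goal), h2(start, goal))  # 8 18
```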

25
**Example of dominance: randomly generated 8-puzzle problems**

100 examples for each solution depth d; contrast the behaviour of the heuristics and strategies (nodes generated, averaged over the instances):

| d  | IDS   | A*(h1) | A*(h2) |
|----|-------|--------|--------|
| 2  | 10    | 6      | 6      |
| 4  | 112   | 13     | 12     |
| 6  | 680   | 20     | 18     |
| 8  | 6384  | 39     | 25     |
| 10 | 47127 | 93     | 39     |
| 12 | …     | 227    | 73     |
| 14 | -     | 539    | 113    |
| 16 | -     | 1301   | 211    |
| 18 | -     | 3056   | 363    |
| 20 | -     | 7276   | 676    |
| 22 | -     | 18094  | 1219   |
| 24 | -     | 39135  | 1641   |

26
**Local search algorithms**

- In many optimisation problems, paths are irrelevant; the goal state itself is the solution
- State space = set of "complete" configurations
- Find a configuration satisfying constraints, e.g. n-queens: n queens on an n×n board with no two queens on the same row, column, or diagonal
- Use local search algorithms, which keep a single "current" state and try to improve it

27
**Hill-climbing search: "climbing Everest in thick fog with amnesia"**

- We can set up an objective function that is "best" when large, and perform hill climbing…
- …or we can use the previous formulation of the heuristic and minimise the objective function (perform gradient descent)
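
A sketch of the minimising formulation on the n-queens problem from the previous slide (representation and helper names are our own):

```python
import random

def conflicts(state):
    """Objective to minimise: number of attacking queen pairs.
    state[c] = row of the queen in column c."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(n=8):
    """Steepest descent: move one queen within its column to the neighbour
    with the fewest conflicts; stop at a local minimum (not always a solution)."""
    state = [random.randrange(n) for _ in range(n)]
    while True:
        best, best_h = state, conflicts(state)
        for col in range(n):
            for row in range(n):
                if row != state[col]:
                    neighbour = state[:col] + [row] + state[col + 1:]
                    h = conflicts(neighbour)
                    if h < best_h:
                        best, best_h = neighbour, h
        if best is state:           # no neighbour improves: local minimum
            return state, best_h
        state = best

print(hill_climb())   # reaches 0 conflicts only sometimes; otherwise it
                      # is stuck in a local minimum (see the next slide)
```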

28
**Local maxima/minima**

Problem: depending on the initial state, hill climbing can get stuck in local maxima/minima.

[Figure: two example states with objective values 1/(1+H(n)) = 1/17 and 1/(1+H(n)) = 1/2; the second is a local minimum]

29
**Simulated annealing search**

Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency and range (applications: VLSI layout, scheduling).
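
The lecture gives no pseudocode, so the following generic minimisation sketch is illustrative only; the geometric cooling schedule and parameter values are our own assumptions:

```python
import math
import random

def simulated_annealing(state, energy, neighbour,
                        t0=1.0, cooling=0.995, steps=10_000):
    """Minimise energy(state). Downhill moves are always accepted; uphill
    ("bad") moves are accepted with probability exp(-dE/T), which shrinks
    as the temperature T cools."""
    current, e = state, energy(state)
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        e2 = energy(candidate)
        if e2 <= e or random.random() < math.exp((e - e2) / t):
            current, e = candidate, e2
        t *= cooling
    return current, e

# Toy run: a bumpy 1-D landscape with many local minima.
bumpy = lambda x: x * x + 10 * math.sin(x)
print(simulated_annealing(5.0, bumpy, lambda x: x + random.uniform(-1, 1)))
```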

30
**Local beam search**

- Keep track of k states rather than just one
- Start with k randomly generated states
- At each iteration, all the successors of all k states are generated
- If any one is a goal state, stop; else select the k best successors from the complete list and repeat
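
A sketch under the same caveat (names and the loop bound are our own):

```python
def local_beam_search(k, random_state, successors, is_goal, score, iters=100):
    """Keep the k best states; each round, pool ALL successors of all
    k current states and keep the k highest-scoring ones."""
    states = [random_state() for _ in range(k)]
    for _ in range(iters):
        pool = [s2 for s in states for s2 in successors(s)]
        if not pool:
            break
        for s in pool:
            if is_goal(s):
                return s
        states = sorted(pool, key=score, reverse=True)[:k]
    return max(states, key=score)   # best state found if no goal reached
```

Note the difference from k independent restarts: all successors compete in one pool, so the offspring of strong states can crowd out weak ones.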

31
**Genetic algorithm search**

- A successor state is generated by combining two parent states
- Start with k randomly generated states (the population)
- A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
- Evaluation function (fitness function): higher values for better states
- Produce the next generation of states by selection, crossover, and mutation
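
A sketch of that scheme for strings over a finite alphabet; fitness-proportional selection, single-point crossover, and per-character mutation are standard choices, and all parameter values here are illustrative:

```python
import random

def genetic_algorithm(fitness, length, pop_size=50, generations=200,
                      mutation_rate=0.05, alphabet="01"):
    """Evolve strings over `alphabet` towards higher fitness."""
    pop = ["".join(random.choice(alphabet) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(s) for s in pop]                 # fitness function
        nxt = []
        for _ in range(pop_size):
            p1, p2 = random.choices(pop, weights=weights, k=2)  # selection
            cut = random.randrange(1, length)                   # crossover point
            child = p1[:cut] + p2[cut:]
            child = "".join(random.choice(alphabet)             # mutation
                            if random.random() < mutation_rate else c
                            for c in child)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy run: "one-max" -- fitness counts 1s (kept positive for the weights).
print(genetic_algorithm(lambda s: 1 + s.count("1"), length=20))
```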
