An Introduction to Artificial Intelligence


1 An Introduction to Artificial Intelligence
Lecture 4a: Informed Search and Exploration. Ramin Halavati. In which we see how information about the state space can prevent algorithms from blundering about in the dark.

2 Outline
Best-first search
Greedy best-first search
A* search
Heuristics
Local search algorithms
Hill-climbing search
Simulated annealing search
Local beam search
Genetic algorithms

3 UNINFORMED?
Uninformed: searches the state graph/tree using only the path cost and the goal test.

4 INFORMED?
Informed: uses extra data about states, such as an estimated distance to the goal.
Best-first search; really only "seemingly best-first" search, since nodes are ranked by an estimate.
Heuristic h(n): estimated cost of the cheapest path from n to a goal; h(goal) = 0.
The estimate is not guaranteed to be accurate, but it is usually good enough to guide the search.

5 Greedy Best First Search
Compute the estimated distance to the goal for each frontier node and expand the node with the lowest estimate, i.e., evaluate nodes by f(n) = h(n) alone. A minimal sketch follows below.
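Not from the slides: a minimal Python sketch of greedy best-first search, assuming a graph encoded as a dict mapping each node to a list of (neighbor, step-cost) pairs and a heuristic given as a dict h. Both encodings are assumptions made for illustration.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the frontier node with the smallest h-value, f(n) = h(n)."""
    frontier = [(h[start], start, [start])]        # (h, node, path)
    visited = set()                                # graph-search version;
    while frontier:                                # without it, greedy can loop
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, _cost in graph[node]:        # step costs are ignored
            if neighbor not in visited:
                heapq.heappush(frontier,
                               (h[neighbor], neighbor, path + [neighbor]))
    return None                                    # no path found
```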

6 Greedy Best First Search Example
Heuristic: straight-line distance, h_SLD(n)

7 Greedy Best First Search Example

8 Properties of Greedy Best First Search
Complete? No; the tree-search version can get stuck in loops.
Time? O(b^m), but a good heuristic can give dramatic improvement.
Space? O(b^m); keeps all nodes in memory.
Optimal? No.

9 A* Search
Idea: avoid expanding paths that are already expensive.
Evaluation function: f(n) = g(n) + h(n), where
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the path through n to the goal
A minimal sketch follows below.
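Again a sketch rather than the lecture's own code, using the same assumed graph/heuristic encoding as above; the only change from greedy best-first is the priority f(n) = g(n) + h(n) and bookkeeping of the best g found so far.

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand the frontier node with the smallest f = g + h."""
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                         # path and its true cost
        if g > best_g.get(node, float('inf')):
            continue                               # stale queue entry
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = g2
                heapq.heappush(frontier,
                               (g2 + h[neighbor], g2, neighbor,
                                path + [neighbor]))
    return None, float('inf')                      # goal unreachable
```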

10 A* search example

11 A* vs Greedy

12 Admissible Heuristics
h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost from n to the goal.
An admissible heuristic never overestimates; it is optimistic.
Example: h_SLD(n) never overestimates the actual road distance.

13 A* is Optimal
Theorem: if h(n) is admissible, A* using TREE-SEARCH is optimal.
TREE-SEARCH: re-computes the cost of a node each time it is reached.
GRAPH-SEARCH: stores the cost of each node the first time it is reached.

14 Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
f(G2) = g(G2), since h(G2) = 0
g(G2) > g(G), since G2 is suboptimal
f(G) = g(G), since h(G) = 0
Hence f(G2) > f(G).
h(n) ≤ h*(n), since h is admissible
g(n) + h(n) ≤ g(n) + h*(n) = g(G) = f(G), since n lies on an optimal path to G
So f(n) ≤ f(G) < f(G2), and A* will never select G2 for expansion.

15 Consistent Heuristics
h(n) is consistent if, for every node n and every successor n' of n generated by any action a, h(n) ≤ c(n, a, n') + h(n').
Consistency is also called monotonicity; it is a triangle inequality over n, n', and the goal.
In practice, admissible heuristics are usually consistent as well, at no extra cost.
Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.

16 Optimality of A*
A* expands nodes in order of increasing f-value, gradually adding "f-contours" of nodes: contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

17 Properties of A*
Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)).
Time? Exponential.
Space? Keeps all nodes in memory, O(b^d).
Optimal? Yes. A* never expands nodes with f(n) > f(Goal), and it is optimally efficient: no other optimal algorithm is guaranteed to expand fewer nodes.

18 How to Design Heuristics?
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., the number of squares between each tile and its desired location, summed over all tiles)

19 Admissible Heuristics
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance
Both are admissible; sketches of both follow below.
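As an illustration (not from the slides), here are sketches of h1 and h2 for the 8-puzzle, assuming a state is a 9-tuple in row-major order with 0 for the blank and (0, 1, …, 8) as the goal; this encoding is an assumption.

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal layout, blank = 0

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of the tiles from their goal squares."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)                       # tile's goal position
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```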

20 Effective Branching Factor
If A* finds the answer by expanding N nodes using heuristic h(n), with a solution at depth d, then b* is the effective branching factor satisfying
N + 1 = 1 + b* + (b*)^2 + … + (b*)^d.
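The defining equation can be solved for b* numerically; the bisection below is my own sketch, not from the slides.

```python
def effective_branching_factor(N, d, tol=1e-6):
    """Solve 1 + b + b**2 + ... + b**d = N + 1 for b (assumes N >= d)."""
    def total(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, float(N)             # the root lies in [1, N] when N >= d
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g., a solution at depth d = 5 found after expanding N = 52 nodes
# gives b* ≈ 1.92
```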

21 Dominance
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1: h2 is better for search because its estimates are closer to the true cost.
Given several admissible heuristics, combine them as h(n) = max(h1(n), h2(n), …, hm(n)); see the sketch below.
Each heuristic must remain efficient to compute.
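A one-line illustration (my own, not the lecture's): the pointwise maximum of several admissible heuristics is still admissible and dominates each of them.

```python
def h_max(*heuristics):
    """Combine admissible heuristics; their max is admissible and dominant."""
    return lambda state: max(h(state) for h in heuristics)

# e.g., with the 8-puzzle heuristics sketched earlier:
# h = h_max(h1, h2)
```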

22 How to Generate Heuristics?
Formal methods:
Relaxed problems
Pattern databases
Disjoint pattern databases
Learning from experience
Example: ABSOLVER (1993) derived a new, better heuristic for the 8-puzzle and the first heuristic for Rubik's cube.

23 “Relaxed Problem” Heuristic
A relaxed problem has fewer restrictions on its actions. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
8-puzzle main rule: a tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is empty.
Relaxed rules:
A tile can move from square A to square B if A is adjacent to B. (gives h2)
A tile can move from square A to square B if B is blank.
A tile can move from square A to square B. (gives h1)

24 “Sub Problem” Heuristic
The cost to solve a subproblem of the full problem (e.g., getting just a few specific tiles into place). It is admissible.

25 “Pattern Database” Heuristics
Store the exact solution costs of many sub-problem instances, then look them up during search.

26 “Disjoint Pattern” Databases
Add the results of several pattern-database heuristics over disjoint sets of tiles.
Speed-up: about 10^3 times for the 15-puzzle and 10^6 times for the 24-puzzle.
Separability matters: compare Rubik's cube vs. the 8-puzzle.

27 Learning Heuristics from Experience
Apply machine learning techniques: feature selection and linear combinations of features.

28 BACK TO MAIN SEARCH METHOD
What's wrong with A*? It's both optimal and optimally efficient. The problem is MEMORY.

29 Memory Bounded Heuristic Search
Iterative Deepening A* (IDA*): like iterative deepening depth-first search, but the cutoff is an f-cost bound rather than a depth bound.
Memory: O(bd). A sketch follows below.
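A minimal IDA* sketch under the same assumed graph/heuristic encoding as the earlier A* example: iterative deepening with an f-cost bound, so memory stays linear in the length of the current path.

```python
def ida_star(graph, h, start, goal):
    """Deepen an f-cost bound until the goal is found within it."""
    bound = h[start]
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f                   # over the bound: candidate next bound
        if node == goal:
            return True
        minimum = float('inf')
        for neighbor, cost in graph[node]:
            if neighbor not in path:   # avoid cycles on the current path
                path.append(neighbor)
                t = search(g + cost, bound)
                if t is True:
                    return True
                minimum = min(minimum, t)
                path.pop()
        return minimum

    while True:
        t = search(0, bound)
        if t is True:
            return path
        if t == float('inf'):
            return None                # goal unreachable
        bound = t                      # smallest f that exceeded the bound
```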

30 Recursive Best First Search
Main idea: explore below the best node with a limited f-cost, set by the best alternative among the other open nodes, and continuously update backed-up f-values as the search unwinds; see the sketch below.
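A sketch of RBFS in the same assumed encoding (adapted from the textbook's pseudocode, with a path-based cycle check added): each call explores below the best successor while its f-value stays under the best alternative, and backs the updated f-value up when it fails.

```python
def rbfs(graph, h, start, goal):
    """Recursive best-first search with linear memory."""
    INF = float('inf')

    def recurse(node, path, g, f_node, f_limit):
        if node == goal:
            return path, f_node
        successors = []
        for child, cost in graph[node]:
            if child not in path:      # cycle check along the current path
                g2 = g + cost
                # back up: a child's f is at least its parent's f
                successors.append([max(g2 + h[child], f_node), g2, child])
        if not successors:
            return None, INF
        while True:
            successors.sort()
            best = successors[0]
            if best[0] > f_limit:
                return None, best[0]   # fail and report the backed-up f
            alternative = successors[1][0] if len(successors) > 1 else INF
            result, best[0] = recurse(best[2], path + [best[2]], best[1],
                                      best[0], min(f_limit, alternative))
            if result is not None:
                return result, best[0]

    result, _ = recurse(start, [start], 0, h[start], INF)
    return result
```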

31 Recursive Best First Search

32 Recursive Best First Search, Sample

33 Recursive Best First Search, Properties
Complete? Yes, given enough space.
Space? O(bd).
Optimal? Yes, if h(n) is admissible.
Time? Hard to analyze; it depends on how often the best path changes.

34 Memory, more memory…
A*: O(b^d) memory. IDA*, RBFS: O(bd) memory.
But what if the budget is a fixed amount, say exactly 10 MB?

35 Memory-Bounded A* (MA*) and Simplified Memory-Bounded A* (SMA*)
Store as many nodes as possible (the usual A* behavior). When memory is full, drop the worst node (highest f-value) and back its value up to its parent.

36 SMA* Example

37 SMA* Code

38 To be continued…

