Artificial Intelligence Problem solving by searching CSC 361 Prof. Mohamed Batouche Computer Science Department CCIS – King Saud University Riyadh, Saudi Arabia



Problem Solving by Searching Search Methods: informed (heuristic) search

3 Using problem-specific knowledge to aid searching Without incorporating knowledge into the search, one has no bias (i.e. no preference) over the search space. Without a bias, one is forced to look everywhere to find the answer. Hence, the complexity of uninformed search is intractable. Search everywhere!

4 Using problem-specific knowledge to aid searching With knowledge, one can search the state space as if given “hints” while exploring a maze. Heuristic information in search = hints. This leads to a dramatic speed-up in efficiency: search only in the relevant subtree!

5 More formally, why do heuristic functions work? In any search problem with at most b choices at each node and a goal at depth d, a naive search algorithm would, in the worst case, have to examine around O(b^d) nodes before finding a solution (exponential time complexity). Heuristics improve the efficiency of search algorithms by reducing the effective branching factor from b to (ideally) a low constant b* such that 1 ≤ b* ≪ b.

6 Heuristic Functions A heuristic function is a function f(n) that estimates the “cost” of getting from node n to the goal state, so that the node with the least cost among all possible choices can be selected for expansion first. Three approaches to defining f: f measures the value of the current state (its “goodness”); f measures the estimated cost of getting to the goal from the current state: f(n) = h(n), where h(n) is an estimate of the cost to get from n to a goal; f measures the estimated cost of getting to the goal from the current state plus the cost of the existing path to it. In this last case we often decompose f: f(n) = g(n) + h(n), where g(n) is the cost to get to n from the initial state.

7 Approach 1: f Measures the Value of the Current State Usually the case when solving optimization problems: finding a state such that the value of the metric f is optimized. Often f is a weighted sum of a set of component values. N-Queens example: the number of queens under attack. Data mining example: the “predictiveness” (a.k.a. accuracy) of a discovered rule.

8 Approach 2: f Measures the Cost to the Goal A state X is better than a state Y if the estimated cost of getting from X to the goal is lower than that of Y, because X would then be closer to the goal than Y. 8-Puzzle: h1 = the number of misplaced tiles (squares with a number); h2 = the sum of the distances of the tiles from their goal positions.
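The two 8-puzzle heuristics can be sketched in Python. This is an illustrative sketch, not code from the slides; representing a state as a tuple of 9 entries read row by row, with 0 marking the blank, is my own assumption.

```python
# Illustrative sketch of the two 8-puzzle heuristics from the slide.
# Assumed state layout: a tuple of 9 entries, row by row, 0 = blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total
```

Note that h2(n) ≥ h1(n) for every state, since each misplaced tile is at Manhattan distance at least 1 from its goal cell; this is the dominance relation discussed on the next slide.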

9 Approach 3: f Measures the Total Cost of the Solution Path (Admissible Heuristic Functions) A heuristic function f(n) = g(n) + h(n) is admissible if h(n) never overestimates the cost to reach the goal. Admissible heuristics are “optimistic”: “the cost is not that much…” However, g(n) is the exact cost to reach node n from the initial state. Therefore, f(n) never overestimates the true cost of reaching the goal state through node n. Theorem: A* search is optimal if h(n) is admissible, i.e. the search using an admissible h(n) returns an optimal solution. Given h2(n) ≥ h1(n) for all n, it is always more efficient to use h2(n): h2 is more realistic than h1 (more informed), though both are optimistic.

10 Traditional informed search strategies Greedy best-first search: always chooses the successor node with the best f value, where f(n) = h(n); among all possible choices we pick the one that appears nearest to the goal state. A* search: best-first search using an admissible heuristic function f that also takes into account the current path cost g; it always returns an optimal solution path.

Informed Search Strategies Best First Search

12 An implementation of Best-First Search
function BEST-FIRST-SEARCH(problem, eval-fn) returns a solution sequence, or failure
  queuing-fn ← a function that orders nodes by eval-fn
  return GENERIC-SEARCH(problem, queuing-fn)
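The pseudocode above can be made concrete with a priority queue. This is an illustrative Python sketch (the function name and the `(state, cost)` successor convention are my own, not the slides'): supplying different eval-fn values turns the same skeleton into greedy search (f = h) or A* (f = g + h).

```python
import heapq

def best_first_search(start, goal_test, successors, eval_fn):
    """Generic best-first search: repeatedly expand the frontier node
    with the lowest eval_fn(node, g) value.

    successors(node) yields (next_node, step_cost) pairs."""
    # Frontier entries: (f, g, node, path); heapq pops the lowest f first.
    frontier = [(eval_fn(start, 0), 0, start, [start])]
    best_g = {}                       # cheapest g seen per node (repeated-state check)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if goal_test(node):
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                  # already reached this node more cheaply
        best_g[node] = g
        for nxt, cost in successors(node):
            g2 = g + cost
            heapq.heappush(frontier, (eval_fn(nxt, g2), g2, nxt, path + [nxt]))
    return None, None
```

Passing `eval_fn=lambda n, g: g + h[n]` gives A*, while `eval_fn=lambda n, g: h[n]` gives greedy best-first search.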

Informed Search Strategies Greedy Search eval-fn: f(n) = h(n)

14 Greedy Search f(n) = h(n) = straight-line distance heuristic. (Map figure: Start = A, Goal = I.) Heuristic values: h(A) = 366, h(B) = 374, h(C) = 329, h(D) = 244, h(E) = 253, h(F) = 178, h(G) = 193, h(H) = 98, h(I) = 0.


24–28 Greedy Search: Tree Search Starting from A, the children B [374], C [329] and E [253] are generated; E has the lowest h and is expanded, giving G [193], A [366] and F [178]; F is expanded next and reaches the goal I [0]. Result: path A–E–F–I, with Path cost(A–E–F–I) = 431 and dist(A–E–F–I) = 450.

29 Greedy Search: Optimal? No. With the same straight-line distance heuristic, the greedy solution A–E–F–I is not optimal: dist(A–E–G–H–I) = 418 < 450.

30 Greedy Search: Complete? Consider the same map with a modified heuristic: h(C) = 250 (**), all other values as before.

31–36 Greedy Search: Tree Search Expanding A gives B [374], C [250] and E [253]; C has the lowest h and is expanded, giving D [244]; D's best child is C [250] again, which again generates D [244], and so on: the search oscillates between C and D forever. Infinite branch!

37 Greedy Search: Time and Space Complexity Greedy search is not optimal. Greedy search is incomplete without systematic checking of repeated states. In the worst case, the time and space complexity of greedy search are both O(b^m), where b is the branching factor and m the maximum path length.
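Greedy best-first search can be sketched on the Romania map that the slides' A–I example is drawn from (city names and straight-line distances are taken from the SLD table later in the deck; the function name and graph encoding are my own). The closed-set check is the "systematic checking of repeated states" mentioned above; without it, greedy search can loop forever, as the C–D oscillation showed.

```python
import heapq

# Fragment of the Romania map (AIMA road distances); SLD-to-Bucharest
# heuristic values are from the SLD table later in this deck.
GRAPH = {
    'Arad': [('Zerind', 75), ('Timisoara', 118), ('Sibiu', 140)],
    'Sibiu': [('Arad', 140), ('Oradea', 151), ('Fagaras', 99), ('Rimnicu', 80)],
    'Rimnicu': [('Sibiu', 80), ('Pitesti', 97), ('Craiova', 146)],
    'Pitesti': [('Rimnicu', 97), ('Craiova', 138), ('Bucharest', 101)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
}
H = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Oradea': 380,
     'Sibiu': 253, 'Fagaras': 178, 'Rimnicu': 193, 'Pitesti': 98,
     'Craiova': 160, 'Bucharest': 0}

def greedy_search(start, goal):
    """Greedy best-first search: frontier ordered by f(n) = h(n) only."""
    frontier = [(H[start], start, [start], 0)]   # (h, node, path, path cost)
    closed = set()        # repeated-state check; without it greedy can loop
    while frontier:
        _, node, path, cost = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in closed:
            continue
        closed.add(node)
        for nxt, step in GRAPH.get(node, []):
            if nxt not in closed:
                heapq.heappush(frontier, (H[nxt], nxt, path + [nxt], cost + step))
    return None, None
```

Greedy's answer, 450 miles via Fagaras, is worse than the optimal 418-mile route through Rimnicu and Pitesti, matching the slide's conclusion that greedy search is not optimal.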

Informed Search Strategies A* Search eval-fn: f(n)=g(n)+h(n)

39 A* (A Star) Greedy search minimizes a heuristic h(n), the estimated cost from a node n to the goal state. Although greedy search can considerably cut the search time (it is efficient), it is neither optimal nor complete. Uniform-cost search minimizes the path cost g(n) from the initial state to n; UCS is optimal and complete, but not efficient. New strategy: combine greedy search and UCS to obtain an efficient algorithm that is complete and optimal.

40 A* (A Star) A* uses an evaluation function that combines g(n) and h(n): f(n) = g(n) + h(n), where g(n) is the exact cost to reach node n from the initial state (the cost so far up to node n) and h(n) is an estimate of the remaining cost to reach the goal.

41 A* (A Star) (Figure: along a path through node n, g(n) is the cost from the start to n, h(n) the estimated cost from n to the goal, and f(n) = g(n) + h(n).)

42 A* Search f(n) = g(n) + h(n), where g(n) is the exact cost to reach node n from the initial state. (Map figure: Start = A, Goal = I.) Heuristic values: h(A) = 366, h(B) = 374, h(C) = 329, h(D) = 244, h(E) = 253, h(F) = 178, h(G) = 193, h(H) = 98, h(I) = 0.

43–50 A* Search: Tree Search Expanding Start = A gives B [449], C [447] and E [393]; E (lowest f) is expanded, giving F [417] and G [413]; G is expanded, giving H [415]; H is expanded and reveals the goal I [418]. The cheaper node F [417] must still be expanded, but its goal child I [450] is worse; I [418] is now the cheapest node on the frontier, so the search terminates with the optimal path A–E–G–H–I at cost 418.

A* with h not Admissible What happens when h overestimates the cost to reach the goal state?

52 A* Search: h not admissible! f(n) = g(n) + h(n), where g(n) is the exact cost to reach node n from the initial state, but now h(H) = 138, which overestimates the true cost from H to the goal I. (Map figure: Start = A, Goal = I.) Heuristic values: h(A) = 366, h(B) = 374, h(C) = 329, h(D) = 244, h(E) = 253, h(F) = 178, h(G) = 193, h(H) = 138, h(I) = 0.

53–61 A* Search: Tree Search With the overestimating heuristic, expanding A gives B [449], C [447] and E [393]; E gives F [417] and G [413]; G now gives H [455]; F [417] is expanded next and reaches the goal I [450]. The still-cheaper frontier nodes (C [447], B [449]) are expanded too, yielding only worse nodes such as D [473]; I [450] then becomes the cheapest node, and the search stops with the suboptimal path A–E–F–I at cost 450 instead of the optimal 418. A* not optimal!!!

A* Algorithm A* with systematic checking for repeated states …

63 A* Algorithm
1. Search queue Q is empty.
2. Place the start state s in Q with f value h(s).
3. If Q is empty, return failure.
4. Take the node n from Q with the lowest f value. (Keep Q sorted by f values and pick the first element.)
5. If n is a goal node, stop and return the solution.
6. Generate the successors of node n.
7. For each successor n’ of n:
   a) Compute f(n’) = g(n) + cost(n, n’) + h(n’).
   b) If n’ is new (never generated before), add n’ to Q.
   c) If n’ is already in Q with a higher f value, replace it with the current f(n’) and place it in sorted order in Q.
8. Go back to step 3.
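The steps above can be sketched in Python on the slides' Romania example (an illustrative sketch: function names and the graph encoding are my own, and step 7c's in-queue replacement is done lazily here, leaving superseded heap entries in place and skipping them when popped).

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* following the slide's steps; neighbors(n) yields (node, cost)."""
    open_q = [(h(start), 0, start, [start])]        # step 2: f(s) = h(s)
    best_g = {start: 0}
    while open_q:                                   # step 3
        f, g, n, path = heapq.heappop(open_q)       # step 4: lowest f first
        if n == goal:                               # step 5
            return path, g
        if g > best_g.get(n, float('inf')):
            continue                                # stale entry (lazy step 7c)
        for n2, cost in neighbors(n):               # steps 6-7
            g2 = g + cost
            if g2 < best_g.get(n2, float('inf')):   # steps 7b-7c
                best_g[n2] = g2
                heapq.heappush(open_q, (g2 + h(n2), g2, n2, path + [n2]))
    return None, None                               # step 3: failure

# Fragment of the Romania map (AIMA road distances); SLD values from the slides.
GRAPH = {
    'Arad': [('Zerind', 75), ('Timisoara', 118), ('Sibiu', 140)],
    'Zerind': [('Arad', 75)],
    'Timisoara': [('Arad', 118)],
    'Sibiu': [('Arad', 140), ('Oradea', 151), ('Fagaras', 99), ('Rimnicu', 80)],
    'Oradea': [('Sibiu', 151)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu': [('Sibiu', 80), ('Pitesti', 97), ('Craiova', 146)],
    'Pitesti': [('Rimnicu', 97), ('Craiova', 138), ('Bucharest', 101)],
    'Craiova': [('Rimnicu', 146), ('Pitesti', 138)],
    'Bucharest': [('Fagaras', 211), ('Pitesti', 101)],
}
SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Oradea': 380,
       'Sibiu': 253, 'Fagaras': 178, 'Rimnicu': 193, 'Pitesti': 98,
       'Craiova': 160, 'Bucharest': 0}
```

On this map the search expands nodes in f-order 366, 393, 413, 415, 417, 418 and returns the optimal 418-mile route, matching the slides' animation.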

64 A* Search: Analysis A* is complete unless there are infinitely many nodes with f < f(G). A* is optimal if the heuristic h is admissible. Time complexity depends on the quality of the heuristic but is still exponential in the worst case. As for space complexity, A* keeps all nodes in memory: it has worst-case O(b^d) space complexity, but an iterative-deepening version is possible (IDA*).

A* Algorithm A* with systematic checking for repeated states An Example: Map Searching

66 SLD Heuristic: h() Straight-Line Distances to Bucharest
Arad 366, Bucharest 0, Craiova 160, Dobreta 242, Eforie 161, Fagaras 178, Giurgiu 77, Hirsova 151, Iasi 226, Lugoj 244, Mehadia 241, Neamt 234, Oradea 380, Pitesti 98, Rimnicu 193, Sibiu 253, Timisoara 329, Urziceni 80, Vaslui 199, Zerind 374.
We can use straight-line distances as an admissible heuristic, as they will never overestimate the cost to the goal: there is no shorter route between two cities than the straight-line distance.

67 Distances Between Cities

Greedy Search in Action … Map Searching

69 A greedy search of the Romanian map featured in the previous slide. Throughout, nodes are labelled with f(n) = h(n). We begin with the initial state, Arad: the straight-line distance from Arad to Bucharest (its h value) is 366 miles, so f = 366. Expanding Arad gives Zerind [f = 374], Sibiu [f = 253] and Timisoara [f = 329]. Sibiu has the lowest f (its straight-line distance to the goal is 253 miles) and is expanded, giving Oradea [f = 380], Fagaras [f = 178] and Rimnicu [f = 193]. Fagaras now has the lowest f and is expanded, reaching Bucharest [f = 0].

A* in Action … Map Searching

71 An A* search of the Romanian map featured in the previous slide. Throughout, nodes are labelled with f(n) = g(n) + h(n). We begin with the initial state, Arad: the cost of reaching Arad from Arad (its g value) is 0 miles and the straight-line distance from Arad to Bucharest (its h value) is 366 miles, giving f = g + h = 366. Expanding Arad gives Zerind [f = 449], Sibiu [f = 393] and Timisoara [f = 447]. Sibiu has the lowest f (140 miles to reach it from Arad, plus a straight-line distance of 253 miles to the goal, gives 393) and is expanded, yielding Oradea [f = 671], Fagaras [f = 417] and Rimnicu [f = 413]. Rimnicu is now the cheapest node and is expanded, yielding Craiova [f = 526] and Pitesti [f = 415]. Pitesti has the lowest f (317 miles from Arad plus 98 miles to the goal, giving 415), so it is expanded and reveals Bucharest at a cost of 418.
If any other node is cheaper (and in this case there is one, Fagaras, at 417), we must expand it in case it leads to a better solution to Bucharest than the 418 one already found. In fact the algorithm does not yet “recognise” that it has found Bucharest: it simply keeps expanding the lowest-cost node (based on f) until it finds a goal state that also has the lowest f. So we expand Fagaras, which yields a second Bucharest node, Bucharest (2), with f = 450 (Arad – Sibiu – Fagaras – Bucharest). We now have two Bucharest nodes: one with f = 418 (Arad – Sibiu – Rimnicu – Pitesti – Bucharest) and one with f = 450. The f = 418 node is the cheapest on the frontier and is the goal state, so the search terminates: the solution returned by A*, Arad – Sibiu – Rimnicu – Pitesti – Bucharest, is in fact the optimal solution.

Informed Search Strategies Iterative Deepening A*

73 Iterative Deepening A*: IDA* Use f(N) = g(N) + h(N) with an admissible and consistent h. Each iteration is a depth-first search with a cutoff on the f-value of expanded nodes.

74 Consistent Heuristic The admissible heuristic h is consistent (or satisfies the monotone restriction) if for every node N and every successor N’ of N: h(N) ≤ c(N, N’) + h(N’), where c(N, N’) is the step cost from N to N’ (triangle inequality). A consistent heuristic is admissible.
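The triangle inequality can be checked mechanically over a graph's edge list. A small sketch (the helper name is my own) using the Romania road distances and the SLD values from the earlier table:

```python
def is_consistent(edges, h):
    """Check h(u) <= cost + h(v) on every edge; both directions must be
    listed, since consistency constrains every (node, successor) pair."""
    return all(h[u] <= cost + h[v] for u, v, cost in edges)

# Romania fragment: road distances (AIMA) and straight-line distances
# to Bucharest (from the SLD table in this deck).
ROADS = [('Arad', 'Zerind', 75), ('Arad', 'Timisoara', 118),
         ('Arad', 'Sibiu', 140), ('Sibiu', 'Oradea', 151),
         ('Sibiu', 'Fagaras', 99), ('Sibiu', 'Rimnicu', 80),
         ('Rimnicu', 'Pitesti', 97), ('Rimnicu', 'Craiova', 146),
         ('Pitesti', 'Craiova', 138), ('Pitesti', 'Bucharest', 101),
         ('Fagaras', 'Bucharest', 211)]
EDGES = ROADS + [(v, u, c) for u, v, c in ROADS]   # add reverse directions
SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Oradea': 380,
       'Sibiu': 253, 'Fagaras': 178, 'Rimnicu': 193, 'Pitesti': 98,
       'Craiova': 160, 'Bucharest': 0}
```

The SLD heuristic passes the check; raising h(Pitesti) to 138, as in the earlier inadmissibility example (node H), violates it on the Pitesti–Bucharest edge, since 138 > 101 + 0.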

75 IDA* Algorithm In the first iteration, the f-cost limit (cut-off value) is f(n0) = g(n0) + h(n0) = h(n0), where n0 is the start node. We expand nodes depth-first and backtrack whenever f(n) for an expanded node n exceeds the cut-off value. If this search does not succeed, we determine the lowest f-value among the nodes that were visited but not expanded, use it as the new cut-off value, and do another depth-first search. Repeat this procedure until a goal node is found.
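The iteration scheme just described can be sketched as follows (an illustrative Python sketch on the same Romania fragment used earlier in the deck; function names and the graph encoding are my own):

```python
def ida_star(start, goal, neighbors, h):
    """IDA*: depth-first search with an f-cost cutoff, restarted with the
    lowest f-value that exceeded the previous cutoff."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f                      # pruned: candidate for next cutoff
        if node == goal:
            return (list(path), g)        # solution found
        lowest = float('inf')
        for n2, cost in neighbors(node):
            if n2 in path:                # avoid cycles on the current path
                continue
            path.append(n2)
            result = dfs(n2, g + cost, bound, path)
            path.pop()
            if isinstance(result, tuple):
                return result
            lowest = min(lowest, result)
        return lowest

    bound = h(start)                      # first cutoff: f(n0) = h(n0)
    while bound != float('inf'):
        result = dfs(start, 0, bound, [start])
        if isinstance(result, tuple):
            return result
        bound = result                    # next cutoff: lowest pruned f
    return None

# Fragment of the Romania map (AIMA road distances); SLD values from the slides.
GRAPH = {
    'Arad': [('Zerind', 75), ('Timisoara', 118), ('Sibiu', 140)],
    'Zerind': [('Arad', 75)],
    'Timisoara': [('Arad', 118)],
    'Sibiu': [('Arad', 140), ('Oradea', 151), ('Fagaras', 99), ('Rimnicu', 80)],
    'Oradea': [('Sibiu', 151)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu': [('Sibiu', 80), ('Pitesti', 97), ('Craiova', 146)],
    'Pitesti': [('Rimnicu', 97), ('Craiova', 138), ('Bucharest', 101)],
    'Craiova': [('Rimnicu', 146), ('Pitesti', 138)],
    'Bucharest': [('Fagaras', 211), ('Pitesti', 101)],
}
SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Oradea': 380,
       'Sibiu': 253, 'Fagaras': 178, 'Rimnicu': 193, 'Pitesti': 98,
       'Craiova': 160, 'Bucharest': 0}
```

On this map the cutoffs increase through 366, 393, 413, 415, 417 and 418, at which point the optimal route is found; each iteration stores only the current depth-first path, which is the memory saving IDA* offers over A*.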

76–87 8-Puzzle Example f(N) = g(N) + h(N) with h(N) = number of misplaced tiles. First iteration: cutoff = 4. The depth-first search expands only nodes with f ≤ 4 and prunes those whose f-value (5 or 6) exceeds the cutoff; no goal is found. The lowest f-value among the pruned nodes, 5, becomes the new cutoff, and the depth-first search is repeated, now pruning only nodes with f > 5. (Figure: successive search-tree frames for both iterations.)

88 When to Use Search Techniques When the search space is small, and there are no other available techniques, or it is not worth the effort to develop a more efficient technique. When the search space is large, and there are no other available techniques, and there exist “good” heuristics.

89 Conclusions Frustration with uninformed search led to the idea of using domain-specific knowledge in a search, so that one can intelligently explore only the relevant part of the search space that has a good chance of containing the goal state. These techniques are called informed (heuristic) search strategies. Even though heuristics improve the performance of informed search algorithms, such algorithms remain time-consuming, especially for large problem instances.