
1 Artificial Intelligence Search Problem

2 Search is a problem-solving technique that explores successive stages in the problem-solving process.

3 Search Space We need to define a space to search in to find a problem solution. To successfully design and implement a search algorithm, we must be able to analyze and predict its behavior.

4 State Space Search One tool for analyzing the search space is to represent it as a state space graph; graph theory then lets us analyze both the problem and its solution.

5 Graph Theory A graph consists of a set of nodes and a set of arcs or links connecting pairs of nodes. (Figure: islands Island1 and Island2 and rivers River1 and River2.)

6 Graph structure Nodes = {a, b, c, d, e}; Arcs = {(a,b), (a,d), (b,c), ….} (Figure: the graph drawn with nodes a, b, c, d, e.)
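The node and arc sets above map naturally onto an adjacency list. Below is a minimal Python sketch (not part of the original slides); only the arcs the slide lists explicitly are included, since the slide elides the rest with "….":

```python
# A minimal sketch of the slide's graph as a Python adjacency list.
# Only the arcs listed explicitly on the slide are included; the slide
# elides the remaining arcs, so they are omitted here as well.
graph = {
    "a": ["b", "d"],
    "b": ["c"],
    "c": [],
    "d": [],
    "e": [],
}

def neighbors(node):
    """Return the nodes reachable from `node` by a single arc."""
    return graph.get(node, [])

print(neighbors("a"))  # ['b', 'd']
```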

7 Tree A tree is a graph in which two nodes have at most one path between them. The tree has a root. (Figure: a rooted tree with root a and nodes b through j.)

8 Space representation In the space representation of a problem, the nodes of a graph correspond to partial problem solution states and arcs correspond to steps in a problem-solving process.

9 Example Consider the game of Tic-Tac-Toe. (Figure: a numbered 3×3 board.)

10 (Figure: part of the state space — the initial board and the board configurations reachable from it by successive moves.)

11 A simple example: traveling on a graph. (Figure: a graph with nodes A–F and edge costs; A is the start state and F is the goal state.)

12 Search tree (partial expansion): state = A, cost = 0; state = B, cost = 3; state = D, cost = 3; state = C, cost = 5; state = F, cost = 12 (goal state!); state = A, cost = 7. Note: search tree nodes and states are not the same thing!
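Because the same state can appear in several tree nodes (e.g., A at cost 0 and A again at cost 7), a search implementation usually wraps states in node objects. A minimal illustrative sketch, not from the original slides:

```python
# A minimal sketch of a search-tree node, illustrating why tree nodes and
# problem states are not the same thing: several nodes may wrap the same
# state, reached along different paths with different accumulated costs.
class Node:
    def __init__(self, state, parent=None, cost=0):
        self.state = state    # the problem state this node represents
        self.parent = parent  # the node this one was expanded from
        self.cost = cost      # accumulated path cost from the root

    def path(self):
        """Reconstruct the path of states from the root to this node."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

# Two distinct tree nodes wrapping the same state A, as in the slide:
root = Node("A", cost=0)
b = Node("B", parent=root, cost=3)
a_again = Node("A", parent=b, cost=7)
print(a_again.path(), a_again.cost)  # ['A', 'B', 'A'] 7
```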

13 Full search tree (continuing the expansion): state = E, cost = 7; state = F, cost = 11 (goal state!); state = B, cost = 10; state = D, cost = 10; …

14 Problem types Deterministic, fully observable → single-state problem – the solution is a sequence of states. Non-observable → sensorless problem – the problem solver may have no idea where it is; the solution is a sequence. Nondeterministic and/or partially observable. Unknown state space.

15 Algorithm types There are two kinds of search algorithms – Complete: guaranteed to find a solution or prove there is none – Incomplete: may not find a solution even when it exists; often more efficient (or there would be no point).

16 Comparing Searching Algorithms: Will it find a solution? The best one? Def.: A search algorithm is complete if whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time. Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.

17 Comparing Searching Algorithms: Complexity Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of the maximum path length m and the maximum branching factor b. Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), also expressed in terms of m and b. The branching factor b of a node is the number of arcs going out of the node.

18 Example: the 8-puzzle. Given: a board situation for the 8-puzzle (Figure: a scrambled 3×3 board). Problem: find a sequence of moves that transforms this board situation into a desired goal situation (Figure: the goal board).

19 State Space representation In the state space representation of a problem, the nodes of a graph correspond to partial problem solution states and arcs correspond to steps (actions) in a problem-solving process.

20 Key concepts in search A set of states that we can be in – including an initial state – and goal states (equivalently, a goal test). For every state, a set of actions that we can take – each action results in a new state – given a state, the successor function produces all states that can be reached from it. A cost function that determines the cost of each action (or path = sequence of actions). Solution: a path from the initial state to a goal state – Optimal solution: a solution with minimal cost.
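These key concepts are often bundled into a single problem interface that concrete problems (8-puzzle, route finding, …) fill in. A minimal, hypothetical Python sketch of such an interface (the names SearchProblem, successors, etc. are illustrative, not from the slides):

```python
# A minimal sketch of the "key concepts" slide as a generic problem interface.
class SearchProblem:
    def initial_state(self):
        """The state the search starts from."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test: does this state satisfy the goal?"""
        raise NotImplementedError

    def successors(self, state):
        """Yield (action, next_state, step_cost) for every applicable action."""
        raise NotImplementedError

# A solution is then a path (sequence of actions) from the initial state to a
# goal state; an optimal solution is one whose summed step costs are minimal.
```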

21 (Figure: a search tree over partial tours — the root (NewYork) with successors (NewYork, Boston), (NewYork, Miami), (NewYork, Dallas), (NewYork, Frisco), and below them tours such as (NewYork, Boston, Miami) and (NewYork, Frisco, Miami); each node is labelled with its accumulated cost, e.g. 0, 250, 1200, 1500, 2900 at the first level.) Keep track of accumulated costs in each state if you want to be sure to get the best path.

22 Example: Route Finding Initial state – the city the journey starts in. Operators – driving from city to city. Goal test – is the current location the destination city? (Figure: map of Liverpool, London, Nottingham, Leeds, Birmingham, Manchester.)

23 State space representation (salesman) State: the list of cities that have already been visited, e.g. (NewYork, Boston). Initial state: e.g. (NewYork). Rules: – add one city to the list that is not yet a member – add the first city if the list already has 5 members. Goal criterion: the first and last city are equal.
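A minimal Python sketch of this representation, assuming the five cities used in the surrounding slides; the function and variable names are illustrative:

```python
# A minimal sketch of the salesman state space from this slide.
# A state is the tuple of cities visited so far.
CITIES = ["NewYork", "Boston", "Miami", "Dallas", "Frisco"]

def successors(state):
    """Apply the slide's two rules to generate the next states."""
    if len(state) >= len(CITIES):
        # all 5 cities are already members: add the first city again
        return [state + (state[0],)]
    # otherwise: add any one city that is not yet a member of the list
    return [state + (c,) for c in CITIES if c not in state]

def is_goal(state):
    """Goal criterion: first and last city are equal (a complete tour)."""
    return len(state) == len(CITIES) + 1 and state[0] == state[-1]

start = ("NewYork",)
print(successors(start))  # the four partial tours of length 2
```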

24 Example: The 8-puzzle States? locations of the tiles. Actions? move the blank left, right, up, down. Goal? = goal state (given). Path cost? 1 per move.
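A minimal sketch of this formulation in Python; the goal layout is the board shown on slide 9, and the names used here are illustrative:

```python
# A minimal sketch of the 8-puzzle formulation: a state is a 3x3 tuple of
# tiles with 0 for the blank, actions move the blank, and every move costs 1
# (the cost is counted by the search procedure itself).
GOAL = ((1, 2, 3), (8, 0, 4), (7, 6, 5))  # the goal layout shown on slide 9

def successors(state):
    """Yield (action, next_state) for each legal blank move."""
    (r, c) = next((i, j) for i in range(3) for j in range(3) if state[i][j] == 0)
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    for action, (dr, dc) in moves.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            grid = [list(row) for row in state]
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            yield action, tuple(tuple(row) for row in grid)

def is_goal(state):
    return state == GOAL

print(list(successors(GOAL)))  # the four boards reachable from the goal
```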

25 Example: robotic assembly States? real-valued coordinates of the robot joint angles and the parts of the object to be assembled. Actions? continuous motions of the robot joints. Goal test? complete assembly. Path cost? time to execute.

26 (Figure: part of the 8-puzzle search tree — the initial board and the boards reachable from it by moving the blank.)

27 Example: Chess Problem: develop a program that plays chess. 1. A way to represent board situations. Ex.: a list: ((king_black, 8, C), (knight_black, 7, B), (pawn_black, 7, G), (pawn_black, 5, F), (pawn_white, 2, H), (king_white, 1, E)). (Figure: the corresponding 8×8 board, files A–H, ranks 1–8.)

28 Chess The search tree grows combinatorially: roughly ~15 positions after move 1, ~15² after move 2, ~15³ after move 3, … We need very efficient search techniques to find good paths in such combinatorial trees.

29 Independence of states: Ex.: the blocks world problem. Initially: C is on A and B is on the table. Rules: any free block may be moved onto another free block or onto the table. Goal: A is on B and B is on C. The goal splits into the subgoals "A on B" AND "B on C" — an AND-OR tree? (Figure: blocks A, B, C and the decomposition of the goal.)

30 Search in State Spaces Effects of moving a block (illustration and list-structure iconic model notation)

31 Avoiding Repeated States In increasing order of effectiveness in reducing the size of the state space, and with increasing computational cost: 1. Do not return to the state you just came from. 2. Do not create paths with cycles in them. 3. Do not generate any state that was ever created before. The net effect depends on the frequency of "loops" in the state space.
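A minimal sketch of the three pruning levels as a single check; the function is illustrative, assuming the search keeps the current path and (for level 3) a set of all states generated so far:

```python
# A minimal sketch of the three pruning levels on this slide, applied when
# deciding whether a newly generated successor should be kept.
def allowed(successor, path, explored, level):
    """path = list of states ending in the current state; explored = all
    states ever generated (only needed for level 3)."""
    if level == 1:
        # 1. do not return to the state you just came from (the parent)
        return len(path) < 2 or successor != path[-2]
    if level == 2:
        # 2. do not create paths with cycles in them
        return successor not in path
    if level == 3:
        # 3. do not generate any state that was ever created before
        return successor not in explored
    return True
```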

32 Forward versus backward reasoning: Forward reasoning (or Data-driven ): from initial states to goal states.

33 Forward versus backward reasoning: Backward reasoning (or backward chaining / goal-driven ): from goal states to initial states.

34 Data-Driven Search It is also called forward chaining. The problem solver begins with the given facts and a set of legal moves or rules for changing state in order to arrive at the goal.

35 Goal-Driven Search Take the goal that we want to solve and see what rules or legal moves could be used to generate this goal; in other words, we move backward.

36 Search Implementation In both directions of search, we must find the path from the start state to a goal. We use goal-driven search if – the goal is given in the problem – there exists a large number of rules – the problem data are not given.

37 Search Implementation Data-driven search is used if – all or most of the data are given – there are a large number of potential goals – it is difficult to form a goal.

38 Criteria: Sometimes there is no way to start from the goal states – because there are too many of them (Ex.: chess) – because you can't (easily) formulate the rules in both directions. Sometimes the two directions are equivalent: for the 8-puzzle, even the same rules apply in both directions! (Figure: a scrambled board and the goal board.)

39 General Search Considerations Given an initial state, operators, and a goal test – can you give the agent additional information? Uninformed search strategies – have no additional information. Informed search strategies – use problem-specific information – a heuristic measure (a guess at how far we are from the goal).

40 Classical Search Strategies Breadth-first search. Depth-first search. Bidirectional search. Depth-bounded depth-first search – like depth-first but with a limit on the depth of search in the tree. Iterative deepening search – use depth-bounded search but iteratively increase the limit.

41 Breadth-first search: move downwards, level by level, until the goal is reached. It explores the space in a level-by-level fashion. (Figure: a search tree rooted at S with nodes A–G expanded level by level.)

42 Breadth-first search BFS is complete: if a solution exists, one will be found. Expand the shallowest unexpanded node. Implementation: – the fringe is a FIFO queue, i.e., new successors go at the end.

43–45 Breadth-first search (animation frames): at each step the shallowest unexpanded node is expanded and its successors are added to the end of the FIFO fringe.
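Putting the preceding slides together, a minimal breadth-first search sketch with a FIFO fringe (illustrative, assuming a successors function like the earlier sketches):

```python
# A minimal sketch of breadth-first search with a FIFO fringe, matching the
# slides: new successors go at the end of the queue, so the shallowest
# unexpanded node is always expanded next.
from collections import deque

def breadth_first_search(start, is_goal, successors):
    fringe = deque([[start]])            # FIFO queue of paths
    while fringe:
        path = fringe.popleft()          # shallowest unexpanded node
        state = path[-1]
        if is_goal(state):
            return path                  # first solution found = fewest arcs
        for nxt in successors(state):
            fringe.append(path + [nxt])  # successors go at the END
    return None                          # no solution exists
```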

46 Analysis of BFS Def.: A search algorithm is complete if whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time. Is BFS complete? Yes. If a solution exists at level l, the path to it will be explored before any other path of length l + 1, so it is impossible to fall into an infinite cycle. (See this in AISpace by loading "Cyclic Graph Examples" or by adding a cycle to "Simple Tree".)

47 Analysis of BFS Def.: A search algorithm is optimal if, when it finds a solution, it is the best one. Is BFS optimal? Yes. E.g., with two goal nodes (red boxes), any goal at level l (e.g., the red box N7) will be reached before goals at deeper levels.

48 Analysis of BFS What is BFS's time complexity, in terms of m and b? Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of the maximum path length m and the maximum forward branching factor b. BFS's time complexity is O(b^m): like DFS, in the worst case BFS must examine every node in the tree (e.g., a single goal node — the red box).

49 Analysis of BFS Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), expressed in terms of the maximum path length m and the maximum forward branching factor b. What is BFS's space complexity, in terms of m and b? It is O(b^m): BFS must keep paths to all the nodes at level m.
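As a concrete illustration of these bounds (with made-up values b = 10 and m = 6; the DFS space formula comes from slide 69):

```python
# Worked example of the complexity bounds, with illustrative values.
b, m = 10, 6
bfs_time_and_space = b ** m   # both O(b^m): 1,000,000 nodes in the worst case
dfs_space = b * m             # DFS space is only O(b*m): 60 nodes
print(bfs_time_and_space, dfs_space)
```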

50 Using Breadth-first Search When is BFS appropriate? – space is not a problem – it is necessary to find the solution with the fewest arcs – some solutions are shallow, even though there may be infinite paths. When is BFS inappropriate? – space is limited – all solutions tend to be located deep in the tree – the branching factor is very large.

51 Depth-First Order When a state is examined, all of its children and their descendants are examined before any of its siblings. Depth-first order goes deeper whenever this is possible. Not complete (it might cycle through non-goal states).

52 Depth-first search = chronological backtracking Select a child – convention: left-to-right. Repeatedly go to the next child, as long as possible. Return to left-over alternatives (higher up) only when needed. (Figure: a search tree rooted at S with nodes A–G explored depth-first.)

53–64 Depth-first search (animation frames): at each step the deepest unexpanded node is expanded and its successors are put at the front of the LIFO fringe.
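A minimal depth-first search sketch with a LIFO fringe (illustrative; the in-path cycle check corresponds to rule 2 of slide 31, and without it DFS can loop forever on cyclic graphs, as the next slides discuss):

```python
# A minimal sketch of depth-first search with a LIFO fringe, matching the
# slides: successors are put at the front, so the deepest unexpanded node is
# always expanded next.
def depth_first_search(start, is_goal, successors):
    fringe = [[start]]                   # LIFO stack of paths
    while fringe:
        path = fringe.pop()              # deepest unexpanded node
        state = path[-1]
        if is_goal(state):
            return path
        # push successors so the leftmost child is expanded first
        for nxt in reversed(list(successors(state))):
            if nxt not in path:          # optional cycle check (slide 31, rule 2)
                fringe.append(path + [nxt])
    return None
```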

65 Analysis of DFS Is DFS complete? Is DFS optimal? What is the time complexity, if the maximum path length is m and the maximum branching factor is b? What is the space complexity? We will look at the answers in AISpace (but see the next few slides for a summary).

66 Analysis of DFS Def.: A search algorithm is complete if whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time. Is DFS complete? No. If there are cycles in the graph, DFS may get "stuck" in one of them. (See this in AISpace by loading "Cyclic Graph Examples" or by adding a cycle to "Simple Tree": click on the "Create" tab, create a new edge from N7 to N1, go back to "Solve" and see what happens.)

67 Analysis of DFS Def.: A search algorithm is optimal if, when it finds a solution, it is the best one (e.g., the shortest). Is DFS optimal? No. It can "stumble" on longer solution paths before it gets to shorter ones. (E.g., goal nodes: red boxes. See this in AISpace by loading "Extended Tree Graph" and setting N6 as a goal: click on the "Create" tab, right-click on N6 and select "set as a goal node".)

68 Analysis of DFS What is DFS's time complexity, in terms of m and b? Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of the maximum path length m and the maximum forward branching factor b. DFS's time complexity is O(b^m): in the worst case, DFS must examine every node in the tree (e.g., a single goal node — the red box).

69 Analysis of DFS Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), expressed in terms of the maximum path length m and the maximum forward branching factor b. What is DFS's space complexity, in terms of m and b? It is O(bm): for every node in the path currently being explored, DFS maintains a path to its unexplored siblings in the search tree – the alternative paths that DFS may still need to explore. The longest possible path is m, with a maximum of b − 1 alternative paths per node. (See how this works in AISpace.)

70 Analysis of DFS (cont.) DFS is appropriate when – space is restricted – there are many solutions, with long path lengths. It is a poor method when – there are cycles in the graph – there are sparse solutions at shallow depth – there is heuristic knowledge indicating when one path is better than another.

71 The example node set: a tree of nodes A through Z, with initial state A and goal state L. (Interactive slide: a BFS of this node set is stepped through on the next slide.)

72 BFS walkthrough on the example node set: begin with the initial state, the node labeled A; expand it and add each revealed node to the END of the queue (queue: B, C, D, E, F); remove A and move to the first node in the queue; expand B, add its children to the end, then backtrack and expand C, D, E, and F in turn, each time appending the revealed nodes to the end of the queue; continue at the next level with G, H, I, J, K until node L is located (after 11 node expansions, at level 2) and the search returns a solution. This illustrates the breadth-first search pattern.

73 Aside: Internet Search Typically, human search is "incomplete". E.g., finding information on the internet before Google etc.: – look at a few web pages – if no success, then give up.

74 Example Determine whether data-driven or goal-driven and depth-first or breadth-first search would be preferable for solving each of the following: – diagnosing mechanical problems in an automobile – you have met a person who claims to be your distant cousin, with a common ancestor named John, and you would like to verify her claim – a theorem prover for plane geometry

75 Example (cont.) – a program for examining sonar readings and interpreting them – an expert system that will help a human classify plants by species, genus, etc.

76 Any path, versus shortest path, versus best path: Ex.: Traveling salesperson problem: find a sequence of cities A B C D E A such that the total distance is MINIMAL. (Figure: map of Boston, Miami, NewYork, SanFrancisco, Dallas with pairwise distances such as 250, 1200, 1450, 1500, 1600, 1700, 2900, 3000, 3300.)

77 Bi-directional search IF you are able to EXPLICITLY describe the GOAL state, AND you have BOTH rules for FORWARD reasoning AND rules for BACKWARD reasoning: compute the tree both from the start node and from a goal node, until the two meet. (Figure: two search frontiers growing from Start and Goal.)
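A minimal bidirectional breadth-first sketch (illustrative; `forward` and `backward` stand for the two rule sets the slide requires, each returning a state's neighbours):

```python
# A minimal sketch of bidirectional search: grow one frontier from the start
# and one from the goal, stopping as soon as they meet.
from collections import deque

def bidirectional_search(start, goal, forward, backward):
    if start == goal:
        return [start]
    f_front, b_front = deque([start]), deque([goal])
    f_seen, b_seen = {start: None}, {goal: None}   # state -> parent on that side

    def expand(front, seen, other_seen, moves):
        state = front.popleft()
        for nxt in moves(state):
            if nxt not in seen:
                seen[nxt] = state
                if nxt in other_seen:
                    return nxt                     # the two frontiers meet here
                front.append(nxt)
        return None

    while f_front and b_front:
        meet = expand(f_front, f_seen, b_seen, forward)
        if meet is None:
            meet = expand(b_front, b_seen, f_seen, backward)
        if meet is not None:
            # stitch the two half-paths together at the meeting state
            left, s = [], meet
            while s is not None:
                left.append(s)
                s = f_seen[s]
            left.reverse()
            right, s = [], b_seen[meet]
            while s is not None:
                right.append(s)
                s = b_seen[s]
            return left + right
    return None
```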

78 Example Search Problem A genetics professor – wants to name her new baby boy – using only the letters D, N & A. Search through possible strings (states) – D, DN, DNNA, NA, AND, DNAN, etc. – 3 operators: add D, N, or A onto the end of the string – the initial state is an empty string. Goal test – look up the state in a book of boys' names, e.g. DAN.
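A minimal sketch of this toy problem as a breadth-first search; the "book of names" here is a tiny illustrative set, and the depth bound is added only to keep the failure case finite:

```python
# A minimal sketch of the baby-name problem: states are strings over D, N, A,
# the three operators append one letter, and the goal test looks the string
# up in a (tiny, purely illustrative) set of boys' names.
from collections import deque

BOOK_OF_NAMES = {"DAN"}          # illustrative stand-in for the book of names

def name_search(max_length=3):   # illustrative bound to keep failures finite
    fringe = deque([""])         # initial state: the empty string
    while fringe:
        state = fringe.popleft()
        if state in BOOK_OF_NAMES:
            return state
        if len(state) < max_length:
            for letter in "DNA":
                fringe.append(state + letter)
    return None

print(name_search())             # DAN
```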

79 g(n) = the cost of each move, given as the road distance between towns. h(n) = the straight-line distance between a town and town M: A = 45, B = 20, C = 34, D = 25, E = 32, F = 23, G = 15, H = 10, I = 12, J = 5, K = 40, L = 20, M = 0. (Figure: map of towns A–M with road distances on the edges.)
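A minimal A* sketch driven by f(n) = g(n) + h(n), using the straight-line distances above as h; note that the road graph and its edge costs below are hypothetical placeholders, since the map's actual distances did not survive extraction:

```python
# A minimal A* sketch: expand the frontier node with the lowest f = g + h.
import heapq

H = {"A": 45, "B": 20, "C": 34, "D": 25, "E": 32, "F": 23, "G": 15,
     "H": 10, "I": 12, "J": 5, "K": 40, "L": 20, "M": 0}

GRAPH = {                          # hypothetical edges/costs, for illustration only
    "A": {"B": 10, "C": 12},
    "B": {"G": 15},
    "C": {"F": 23},
    "F": {"H": 10},
    "G": {"H": 5},
    "H": {"J": 5},
    "J": {"M": 5},
}

def a_star(start, goal):
    frontier = [(H[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, step in GRAPH.get(state, {}).items():
            new_g = g + step
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + H[nxt], new_g, nxt, path + [nxt]))
    return None

print(a_star("A", "M"))            # (['A', 'B', 'G', 'H', 'J', 'M'], 40)
```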

80

81 Consider the following search problem. Assume a state is represented as an integer, that the initial state is the number 1, and that the two successors of a state n are the states 2n and 2n+1. For example, the successors of 1 are 2 and 3, the successors of 2 are 4 and 5, the successors of 3 are 6 and 7, etc. Assume the goal state is the number 12. Consider the following heuristics for evaluating the state n, where the goal state is g: h1(n) = |n − g|, and h2(n) = (g − n) if n ≤ g, h2(n) = ∞ if n > g. Show the search trees generated for each of the following strategies for the initial state 1 and the goal state 12, numbering the nodes in the order expanded: a) depth-first search, b) breadth-first search, c) best-first with heuristic h1, d) A* with heuristic (h1 + h2). If any of these strategies gets lost and never finds the goal, show the first few steps and say "FAILS".

