Problem Solving as Search
Foundations of Artificial Intelligence
Search and Knowledge Representation
Goal-based and utility-based agents require a representation of:
- states within the environment
- actions and their effects (the effect of an action is a transition from the current state to another state)
- goals
- utilities
Problems can often be formulated as search problems: to satisfy a goal, the agent must find a sequence of actions (a path in the state-space graph) from the starting state to a goal state. To do this efficiently, the agent must be able to reason with its knowledge about the world and the problem domain:
- which path to follow (which action to choose) next
- how to determine that a goal state has been reached, or how to decide that a satisfactory state has been reached
Introduction to Search
Search is one of the most powerful approaches to problem solving in AI. It is a universal problem-solving mechanism that:
- systematically explores the alternatives
- finds the sequence of steps toward a solution
Problem Space Hypothesis (Allen Newell, SOAR: An Architecture for General Intelligence): all goal-oriented symbolic activity occurs in a problem space. Search in a problem space is claimed to be a completely general model of intelligence.
Problem-Solving Agents
The agent follows a simple "formulate, search, execute" design:

function Simple-Problem-Solving-Agent(percept) returns an action
  inputs: p, a percept
  static: s, an action sequence, initially empty
          state, a description of the current world state
          g, a goal, initially null
          problem, a problem formulation
  state <- Update-State(state, p)
  if s is empty then
    g <- Formulate-Goal(state)
    problem <- Formulate-Problem(state, g)
    s <- Search(problem)
  end if
  action <- First(s)
  s <- Remainder(s)
  return action

Assumptions on the environment:
- Static: formulating and solving the problem does not take any changes into account
- Discrete: alternative courses of action can be enumerated
- Deterministic: outcomes of actions depend only on previous actions
- Observable: the initial state is completely known
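The pseudocode above can be read as the following minimal Python sketch; the helper callables update_state, formulate_goal, formulate_problem, and search are assumptions standing in for the slide's abstract functions.

class SimpleProblemSolvingAgent:
    # A "formulate, search, execute" agent: keeps a plan and replans when it runs out.
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search
        self.state = None   # description of the current world state
        self.plan = []      # s: the action sequence, initially empty

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.plan:                                   # no actions left: formulate and search
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.plan = self.search(problem) or []
        return self.plan.pop(0) if self.plan else None      # execute the next action, if any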
Stating a Problem as a Search Problem
- State space S
- Successor function: for each x in S, SUCCESSORS(x)
- Cost of a move
- Initial state s0
- Goal test: for state x in S, GOAL?(x) = T or F
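These ingredients can be collected in a small problem interface. The following Python sketch is illustrative (the class and method names are assumptions, not from the slides); the later code sketches reuse it.

class SearchProblem:
    # Abstract search problem: initial state, successor function, move costs, goal test.
    def __init__(self, initial_state):
        self.initial_state = initial_state        # s0

    def successors(self, state):
        # Yield (action, next_state) pairs reachable from state.
        raise NotImplementedError

    def cost(self, state, action, next_state):
        # Cost of one move; 1 per step by default.
        return 1

    def is_goal(self, state):
        # GOAL?(x): True or False.
        raise NotImplementedError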
Example (Romania)
Initial situation: on holiday in Romania, currently in Arad; the flight leaves tomorrow from Bucharest.
Formulate goal: be in Bucharest.
Formulate problem: states are the various cities; operators drive between cities.
Find solution: a sequence of cities that starts at the starting state and ends in the goal state.
Example (Romania): road map of Romania (figure).
Example: Vacuum World
The world consists of two rooms; each room may contain dirt, and the agent may be in either room.
- Initial state: both rooms are dirty
- Goal: both rooms are clean
- Problem: states: each of the two rooms may or may not contain dirt, and the agent is in one of them (8 possible states); actions: go from room to room, vacuum up the dirt
- Solution: a sequence of actions leading to clean rooms
Problem Types
- Deterministic, fully observable ==> single-state problem: the agent has enough information to know exactly which state it is in; the outcomes of actions are known.
- Deterministic, partially observable ==> multiple-state problem ("sensorless problem"): limited or no access to the world state; the agent may have no idea which state it is in; requires the agent to reason about the sets of states it can reach.
- Nondeterministic, partially observable ==> contingency problem: must use sensors during execution; percepts provide new information about the current state; no fixed action sequence guarantees a solution (must compute the whole tree); often interleaves search and execution.
- Unknown state space ==> exploration problem ("online"): the only hope is to use learning (reinforcement learning) to determine the potential results of actions and information about states.
Example: Vacuum World
- Single-state: start in #5. Solutions?
- Multiple-state: start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solutions?
- Contingency: start in #5; e.g., Suck can dirty a clean carpet; local sensing: dirt and location only. Solutions?
(Goal states are shown in the accompanying figure.)
Single-State Problem Formulation
A problem is defined by four items:
- initial state, e.g., "at Arad"
- operators (or successor function S(x)), e.g., Arad ==> Zerind, Arad ==> Sibiu
- goal test, which can be explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x)
- path cost (additive), e.g., sum of distances, number of operators executed, etc.
A solution is a sequence of operators leading from the initial state to a goal state.
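Concretely, the Romania route-finding problem can be encoded as a graph of cities with road distances. The sketch below uses the usual textbook distances and the SearchProblem interface assumed earlier, so treat it as illustrative.

ROMANIA_ROADS = {   # symmetric road distances (km) from the standard map
    ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Zerind", "Oradea"): 71, ("Oradea", "Sibiu"): 151,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Timisoara", "Lugoj"): 111, ("Lugoj", "Mehadia"): 70,
    ("Mehadia", "Dobreta"): 75, ("Dobreta", "Craiova"): 120,
    ("Rimnicu Vilcea", "Pitesti"): 97, ("Rimnicu Vilcea", "Craiova"): 146,
    ("Craiova", "Pitesti"): 138, ("Pitesti", "Bucharest"): 101,
    ("Fagaras", "Bucharest"): 211,
}

class RomaniaProblem(SearchProblem):
    def __init__(self, start="Arad", goal="Bucharest"):
        super().__init__(start)
        self.goal = goal

    def successors(self, city):
        # Neighbouring cities reachable by one drive.
        for (a, b), d in ROMANIA_ROADS.items():
            if a == city:
                yield ("go " + b, b)
            elif b == city:
                yield ("go " + a, a)

    def cost(self, city, action, next_city):
        return ROMANIA_ROADS.get((city, next_city), ROMANIA_ROADS.get((next_city, city)))

    def is_goal(self, city):
        return city == self.goal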
Selecting a State Space
The real world is absurdly complex, so the state space must be abstracted for problem solving.
- (Abstract) state = set of real states
- (Abstract) operator = complex combination of real actions; e.g., "Arad ==> Zerind" represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
- (Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem!
Example: Vacuum World
- States? Integer dirt and robot locations (ignore dirt amounts)
- Operators? Left, Right, Suck
- Goal test? No dirt
- Path cost? One per move
What if the agent had no sensors? That gives the multiple-state problem. (Goal states are shown in the figure.)
Example: The 8-Puzzle
- States? Integer locations of tiles
- Operators? Move blank left, right, up, down
- Goal test? = goal state (given)
- Path cost? One per move
Note: finding an optimal solution of the n-puzzle problem is NP-hard.
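A possible successor function for this formulation, representing a state as a tuple of 9 integers in row-major order with 0 for the blank; the concrete representation is an assumption, since the slide only says "integer locations of tiles".

def eight_puzzle_successors(state):
    # state: tuple of 9 ints, row-major, 0 = blank; yields (move, next_state) pairs.
    blank = state.index(0)
    row, col = divmod(blank, 3)
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    for name, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:            # the blank must stay on the board
            swap = r * 3 + c
            nxt = list(state)
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            yield (name, tuple(nxt))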
8-Puzzle: Successor Function (figure: moving the blank left, right, up, or down yields the successor states of a configuration)
State-Space Graph
The state-space graph is a representation of all possible legal configurations of the problem resulting from applications of legal operators:
- each node in the graph represents a possible legal state
- each directed edge represents a possible legal move applied to a state (resulting in a new state of the problem)
States: the representation of states should provide all information necessary to describe the relevant features of a problem state.
Operators: operators may be simple functions representing legal actions, or rules specifying an action given that a condition (a set of constraints) on the current state is satisfied. In the latter case the rules are sometimes referred to as "production rules" and the system as a production system; this is the case with simple reflex agents.
Vacuum World State-Space Graph
The state-space graph does not include initial or goal states.
Search problem: given specific initial and goal states, find a path in the graph from an initial state to a goal state.
An instance of a search problem can be represented as a "search tree" whose root node is the initial state.
Solution to the Search Problem
- A solution is a path connecting the initial node to a goal node (any one).
- The cost of a path is the sum of the edge costs along this path.
- An optimal solution is a solution path of minimum cost.
- There might be no solution!
State Spaces Can Be Very Large
- 8-puzzle: 9! = 362,880 states
- 15-puzzle: 16! ≈ 2 x 10^13 states
- 24-puzzle: 25! ≈ 1.6 x 10^25 states
Searching the State Space
Often it is not feasible to build a complete representation of the state graph. A problem solver must construct a solution by exploring a small portion of the graph. For a specific search problem (with a given initial and goal state) we can view the relevant portion as a search tree.
Searching the State Space (figure sequence: the search tree is built up step by step as nodes of the state graph are expanded)
Portion of the Search Space for an Instance of the 8-Puzzle Problem (figure)
Simple Problem-Solving Agent Algorithm
1. s0 <- sense/read initial state
2. GOAL? <- select/read goal test
3. Succ <- select/read successor function
4. solution <- search(s0, GOAL?, Succ)
5. perform(solution)
Example: Blocks World Problem
The world consists of blocks A, B, C, and the Floor. A block that is "clear" can be moved on top of another clear block or onto the Floor.
State representation uses the predicate on(x,y):
- on(x,y) means block x is on top of block y
- on(x, Floor) means block x is on the Floor
- on(_, x) means block x has nothing on it (it is "clear")
Operators can be specified as a set of production rules:
1. on(_, x) ==> on(x, Floor)
2. on(_, x) and on(_, y) ==> on(x, y)
Initial state: some initial configuration, e.g., on(C, Floor) and on(A, C) and on(B, Floor) and on(_, A) and on(_, B)
Goal state: some specified configuration, e.g., on(B,C) and on(A,B)
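One way to realize the two production rules in code is to represent a state as a mapping from each block to the thing it sits on. The representation and the function below are illustrative assumptions.

def blocks_successors(state):
    # state: dict block -> support ("Floor" or another block).
    # Rule 1: move a clear block to the Floor; rule 2: move a clear block onto another clear block.
    clear = [b for b in state if b not in state.values()]     # blocks with nothing on them: on(_, b)
    for x in clear:
        if state[x] != "Floor":                               # rule 1
            s = dict(state); s[x] = "Floor"
            yield (("move", x, "Floor"), s)
        for y in clear:                                       # rule 2
            if y != x and state[x] != y:
                s = dict(state); s[x] = y
                yield (("move", x, y), s)

# The slide's initial configuration: A on C, with B and C on the Floor.
initial = {"C": "Floor", "A": "C", "B": "Floor"}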
Blocks World: State-Space Graph (figure: the possible configurations of blocks A, B, C connected by edges, each labeled 1 or 2 according to which production rule, on(_, x) ==> on(x, Floor) or on(_, x) and on(_, y) ==> on(x, y), produced the move)
Blocks World: A Search Problem
Search tree for the problem (figure). Notes:
1. Repeated states have been eliminated in the diagram.
2. The highlighted path represents (in this case) the only solution for this instance of the problem.
3. The solution is a sequence of legal actions: move(A, Floor), move(B, C), move(A, B).
Some Other Problems
8-Queens Problem
Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal. (Figure: a solution and a non-solution.)
Formulation #1
- States: all arrangements of 0, 1, 2, ..., or 8 queens on the board
- Initial state: no queens on the board
- Successor function: each successor is obtained by adding one queen in an empty square
- Arc cost: irrelevant
- Goal test: 8 queens are on the board, with no two of them attacking each other
64 x 63 x ... x 57 ≈ 1.8 x 10^14 states
Formulation #2
- States: all arrangements of k = 0, 1, 2, ..., or 8 queens in the k leftmost columns with no two queens attacking each other
- Initial state: no queens on the board
- Successor function: each successor is obtained by adding one queen in the leftmost empty column, in any square that is not attacked by a queen already on the board
- Arc cost: irrelevant
- Goal test: 8 queens are on the board
2,057 states
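A small sketch of formulation #2: a state is a tuple giving the queen's row for each filled leftmost column, and successors add a non-attacked queen in the next column. The representation is an assumption; enumerating all states this way should reproduce the slide's count of 2,057 (including the empty board).

def queens_successors(state):
    # state: tuple of rows, one entry per already-filled leftmost column.
    col = len(state)
    if col == 8:
        return
    for row in range(8):
        if all(row != r and abs(row - r) != col - c          # no shared row or diagonal
               for c, r in enumerate(state)):
            yield state + (row,)

def count_states(state=()):
    # Count the current state plus everything reachable from it.
    return 1 + sum(count_states(s) for s in queens_successors(state))

print(count_states())   # expected: 2057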
Path Planning
What is the state space? (figure)
Formulation #1
- Cost of one horizontal/vertical step = 1
- Cost of one diagonal step = √2
Optimal Solution
This path is the shortest in the discretized state space, but not in the original continuous space. (figure)
Formulation #2
Cost of one step: length of the segment; states and moves are given by the visibility graph (figure).
Solution Path
The shortest path in this state space is also the shortest in the original continuous space. (figure)
Search Strategies
Uninformed (blind, exhaustive) strategies use only the information available in the problem definition:
- Breadth-first search
- Depth-first search
- Uniform-cost search
Heuristic strategies use "rules of thumb" based on knowledge of the domain to pick between alternatives at each step.
Graph Searching Applet: http://www.cs.ubc.ca/labs/lci/CIspace/Version4/search/index.html
Implementation of Search Algorithms

function General-Search(problem, Queuing-Fn) returns a solution, or failure
  nodes <- Make-Queue(Make-Node(Initial-State[problem]))
  loop do
    if nodes is empty then return failure
    node <- Remove-Front(nodes)
    if Goal-Test[problem] applied to State[node] succeeds then return node
    else nodes <- Queuing-Fn(nodes, Expand(node, Operators[problem]))
  end

A state is a representation of a physical configuration. A node is a data structure constituting part of a search tree; it includes parent, children, depth, and path cost. States don't have parents, children, depth, or path cost. The Expand function creates new nodes, filling in the various fields and using the Operators (or SuccessorFn) of the problem to create the corresponding states.
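A rough Python rendering of General-Search, parameterized by the queuing function; the tuple used as a node (state, parent, action, path cost) and the SearchProblem interface are the assumptions introduced earlier.

def general_search(problem, queuing_fn):
    # Generic tree search: queuing_fn(queue, new_nodes) returns the updated queue.
    root = (problem.initial_state, None, None, 0)     # (state, parent, action, g)
    queue = [root]
    while queue:                                      # an empty queue means failure
        node = queue.pop(0)                           # Remove-Front
        state, _, _, g = node
        if problem.is_goal(state):                    # goal test when the node is removed
            return node
        children = [(s, node, a, g + problem.cost(state, a, s))
                    for a, s in problem.successors(state)]
        queue = queuing_fn(queue, children)           # the strategy decides where they go
    return None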
Search Strategies
A strategy is defined by picking the order of node expansion, i.e., how newly expanded nodes are inserted into the queue.
Strategies are evaluated along the following dimensions:
- completeness: does it always find a solution if one exists?
- time complexity: number of nodes generated/expanded
- space complexity: maximum number of nodes in memory
- optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of:
- b: maximum branching factor of the search tree
- d: depth of the least-cost solution
- m: maximum depth of the state space (may be infinite)
Recall: Searching the State Space
Search tree (figure). Note that some states are visited multiple times.
Search Nodes ≠ States (figure: the same 8-puzzle state can appear in several different nodes of the search tree)
If states are allowed to be revisited, the search tree may be infinite even when the state space is finite.
Data Structure of a Node
A node holds a STATE (e.g., an 8-puzzle configuration), a PARENT-NODE, its CHILDREN, and bookkeeping fields such as Action (e.g., Right), Depth (e.g., 5), Path-Cost (e.g., 5), and Expanded (yes/no).
Depth of a node N = length of the path from the root to N (depth of the root = 0).
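The same bookkeeping can be captured with a small data class; the exact field layout is an assumption, but the field names follow the slide.

from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class Node:
    state: Any                           # e.g., an 8-puzzle configuration
    parent: Optional["Node"] = None      # PARENT-NODE
    action: Optional[str] = None         # e.g., "Right"
    depth: int = 0                       # length of the path from the root (root = 0)
    path_cost: float = 0.0
    children: List["Node"] = field(default_factory=list)
    expanded: bool = False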
Node Expansion
The expansion of a node N of the search tree consists of:
- evaluating the successor function on STATE(N)
- generating a child of N for each state returned by the function
Basic Search Procedure
1. Start with the start node (the root of the search tree) and place it on the queue.
2. Remove the front node from the queue. If it is a goal node, we are done: stop. Otherwise expand the node: generate its children using the successor function (the other states that can be reached with one move).
3. Place the children on the queue according to the search strategy.
4. Go back to step 2.
Search Strategies
Search strategies differ in the order in which new successor nodes are added to the queue:
- Breadth-first: add nodes to the end of the queue
- Depth-first: add nodes to the front of the queue
- Uniform cost: sort the nodes on the queue by the cost of reaching each node from the start node
The sketch below shows these three insertion policies as queuing functions for the general search loop.
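Assuming the general_search sketch from earlier and its (state, parent, action, g) node tuples, the three strategies differ only in the queuing function. Sorting the whole queue is used here only for clarity; a heap-based priority queue would be the usual choice for uniform cost.

def bfs_queuing(queue, children):    # breadth-first: append at the end (FIFO)
    return queue + children

def dfs_queuing(queue, children):    # depth-first: insert at the front (LIFO)
    return children + queue

def ucs_queuing(queue, children):    # uniform cost: order by path cost g
    return sorted(queue + children, key=lambda node: node[3])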
Breadth-First Search (figure: the nodes of a binary tree are numbered 1-14 in the order BFS expands them, level by level, until the goal is reached)
Example (Romania): road map (figure)
Breadth-First Search (Romania example)
Always expand the shallowest unexpanded node; QueuingFn inserts successors at the end of the queue.
Expanding Arad adds Sibiu, Timisoara, and Zerind; subsequent expansions add the successors of each frontier node one level at a time: Oradea and Arad; then Fagaras, Rimnicu Vilcea, Oradea, and Arad; then Lugoj and Arad (figure sequence).
Example (figure): goal at depth d = 4, branching factor b = 2.
Number of nodes examined through level 3 (= d - 1): 1 + 2 + 2^2 + 2^3 = 1 + 2 + 4 + 8 = 15.
Average number of nodes examined at level 4: (1 + 2^4) / 2 (min = 1, max = 2^4).
Breadth-First Search
Space complexity: a full tree at depth d uses b^d memory nodes. If you know there is a goal at depth d, you are done; otherwise you have to store the nodes at depth d+1 as you generate them, so you might need b^(d+1) memory nodes.
Nodes examined (assume the tree has depth d with a single goal node at that depth):
- number of internal nodes examined before reaching depth d: 1 + b + ... + b^(d-1) = (b^d - 1)/(b - 1)
- average number of nodes examined at the fringe (at depth d): (1 + b^d)/2
For large b this is O(b^d): the fringe dominates.
Properties of Breadth-First Search
- Complete? Yes, if b is finite.
- Time complexity? 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
- Space complexity? O(b^d) (keeps every node in memory)
- Optimal? Yes if cost = 1 per step, but not optimal in general.
Note: the biggest problem with BFS is its space complexity.
Breadth-First Search: Time and Space Complexity
Assume a branching factor b = 10, 1,000 nodes generated per second, and 100 bytes per node (the original slide tabulates the resulting time and memory requirements by depth).
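The table can be recomputed from those assumptions; the snippet below is an illustrative calculation rather than a transcription of the slide's table.

def bfs_requirements(depth, b=10, nodes_per_second=1000, bytes_per_node=100):
    # Nodes, time and memory needed to examine a complete tree of the given depth.
    nodes = sum(b ** i for i in range(depth + 1))     # 1 + b + ... + b^d
    seconds = nodes / nodes_per_second
    megabytes = nodes * bytes_per_node / 1e6
    return nodes, seconds, megabytes

for d in (2, 4, 6, 8):
    n, t, mb = bfs_requirements(d)
    print(f"depth {d}: {n:,} nodes, {t:,.1f} s, {mb:,.1f} MB")
# Depth 6 already needs about 18 minutes and 111 MB; depth 8 about 31 hours and 11 GB.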
Depth-First Search (figure: the nodes of a binary tree are numbered 1-15 in depth-first expansion order until the goal is reached)
Depth-First Search
Always expand the deepest unexpanded node; QueuingFn inserts successors at the front of the queue. Expanding Arad adds Sibiu, Timisoara, and Zerind; the search then continues downward from one of these, adding Oradea and Arad (figure).
Depth-First Search
The search keeps going deeper, revisiting Arad and adding Sibiu, Timisoara, and Zerind again (figure). Note that DFS can perform infinite cyclic excursions; it needs a finite, non-cyclic search space, or repeated-state checking.
Example (figure): goal at depth d = 4, branching factor b = 2. Highlighted nodes are the ones that must be kept in memory.
Best case (the goal node is on the far left): we examine d + 1 = 5 nodes.
Worst case (the goal is the last node reached): we need all the nodes: 1 + 2 + 4 + 8 + 16 = 31 (the last term, 16, is b^d).
Depth-First Search
Space complexity (assume the tree has depth d with a single goal node at that depth): the most memory is needed at the first point we reach depth d. We must store b - 1 nodes at each depth (the not-yet-expanded siblings along the path), plus one additional node at depth d (the node that has not been expanded yet). Total space = d(b - 1) + 1.
Nodes examined (same assumption):
- best case (goal at the far left): d + 1 nodes
- worst case (goal at the far right): the entire tree, (b^(d+1) - 1)/(b - 1) nodes
- average case: roughly half the worst case; for large b this is O(b^d) (the fringe dominates)
Properties of Depth-First Search
- Complete? No: it fails in infinite-depth spaces and in spaces with loops; the algorithm must be modified to avoid repeated states along paths.
- Time complexity? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than BFS.
- Space complexity? O(bm), i.e., linear space.
- Optimal? No.
Iterative Deepening
Depth-limited search = depth-first search with depth limit l; nodes at depth l have no successors.

function Iterative-Deepening-Search(problem) returns a solution sequence
  inputs: problem
  for depth <- 0 to infinity do
    result <- Depth-Limited-Search(problem, depth)
    if result != cutoff then return result
  end
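A direct Python rendering under the SearchProblem interface assumed earlier; the recursive depth-limited search and the "cutoff" sentinel are implementation choices rather than anything specified on the slide.

CUTOFF = "cutoff"

def depth_limited_search(problem, limit, state=None):
    state = problem.initial_state if state is None else state
    if problem.is_goal(state):
        return [state]
    if limit == 0:
        return CUTOFF                               # depth limit reached
    cutoff_occurred = False
    for action, nxt in problem.successors(state):
        result = depth_limited_search(problem, limit - 1, nxt)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [state] + result                 # prepend this state to the solution path
    return CUTOFF if cutoff_occurred else None

def iterative_deepening_search(problem, max_depth=50):
    for depth in range(max_depth + 1):              # the slide iterates without bound
        result = depth_limited_search(problem, depth)
        if result != CUTOFF:
            return result                           # a solution path, or None for failure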
Iterative Deepening (Romania example, figure sequence)
With limit l = 0 only Arad is examined; with l = 1 Arad is expanded into Sibiu, Timisoara, and Zerind; with l = 2 each of these is expanded in turn, adding Oradea and Arad, then Fagaras, Rimnicu Vilcea, Oradea, and Arad, then Lugoj and Arad.
Iterative Deepening
Space complexity: if the shallowest solution is at depth d, then the depth-first search to that depth will succeed (so iterative deepening always returns the shallowest solution). Since each individual search is performed depth-first, the amount of memory required is the same as for depth-first search.
Nodes examined (assume the tree has depth d with a single goal node at that depth):
- nodes examined in the final (successful) iteration, the same as DFS: on average about half of the full tree to depth d, which is O(b^d)   (1)
- for each of the depths j = 1, 2, ..., d-1, the failing iteration must examine the entire tree to depth j, i.e., (b^(j+1) - 1)/(b - 1) nodes; the total nodes examined in failing searches is the sum of these terms over j   (2)
Properties of Iterative Deepening
- Complete? Yes.
- Time complexity? Adding (1) and (2) from before gives O(b^d).
- Space complexity? O(bd).
- Optimal? Yes, if cost = 1 per step. Can be modified to explore the state space in uniform-cost order.
Uniform-Cost Search
Always expand the least-cost unexpanded node; the queue is kept in order of increasing path cost.
- Expand Arad: successors Zerind (75), Timisoara (118), Sibiu (140). Queue: Zerind, Timisoara, Sibiu.
- Expand Zerind (75): Oradea (75+71 = 146), Arad (75+75 = 150). Queue: Timisoara, Sibiu, Oradea, Arad.
- Expand Timisoara (118): Lugoj (118+111 = 229), Arad (118+118 = 236). Queue: Sibiu, Oradea, Arad, Lugoj, Arad.
Uniform-Cost Search
For the rest of the example, assume repeated-state checking:
- if a newly generated state was previously expanded, discard the new state
- if multiple (unexpanded) instances of a state end up on the queue, keep only the instance with the least path cost from the start node and eliminate the others
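The following sketch implements uniform-cost search with exactly this repeated-state handling on top of the SearchProblem interface assumed earlier; heapq and the bookkeeping dictionaries are implementation choices.

import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()                                  # tiebreaker so the heap never compares states
    start = problem.initial_state
    frontier = [(0, next(tie), start, [start])]    # (g, tiebreak, state, path)
    best_g = {start: 0}                            # cheapest known cost to each queued state
    expanded = set()
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if state in expanded:
            continue                               # previously expanded: discard
        if problem.is_goal(state):                 # goal test when the node is removed
            return path, g
        expanded.add(state)
        for action, nxt in problem.successors(state):
            new_g = g + problem.cost(state, action, nxt)
            if nxt not in expanded and new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g                # keep only the cheapest queued instance
                heapq.heappush(frontier, (new_g, next(tie), nxt, path + [nxt]))
    return None, float("inf")

# On the Romania instance sketched earlier, uniform_cost_search(RomaniaProblem()) should
# return the path Arad - Sibiu - Rimnicu Vilcea - Pitesti - Bucharest with cost 418.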
- After repeated-state checking the queue is: Sibiu (140), Oradea (146), Lugoj (229).
- Expand Sibiu (140): Fagaras (140+99 = 239), Rimnicu Vilcea (140+80 = 220); Arad and Oradea reappear with higher costs and are discarded. Queue: Oradea, Rimnicu Vilcea, Lugoj, Fagaras.
- Expand Oradea (146): it leads only to repeated states. Queue: Rimnicu Vilcea, Lugoj, Fagaras.
- Expand Rimnicu Vilcea (220): Pitesti (317), Craiova (367). Queue: Lugoj, Fagaras, Pitesti, Craiova.
- Expand Lugoj (229): Mehadia (299). Queue: Fagaras, Mehadia, Pitesti, Craiova.
- Expand Fagaras (239): Bucharest (239+211 = 450). Queue: Mehadia, Pitesti, Craiova, Bucharest.
- Expand Mehadia (299): Dobreta (374). Queue: Pitesti, Craiova, Dobreta, Bucharest.
- Expand Pitesti (317): Bucharest (317+101 = 418), which replaces the queued Bucharest at 450; Craiova (455) is discarded in favor of the cheaper queued instance. Queue: Craiova, Dobreta, Bucharest.
- Expand Craiova (367): it leads only to repeated states with higher path costs than previous visits, so nothing is added. Queue: Dobreta, Bucharest.
- Expand Dobreta (374): it leads only to repeated states. Queue: Bucharest.
- Expand Bucharest (418): the goal test succeeds. (Its successors Urziceni (503) and Giurgiu (508) are generated, while Pitesti (519) and Fagaras (629) are repeated states and are discarded.)
Uniform-Cost Search
Solution path: Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest, total cost 418.
Compare this to Arad -> Sibiu -> Fagaras -> Bucharest, with total cost 450.
Properties of Uniform-Cost Search
- Complete? Yes, if b is finite (similar to breadth-first search).
- Time complexity? Proportional to the number of nodes with g(n) <= the cost of the optimal solution.
- Space complexity? Also the number of nodes with g(n) <= the cost of the optimal solution.
- Optimal? Yes, if the path cost never decreases along any path, i.e., if g(Successor(n)) >= g(n) for all nodes n.
What happens if we had operators with negative costs?