Solving problems by searching
Chapter 3
Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms
Which type of agent will be used and why?
A goal-based agent called a problem-solving agent.
Problem-solving agents
Problem-solving agents use atomic representations: states of the world are considered as wholes, with no internal structure. Agents that use structured representations are usually called planning agents. Problem solving begins with precise definitions of problems and their solutions. We then study general-purpose search algorithms that can be used to solve these problems: uninformed search algorithms are given no information about the problem other than its definition, while informed search algorithms can do quite well given some guidance on where to look for solutions.
An agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value. To do so, we have to be more specific about the properties of the environment:
The environment is observable, so the agent always knows the current state.
The environment is discrete, so at any given state there are only finitely many actions to choose from.
The environment is known, so the agent knows which states are reached by each action.
The environment is deterministic, so each action has exactly one outcome.
The process of looking for a sequence of actions that reaches the goal is called search.
A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out; this is called the execution phase. While executing, the agent ignores its percepts when choosing an action because it knows in advance what they will be; this is called an open-loop system.
Problem-solving agents
Example: Romania On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest Formulate goal: be in Bucharest Formulate problem: states: various cities actions: drive between cities Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
9
Example: Romania
Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. Problem formulation is the process of deciding what actions and states to consider, given a goal.
A problem can be defined formally by five components:
The initial state that the agent starts in, e.g., In(Arad).
A description of the possible actions available to the agent: given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a). We also use the term successor to refer to any state reachable from a given state by a single action. For example, RESULT(In(Arad), Go(Zerind)) = In(Zerind).
Together, the initial state, actions, and transition model implicitly define the state space of the problem.
Cont.
The goal test, which determines whether a given state is a goal state. Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states; for example, in chess, the goal is to reach a state called "checkmate."
A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. The step cost of taking action a in state s to reach state s' is denoted by c(s, a, s').
An optimal solution has the lowest path cost among all solutions. Note: we assume that step costs are nonnegative.
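The five components can be rendered as a small Python sketch. The `RouteProblem` class, its method names, and the abridged road map below are our own illustration (distances taken from the Romania example), not code from the text.

```python
# Partial Romania road map: city -> {neighbor: step cost in km}
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Bucharest": {},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
}

class RouteProblem:
    """The five formal components: initial state, actions, transition
    model, goal test, and step cost."""

    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, s):            # ACTIONS(s)
        return [f"Go({c})" for c in self.roads.get(s, {})]

    def result(self, s, a):          # transition model RESULT(s, a)
        return a[3:-1]               # "Go(Sibiu)" -> "Sibiu"

    def goal_test(self, s):
        return s == self.goal

    def step_cost(self, s, a, s2):   # c(s, a, s')
        return self.roads[s][s2]
```

The initial state, `actions`, and `result` together implicitly define the state space, as noted above.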
Problem types
Deterministic, fully observable: single-state problem. The agent knows exactly which state it will be in; the solution is a sequence.
Non-observable: sensorless problem (conformant problem). The agent may have no idea where it is; the solution is a sequence.
Nondeterministic and/or partially observable: contingency problem. Percepts provide new information about the current state; often interleave search and execution.
Unknown state space: exploration problem.
EXAMPLE PROBLEMS
Example Problems
Toy problems (but sometimes useful): illustrate or exercise various problem-solving methods; have a concise, exact description; can be used to compare performance. Examples: 8-puzzle, 8-queens problem, cryptarithmetic, vacuum world, missionaries and cannibals, simple route finding.
Real-world problems: more difficult; no single, agreed-upon description. Examples: route finding, touring and traveling salesperson problems, VLSI layout, robot navigation, assembly sequencing.
Toy Problems: The vacuum world
The world has only two locations. Each location may or may not contain dirt, and the agent may be in one location or the other, giving 8 possible world states. Three possible actions: Left, Right, Suck. Goal: clean up all the dirt.
Toy Problems: The vacuum world
States: one of the 8 states given earlier
Actions: move left, move right, suck
Goal test: no dirt left in any square
Path cost: each action costs one
Example: vacuum world Single-state, start in #5. Solution?
Example: vacuum world Single-state, start in #5. Solution? [Right, Suck] Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution?
Example: vacuum world Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution? [Right,Suck,Left,Suck] Contingency Nondeterministic: Suck may dirty a clean carpet Partially observable: location, dirt at current location. Percept: [L, Clean], i.e., start in #5 or #7 Solution?
Example: vacuum world Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution? [Right,Suck,Left,Suck] Contingency Nondeterministic: Suck may dirty a clean carpet Partially observable: location, dirt at current location. Percept: [L, Clean], i.e., start in #5 or #7 Solution? [Right, if dirt then Suck]
Single-state problem formulation
A problem is defined by four items:
initial state, e.g., "at Arad"
actions or successor function S(x) = set of action–state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
goal test: can be explicit, e.g., x = "at Bucharest", or implicit, e.g., Checkmate(x)
path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state
Selecting a state space
The real world is complex, so the state space must be abstracted for problem solving.
(Abstract) action = complex combination of real actions, e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
(Abstract) solution = set of real paths that are solutions in the real world.
Each abstract action should be "easier" than the original problem.
Vacuum world state space graph
states? actions? goal test? path cost?
Vacuum world state space graph
states? dirt locations and robot (agent) location; number of possible states: n·2^n for n locations
actions? Left, Right, Suck
goal test? no dirt at any location
path cost? 1 per action
Example: The 8-puzzle states? actions? goal test? path cost?
Example: The 8-puzzle
states? locations of tiles; number of reachable states: 9!/2
actions? move blank left, right, up, down
goal test? state matches the given goal configuration
path cost? 1 per move
[Note: finding an optimal solution for the n-puzzle family is NP-hard]
Example: robotic assembly
states? real-valued coordinates of robot joint angles and of the parts of the object to be assembled
actions? continuous motions of robot joints
goal test? complete assembly
path cost? time to execute
States: any arrangement of 0 to 8 queens on the board is a state.
Initial state: no queens on the board.
Actions: add a queen to any empty square.
Transition model: returns the board with a queen added to the specified square.
Goal test: 8 queens are on the board, none attacked.
In this formulation, we have 64 · 63 ⋯ 57 ≈ 1.8 × 10^14 possible sequences to investigate. A better formulation would prohibit placing a queen in any square that is already attacked:
States: all possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another.
Actions: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.
There are two main kinds of formulation.
An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around.
Real-world problems: airline travel planning
States: each state includes a location (e.g., an airport) and the current time. Because the cost of a flight segment may depend on previous segments, their fare bases, and their status as domestic or international, the state must also record extra information about these "historical" aspects.
Initial state: specified by the user's query.
Actions: take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed.
Transition model: the state resulting from taking a flight has the flight's destination as the current location and the flight's arrival time as the current time.
Goal test: are we at the final destination specified by the user?
Path cost: depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
Tree Search A general TREE-SEARCH algorithm considers all possible paths to find a solution GRAPH-SEARCH algorithm avoids consideration of redundant paths.
Tree Search
The possible action sequences starting at the initial state form a search tree:
the initial state is at the root;
the branches are actions;
the nodes correspond to states in the state space of the problem.
A leaf node is a node with no children in the tree.
The frontier (open list) is the set of all leaf nodes available for expansion at any given point.
Tree search algorithms
Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states).
Tree search example
Implementation: general tree search
Implementation: states vs. nodes
A state is a (representation of) a physical configuration. A node is a data structure constituting part of a search tree; it includes a state, a parent node, an action, a path cost g(x), and a depth. The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
A state corresponds to a configuration of the world, while a node is a bookkeeping data structure used to represent the search tree. Thus, nodes are on particular paths, as defined by PARENT pointers, whereas states are not. Furthermore, two different nodes can contain the same world state if that state is generated via two different search paths.
Infrastructure for search algorithms
Search algorithms require a data structure to keep track of the search tree that is being constructed. For each node n of the tree, we have a structure that contains four components:
n.STATE: the state in the state space to which the node corresponds;
n.PARENT: the node in the search tree that generated this node;
n.ACTION: the action that was applied to the parent to generate the node;
n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers.
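The four-component node structure maps directly onto a small class. The sketch below is our own rendering; the `solution` helper (an invented name) follows PARENT pointers back to the root to recover the action sequence.

```python
class Node:
    """Search-tree node with the four components named on the slide."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, i.e., g(n)

    def solution(self):
        """Follow PARENT pointers to recover the action sequence."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```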
Search strategies
A search strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions:
completeness: does it always find a solution if one exists?
time complexity: how long does it take to find a solution? (number of nodes generated)
space complexity: how much memory is needed to perform the search? (maximum number of nodes in memory)
optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of:
b: maximum branching factor of the search tree (maximum number of successors of any node)
d: depth of the least-cost solution (the number of steps along the path from the root)
m: maximum depth of the state space (may be ∞)
To assess the effectiveness of a search algorithm, we can consider the search cost, which typically depends on the time complexity but can also include a term for memory usage, or the total cost, which combines the search cost and the path cost of the solution found. E.g., for getting from Arad to Bucharest, the search cost is the amount of time taken by the search and the solution cost is the total length of the path in kilometers. The total cost then mixes milliseconds and kilometers; to combine them, the kilometers must be converted into a time cost, e.g., using an estimate of the car's average speed.
Uninformed search strategies
Uninformed search is also called blind search. These strategies have no additional information about states beyond that provided in the problem definition: all they can do is generate successors and distinguish a goal state from a non-goal state.
Uninformed search strategies
Uninformed search strategies use only the information available in the problem definition:
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
datatype node components: STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST
Breadth-first search
Expand the shallowest unexpanded node. Appropriate when all edges have the same cost (weight). The goal test is applied to each node when it is generated, rather than when it is selected for expansion, and any new path to a state already in the frontier or explored set is discarded.
Implementation: the frontier is a FIFO queue, i.e., new successors go at the end.
BFS trace (closed list | frontier):
∅ | A
A | B, C
A, B | D, E, C
A, B, D | E, C
A, B, D, E | C
A, B, D, E, C | F, G
A, B, D, E, C, F | G
1. Put the starting node (the root node) in the queue.
2. Pull a node from the beginning of the queue and examine it. If the searched element is found in this node, quit the search and return a result. Otherwise push all the (so-far-unexamined) successors (the direct child nodes) of this node onto the end of the queue, if there are any.
3. If the queue is empty, every node on the graph has been examined: quit the search and return "not found".
4. Repeat from step 2.
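The four steps above can be sketched as a runnable function; `bfs`, the parent map used to reconstruct the path, and the example graph are our own illustration, not code from the text.

```python
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([start])           # step 1: put the root in the queue
    parents = {start: None}             # remembers how each node was reached
    while frontier:
        node = frontier.popleft()       # step 2: pull from the front, examine
        if node == goal:                # found: walk parents back to the root
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for child in graph.get(node, []):
            if child not in parents:    # only so-far-unexamined successors
                parents[child] = node
                frontier.append(child)  # push onto the end of the queue
    return None                         # step 3: queue empty -> "not found"

# Example graph matching the trace above
GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
```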
Properties of breadth-first search
Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time).
Worst-case space complexity: O(|V| + |E|) = O(b^d)
Lessons learned from BFS
Memory requirements are a bigger problem than execution time, but time is still a major factor: exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.
Uniform-cost search (UCS)
UCS expands the node n with the lowest path cost g(n). It does not care about the number of steps a path has, only about the path's total cost. UCS may suffer from an infinite loop if it ever expands a node with a zero-cost action leading back to the same state (e.g., a NoOp action). This algorithm is also known as Dijkstra's single-source shortest-path algorithm. The search begins at the root node and continues by visiting the next node with the least total cost from the root; nodes are visited in this manner until a goal state is reached.
Uniform-cost search modifies the breadth-first strategy.
UCS expands the lowest-cost node on the fringe (as measured by the path cost g(n)), rather than the shallowest node.
In UCS the goal test is applied to a node when it is selected for expansion, rather than when it is first generated, because the first goal node that is generated may be on a suboptimal path. A second test is added in case a better path is found to a node currently on the frontier.
Uniform-cost search
Expand the least-cost unexpanded node.
Implementation: fringe = queue ordered by path cost (a priority queue).
Equivalent to breadth-first search if all step costs are equal.
Optimal? Yes: nodes are expanded in increasing order of g(n)
Time? O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is a small positive lower bound on step costs
Space? O(b^(1+⌊C*/ε⌋)), which can be much greater than b^d
Advantage: guaranteed to find the least-cost solution.
Disadvantages: exponential storage required; the open list must be kept sorted (as a priority queue); the cost of a node on the open list must be updated if a lower-cost path to it is found.
UCS trace (closed list | frontier):
∅ | A
A | D, B
A, D | B, E, F
A, D, B | E, F, C
A, D, B, E | F, C
A, D, B, E, F | C, G
A, D, B, E, F, C | G
G is selected: goal test succeeds.
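A priority-queue sketch of UCS, assuming the graph is an adjacency map of step costs; the function and the weighted example graph are our own illustration. Note the goal test on selection (not generation) and the handling of cheaper paths to frontier states, as described above.

```python
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]     # min-heap of (g(n), state, path)
    best_g = {start: 0}                  # cheapest known cost to each state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                     # stale entry: a cheaper path exists
        if state == goal:                # goal test when selected for expansion
            return g, path
        for succ, cost in graph.get(state, {}).items():
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2        # record the improved path cost
                heapq.heappush(frontier, (g2, succ, path + [succ]))
    return None

# Invented weighted graph for illustration
GRAPH = {"A": {"B": 4, "D": 1}, "B": {"C": 6}, "C": {},
         "D": {"E": 2, "F": 3}, "E": {}, "F": {"G": 5}, "G": {}}
```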
Depth-first search Expand deepest unexpanded node Implementation:
frontier = LIFO queue, i.e., put successors at front
Algorithmic steps
Step 1: Push the root node onto the stack.
Step 2: Loop until the stack is empty.
Step 3: Peek at the node on top of the stack.
Step 4: If the node has unvisited child nodes, get an unvisited child node, mark it as traversed, and push it onto the stack.
Step 5: If the node does not have any unvisited child nodes, pop the node from the stack.
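The five steps map directly onto a Python list used as a stack; `dfs` and the example graph are illustrative names, not from the text.

```python
def dfs(graph, start, goal):
    stack = [start]                      # step 1: push the root
    visited = {start}
    while stack:                         # step 2: loop until stack is empty
        node = stack[-1]                 # step 3: peek at the top of the stack
        if node == goal:
            return stack[:]              # the stack itself is the current path
        unvisited = [c for c in graph.get(node, []) if c not in visited]
        if unvisited:                    # step 4: push an unvisited child
            child = unvisited[0]
            visited.add(child)
            stack.append(child)
        else:                            # step 5: dead end, pop
            stack.pop()
    return None

GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
```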
Properties of depth-first search
Complete? No: fails in infinite-depth spaces and spaces with loops. Modified to avoid repeated states along the path, it is complete in finite spaces.
Time? O(b^m): terrible if m is much larger than d, but if solutions are dense, may be much faster than breadth-first.
Space? O(bm), i.e., linear space! (only a single path from the root to a leaf node)
Optimal? No
DFS drawbacks: it can get stuck going down the wrong path.
Depth-first search will either get stuck in an infinite loop and never return a solution, or it may eventually find a solution path that is longer than the optimal solution. That means depth-first search is neither complete nor optimal.
Depth-limited search avoids the pitfalls of depth-first search by imposing a cutoff on the maximum depth of a path. This cutoff can be implemented by using operators that keep track of the depth, e.g.: "If you are in city A and have travelled a path of fewer than 19 steps, then generate a new state in city B with a path length that is one greater." With this new operator set, depth-limited search is complete but not optimal. Depth-limited search = depth-first search with a predetermined depth limit L; it takes O(b^L) time and O(bL) space. Notice that depth-limited search can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit.
Iterative deepening search
The hard part about depth-limited search is picking a good limit. Iterative deepening is a strategy that finds the best depth limit by trying all possible depth limits: d = 0, then d = 1, then d = 2, and so on. It combines the benefits of DFS and BFS: it is optimal and complete. The drawback is that some states are expanded multiple times.
Iterative deepening search

IDDFS(root, goal) {
    depth = 0
    loop {
        result = DLS(root, goal, depth)
        if (result != cutoff) return result
        depth = depth + 1
    }
}

DLS(node, goal, depth) {
    if (node == goal) return node
    if (depth == 0) return cutoff
    for each child in expand(node) {
        result = DLS(child, goal, depth - 1)
        if (result != cutoff) return result
    }
    return cutoff
}
Iterative deepening search l = 0
Iterative deepening search l = 1
Iterative deepening search l = 2
Iterative deepening search l = 3
Iterative deepening search
Number of nodes generated in a depth-limited search to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d-1)·b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
For b = 10, d = 5:
N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 − 111,111)/111,111 ≈ 11%
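The two node counts can be checked mechanically; this snippet just evaluates the sums above for b = 10, d = 5.

```python
b, d = 10, 5

# N_DLS = b^0 + b^1 + ... + b^d
n_dls = sum(b**i for i in range(d + 1))

# N_IDS = (d+1)b^0 + d*b^1 + ... + 1*b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

overhead = (n_ids - n_dls) / n_dls
```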
Properties of iterative deepening search
Complete? Yes
Time? (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1
Summary of algorithms
Bidirectional search: search both forward from the initial state and backward from the goal, and stop when the two searches meet in the middle. The solution will be found in O(2b^(d/2)) = O(b^(d/2)) time.
There are three ways to deal with repeated states, in increasing order of effectiveness and computational overhead:
Do not return to the state you just came from: have the expand function (or the operator set) refuse to generate any successor that is the same state as the node's parent.
Do not create paths with cycles in them: have the expand function (or the operator set) refuse to generate any successor of a node that is the same as any of the node's ancestors.
Do not generate any state that was ever generated before. This requires every state that is generated to be kept in memory, resulting in a space complexity of potentially O(b^d). It is better to think of this as O(s), where s is the number of states in the entire state space.
To implement the third option, make use of a hash table that stores all the nodes that are generated; this makes checking for repeated states reasonably efficient. The trade-off between the cost of storing and checking and the cost of extra search depends on the problem: the "loopier" the state space, the more likely it is that checking will pay off.
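The third strategy (never regenerate a previously generated state) can be sketched with a Python set, which is hash-based; the breadth-first variant and the deliberately "loopy" graph below are our own illustration.

```python
from collections import deque

def graph_search_bfs(graph, start, goal):
    frontier = deque([[start]])          # queue of paths, not just states
    explored = {start}                   # hash-based set of generated states
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for child in graph.get(path[-1], []):
            if child not in explored:    # repeated-state check
                explored.add(child)
                frontier.append(path + [child])
    return None

# A loopy graph: without the explored set, A <-> B would be revisited forever.
GRAPH = {"A": ["B"], "B": ["A", "C"], "C": ["B", "G"], "G": []}
```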
Repeated states Failure to detect repeated states can turn a linear problem into an exponential one!
Graph search
Summary Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored Variety of uninformed search strategies Iterative deepening search uses only linear space and not much more time than other uninformed algorithms