SOLVING PROBLEMS BY SEARCHING
by Gülce HANER
Outline
- Problem-solving agents
- Example problems (toy problems and real-world problems)
- Searching for solutions
- Uninformed search strategies
- Avoiding repeated states
- Searching with partial information
3.1 PROBLEM-SOLVING AGENTS
A problem-solving agent is one kind of goal-based agent. (A goal-based agent considers future actions and the desirability of their outcomes.) It decides what to do by finding sequences of actions that lead to desirable states. Intelligent agents are supposed to maximize their performance measure; achieving this is sometimes simplified if the agent can adopt a goal and aim to satisfy it. Why and how might an agent do this?
Example: Suppose an agent is in the city of Arad, Romania, enjoying a touring holiday. Its performance measure contains many factors: improve its suntan, improve its Romanian, take in the sights, and so on. The decision problem is a complex one, with many tradeoffs.
Example (continued): Now suppose the agent has a nonrefundable ticket to fly out of Bucharest the following day. Then it makes sense for the agent to adopt the goal of getting to Bucharest. By adopting this goal, the agent rejects the actions that don't reach Bucharest on time, and its decision problem is greatly simplified.
Road map of Romania
Goal formulation
Goals help organize behavior by limiting the objectives the agent is trying to achieve. The agent's task is to find out which sequence of actions will get it to a goal state. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
Problem formulation
Problem formulation is the process of deciding what actions and states to consider, given a goal. In the example:
- actions: driving from one major town to another
- states: being in a particular town
"Formulate, search, execute"
The process of looking for a sequence of actions that leads to a goal state is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out; this is called the execution phase. Thus we have a simple "formulate, search, execute" design for the agent.
Assumptions about the environment
The agent design in Figure 3.1 assumes the environment is:
- Static: the agent pays no attention to any changes that might be occurring in the environment.
- Observable
- Discrete: the alternative courses of action can be enumerated.
- Deterministic: no unexpected events.
Together these assumptions mean we are dealing with the easiest kinds of environments.
Well-defined problems and solutions
Goal formulation is the first step in problem solving. Next comes problem formulation, which is defined by four components:
- Initial state
- Successor function (or operator)
- Goal test
- Path cost
Components of a problem formulation
1. Initial state: the state the agent starts in, e.g., In(Arad).
2. Successor function: given a particular state x, SUCCESSOR-FN(x) returns a set of <action, successor> ordered pairs. For example, from the state In(Arad) the successor function returns:
{<Go(Sibiu), In(Sibiu)>, <Go(Zerind), In(Zerind)>, <Go(Timisoara), In(Timisoara)>}
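The successor function above can be sketched in code. This is an illustrative fragment, not the book's implementation: the `ROADS` dictionary covers only a few cities from the Romania map, and the string encoding of states and actions is our own choice.

```python
# A fragment of the Romania road map (adjacency only; names are ours).
ROADS = {
    "Arad": ["Sibiu", "Zerind", "Timisoara"],
    "Zerind": ["Arad", "Oradea"],
    "Timisoara": ["Arad", "Lugoj"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
}

def successor_fn(state):
    """Return the set of <action, successor> ordered pairs for a state."""
    city = state[len("In("):-1]          # "In(Arad)" -> "Arad"
    return {(f"Go({c})", f"In({c})") for c in ROADS.get(city, [])}

assert successor_fn("In(Arad)") == {
    ("Go(Sibiu)", "In(Sibiu)"),
    ("Go(Zerind)", "In(Zerind)"),
    ("Go(Timisoara)", "In(Timisoara)"),
}
```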
Components of a problem formulation (continued)
3. Goal test: determines whether a given state x is a goal state, e.g., "x = In(Bucharest)". Goal states can be specified either explicitly ("In(Bucharest)") or by an abstract property ("checkmate" in chess).
4. Path cost: a function that assigns a numeric cost to each path; the sum of the step costs c(x, a, y) > 0.
State space and paths
Together, the initial state and successor function implicitly define the state space of the problem: the set of all states reachable from the initial state. The state space forms a graph in which the nodes are states and the arcs between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions.
State space graph
Solution quality and optimal solutions
A solution to a problem is a path from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
Formulating problems
We previously proposed a formulation of the problem of getting to Bucharest in terms of the initial state, successor function, goal test, and path cost. This formulation seems reasonable, yet it omits a great many aspects of the real world. A real-world state includes far more detail than the simple state description In(Bucharest): traveling companions, the condition of the road, what is on the radio, the weather, and so on.
Abstraction
Such considerations are left out of our state descriptions because they are irrelevant to the problem of finding a route to Bucharest. The process of removing detail from a representation is called abstraction. Abstraction applies both to state descriptions and to actions.
Choosing a good abstraction
An abstraction is valid if we can expand any abstract solution into a solution in the more detailed world. An abstraction is useful if carrying out each of the actions in the solution is easier than solving the original problem. A good abstraction therefore:
- removes as much detail as possible,
- while retaining validity, and
- ensuring that the abstract actions are easy to carry out.
3.2 EXAMPLE PROBLEMS
Toy problems are intended to illustrate or exercise various problem-solving methods. They have a concise, exact description and can be used to compare the performance of algorithms. Real-world problems are those whose solutions people actually care about; they tend not to have a single agreed-upon description.
Some toy problems: the vacuum world
- States: the agent's location and which squares contain dirt. With two locations, each either dirty or clean, there are 2 x 2^2 = 8 possible world states.
- Initial state: any state.
- Successor function: the legal states that result from the three actions (Left, Right, and Suck).
- Goal test: checks whether all the squares are clean.
- Path cost: the number of steps in the path (each step costs 1).
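The two-square vacuum world is small enough to model directly. A minimal sketch, with our own state representation (a pair of the agent's location and the set of dirty squares):

```python
# A state is (location, frozenset_of_dirty_squares); encoding is ours.
ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    """Deterministic outcome of an action in the two-square world."""
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})      # current square becomes clean
    raise ValueError(action)

def successors(state):
    return [(a, result(state, a)) for a in ACTIONS]

def goal_test(state):
    return not state[1]                 # goal: no dirty squares left

# Agent in A, both squares dirty; [Suck, Right, Suck] reaches the goal.
start = ("A", frozenset({"A", "B"}))
s = result(result(result(start, "Suck"), "Right"), "Suck")
assert goal_test(s)
```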
State space for the vacuum world
The 8-puzzle
The 8-puzzle consists of a 3x3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach a specified goal state.
Formulation of the 8-puzzle
- States: the location of each of the eight tiles and the blank.
- Initial state: any state.
- Successor function: the states resulting from trying the four actions (blank moves Left, Right, Up, or Down).
- Path cost: the number of steps in the path.
The 8-puzzle belongs to the family of sliding-block puzzles, a general class known to be NP-complete. The 8-puzzle itself has 9!/2 = 181,440 reachable states and is easily solved.
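The successor function for the 8-puzzle can be written as the set of legal blank moves. A sketch under our own representation (a state is a tuple of nine entries read row by row, with 0 standing for the blank):

```python
# Successor function sketch for the 8-puzzle; representation is ours.
def successors(state):
    i = state.index(0)                  # position of the blank
    r, c = divmod(i, 3)
    moves = []
    for action, (dr, dc) in [("Up", (-1, 0)), ("Down", (1, 0)),
                             ("Left", (0, -1)), ("Right", (0, 1))]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:  # blank stays on the board
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]      # slide the tile into the blank
            moves.append((action, tuple(s)))
    return moves

# A corner blank has two legal moves; a center blank has four.
assert len(successors((0, 1, 2, 3, 4, 5, 6, 7, 8))) == 2
assert len(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))) == 4
```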
The 8-queens problem
Place eight queens on a chessboard such that no queen attacks any other. There are two main kinds of formulation:
1. Complete-state formulation: starts with all eight queens on the board and moves them around.
2. Incremental formulation:
- States: any arrangement of 0 to 8 queens on the board.
- Initial state: no queens on the board.
- Successor function: add a queen to any empty square.
- Goal test: eight queens are on the board, none attacked.
(This leaves 64 x 63 x ... x 57, roughly 1.8 x 10^14, possible sequences to investigate.)
A better formulation prohibits placing a queen on any square that is already attacked:
- States: arrangements of n queens (0 <= n <= 8), one per column in the leftmost n columns, with no queen attacking another.
- Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.
(This reduces the state space from roughly 1.8 x 10^14 to just 2,057 states.)
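The improved incremental formulation is compact enough to sketch. This is our own encoding, with a state as a tuple of queen rows, one entry per filled leftmost column; enumerating all complete placements recovers the well-known 92 solutions of the 8-queens problem.

```python
# Improved incremental 8-queens formulation; encoding is ours.
def attacks(rows, row):
    """Would a queen at (len(rows), row) be attacked by any placed queen?"""
    col = len(rows)
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(rows))

def successors(state):
    """Add a non-attacked queen to the leftmost empty column."""
    return [state + (row,) for row in range(8) if not attacks(state, row)]

def goal_test(state):
    return len(state) == 8

def count(state=()):
    """Count complete placements reachable from a state."""
    if goal_test(state):
        return 1
    return sum(count(s) for s in successors(state))

assert count() == 92        # the classic number of 8-queens solutions
```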
Real-world problems
Some real-world problems:
- Route-finding problems
- Touring problems
- Traveling salesperson problem
- VLSI layout
- Robot navigation
- Automatic assembly sequencing
- Internet searching
Route-finding problems
Route finding is used in a variety of applications: routing in computer networks, military operations planning, airline travel-planning systems, and so on. These problems are typically complex to specify.
A simplified example of an airline travel problem:
- States: a location (e.g., an airport) and the current time.
- Initial state: specified by the problem.
- Successor function: the states resulting from taking any scheduled flight (in any seat class, from the current location), leaving later than the current time.
- Goal test: are we at the destination by some prespecified time?
- Path cost: depends on monetary cost, waiting time, flight time, seat quality, and so on.
There are many additional complications to handle the byzantine fare structures that airlines impose, and contingency plans should also be included.
Touring problems
Touring problems are closely related to route-finding problems. Example: "Visit every city at least once, starting and ending in Bucharest." There are important differences:
- States: the current location together with the set of cities the agent has visited, e.g., In(Vaslui); visited {Bucharest, Urziceni, Vaslui}.
- Initial state: In(Bucharest); visited {Bucharest}.
- Goal test: is the agent in Bucharest, having visited all the cities?
Traveling salesperson problem (TSP)
The TSP is a touring problem in which each city must be visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-hard.
3.3 SEARCHING FOR SOLUTIONS
Having formulated some problems, we now need to solve them. This is done by a search through the state space. This section covers search techniques that use an explicit search tree.
Problem: find a route from Arad to Bucharest.
- Expanding: applying the successor function to the current state, thereby generating a new set of states.
- The choice of which state to expand is determined by the search strategy.
- When a goal state is found, the solution is the sequence of actions from the goal node back to the root.
- Frontier: a queue that stores the set of nodes that have been generated but not yet expanded.
(The figure shows partial search trees for this problem.)
Representation of nodes
A node is a bookkeeping data structure used to represent the search tree. It has five components:
- State: the state in the state space to which the node corresponds (the configuration of the world).
- Parent node: the node in the search tree that generated this node.
- Action: the action that was applied to the parent to generate the node.
- Path cost: the cost of the path from the initial state to the node.
- Depth: the number of steps along the path from the root.
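The five-component node can be sketched as a small data class. The helper names (`child_node`, `solution`) are ours; `solution` recovers the action sequence by following parent pointers back to the root, as described above.

```python
# Sketch of the five-component search-tree node; helper names are ours.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # configuration of the world
    parent: Optional["Node"] = None  # node that generated this one
    action: Any = None               # action applied to the parent
    path_cost: float = 0.0           # cost from the initial state
    depth: int = 0                   # number of steps from the root

def child_node(parent, action, state, step_cost=1):
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)

def solution(node):
    """Follow parent pointers back to the root to recover the actions."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

root = Node("Arad")
child = child_node(root, "Go(Sibiu)", "Sibiu", 140)
grand = child_node(child, "Go(Fagaras)", "Fagaras", 99)
assert grand.path_cost == 239 and grand.depth == 2
assert solution(grand) == ["Go(Sibiu)", "Go(Fagaras)"]
```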
Measuring problem-solving performance
An algorithm's performance is evaluated in four ways:
- Completeness: is the algorithm guaranteed to find a solution when there is one?
- Optimality: does the strategy find the optimal solution?
- Time complexity: how long does it take to find a solution?
- Space complexity: how much memory is needed to perform the search?
Time and space complexity are always considered with respect to some measure of problem difficulty. In AI, complexity is expressed in terms of three quantities:
- b, the branching factor: the maximum number of successors of any node;
- d, the depth of the shallowest goal node;
- m, the maximum length of any path in the state space.
Time is often measured by the number of nodes generated during the search, and space by the maximum number of nodes stored in memory.
Assessing the effectiveness of an algorithm
The search cost typically depends on the time complexity, but can also include a term for memory usage. The total cost combines the search cost and the path cost of the solution found.
3.4 UNINFORMED SEARCH STRATEGIES
Uninformed (blind) search strategies use no additional information about states beyond that provided in the problem definition. All they can do is generate successors and distinguish a goal state from a non-goal state. Strategies that know whether one non-goal state is "more promising" than another are called informed (heuristic) search strategies.
1. Breadth-first search
The root node is expanded first, then all the successors of the root node, then their successors, and so on. Breadth-first search can be implemented by calling TREE-SEARCH with an initially empty frontier that is a first-in-first-out (FIFO) queue. The FIFO queue puts all newly generated successors at the end of the queue, which means that shallow nodes are expanded before deeper nodes.
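The FIFO-queue idea can be sketched directly. This is not the book's TREE-SEARCH pseudocode: we store whole paths in the queue for simplicity, and the `ROADS` fragment of the Romania map is our own illustrative data.

```python
# Breadth-first tree search sketch: the frontier is a FIFO queue.
from collections import deque

ROADS = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
}

def breadth_first_search(start, goal):
    frontier = deque([[start]])        # FIFO queue of paths
    while frontier:
        path = frontier.popleft()      # shallowest node first
        if path[-1] == goal:
            return path
        for city in ROADS.get(path[-1], []):
            frontier.append(path + [city])
    return None

# BFS finds the shallowest goal: three steps via Sibiu and Fagaras.
assert breadth_first_search("Arad", "Bucharest") == \
    ["Arad", "Sibiu", "Fagaras", "Bucharest"]
```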
Breadth-first search (continued)
- Complete, provided the branching factor b is finite.
- The shallowest goal node is not necessarily the optimal one; breadth-first search is optimal if all actions have the same cost.
- Time complexity: suppose every state has b successors and the solution is at depth d. The total number of nodes generated in the worst case is b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1)).
- Space complexity: every node that is generated must remain in memory, so the space complexity is the same as the time complexity, O(b^(d+1)).
In general, exponential-complexity search problems cannot be solved by uninformed search methods for any but the smallest instances.
2. Uniform-cost search
Uniform-cost search expands the node n with the lowest path cost; it is guided by path costs rather than depths. Let C* be the cost of the optimal solution, and assume every action costs at least e. The worst-case time and space complexity is O(b^(1 + floor(C*/e))), which can be much larger than O(b^d).
Uniform-cost search (continued)
Completeness is guaranteed if every step cost is at least e (a small positive constant). Otherwise (if some step cost is 0) the search can get stuck in an infinite loop. The same condition is also sufficient to ensure optimality.
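Uniform-cost search differs from breadth-first search only in its frontier: a priority queue ordered by path cost g(n). A sketch on a fragment of the Romania map (the `COST` table uses the standard road distances; the explored set, which makes this a graph-search variant, is our addition so the sketch stays small):

```python
# Uniform-cost search sketch: the frontier is a priority queue on g(n).
import heapq

COST = {
    ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Fagaras", "Bucharest"): 211,
    ("Rimnicu Vilcea", "Pitesti"): 97, ("Pitesti", "Bucharest"): 101,
}
ROADS = {}
for (a, b), dist in COST.items():
    ROADS.setdefault(a, []).append((b, dist))

def uniform_cost_search(start, goal):
    frontier = [(0, [start])]               # heap of (g, path)
    explored = set()
    while frontier:
        g, path = heapq.heappop(frontier)   # cheapest path first
        state = path[-1]
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, step in ROADS.get(state, []):
            heapq.heappush(frontier, (g + step, path + [nxt]))
    return None

# The cheapest route goes through Rimnicu Vilcea and Pitesti (418 km),
# not through Fagaras (450 km), even though the latter has fewer steps.
g, path = uniform_cost_search("Arad", "Bucharest")
assert g == 418
assert path == ["Arad", "Sibiu", "Rimnicu Vilcea", "Pitesti", "Bucharest"]
```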
3. Depth-first search
The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors, then "backs up" to the next shallowest node that still has unexplored successors. Depth-first search always expands the deepest node in the current frontier, and can be implemented by calling TREE-SEARCH with an initially empty frontier that is a last-in-first-out (LIFO) queue, i.e., a stack.
Depth-first search (continued)
- Space complexity: requires storage of only bm + 1 nodes, where b is the branching factor and m the maximum depth. At depth d = 12 this would require 118 kilobytes instead of 10 petabytes, a factor of 10 billion times less space.
- Not always optimal (e.g., with goal nodes J and C in the example tree).
- Not complete: the search would never terminate if the first-chosen subtree had unbounded depth but contained no solution.
- Time complexity: O(b^m), and m can be much larger than d.
A drawback: depth-first search can make a wrong choice and get stuck going down a very long (or even infinite) path when a different choice would lead to a solution near the root.
Backtracking search is a variant of depth-first search in which only one successor is generated at a time; each partially expanded node remembers which successor to generate next. Its space complexity is only O(m).
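Swapping the FIFO queue of breadth-first search for a LIFO stack gives depth-first search. A sketch on a directed, acyclic fragment of the Romania map (the map is pruned by us so the tree search terminates; on a map with reversible roads, this version could loop forever, which is exactly the incompleteness noted above):

```python
# Depth-first tree search sketch: same skeleton as breadth-first
# search, but the frontier is a LIFO stack.
ROADS = {  # directed, acyclic fragment (ours) so the search terminates
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Bucharest"],
    "Rimnicu Vilcea": ["Pitesti"],
    "Pitesti": ["Bucharest"],
}

def depth_first_search(start, goal):
    frontier = [[start]]                 # LIFO stack of paths
    while frontier:
        path = frontier.pop()            # deepest node first
        if path[-1] == goal:
            return path
        for city in ROADS.get(path[-1], []):
            frontier.append(path + [city])
    return None

# DFS dives down one branch first; here it finds the four-step route
# via Rimnicu Vilcea, not the shallower one via Fagaras.
assert depth_first_search("Arad", "Bucharest") == \
    ["Arad", "Sibiu", "Rimnicu Vilcea", "Pitesti", "Bucharest"]
```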
4. Depth-limited search
Depth-limited search imposes a predetermined depth limit l: nodes at depth l are treated as if they have no successors.
- Incomplete if we choose l < d: the shallowest goal is beyond the depth limit.
- Nonoptimal if we choose l > d.
- Time complexity: O(b^l).
- Space complexity: O(bl).
Depth-first search can be viewed as a special case of depth-limited search with l = infinity. Depth-limited search can fail in two ways: it returns failure when the problem has no solution, and cutoff when there is no solution within the depth limit.
5. Iterative deepening depth-first search
Iterative deepening is often used in combination with depth-first search to find the best depth limit. It gradually increases the limit (first 0, then 1, then 2, and so on) until a goal is found. It combines the benefits of depth-first and breadth-first search:
- Space complexity: O(bd).
- Total number of nodes generated: (d)b + (d-1)b^2 + ... + (1)b^d, so the time complexity is O(b^d).
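The scheme above can be sketched as a depth-limited search wrapped in a loop over increasing limits. The tiny tree and the `max_depth` safeguard are our own illustrative choices:

```python
# Iterative deepening sketch: run depth-limited search with limits
# 0, 1, 2, ... until a goal is found. The tree below is ours.
TREE = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
}

def depth_limited_search(state, goal, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return None          # cutoff: treat this node as having no successors
    for child in TREE.get(state, []):
        result = depth_limited_search(child, goal, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(start, goal, max_depth=10):
    for limit in range(max_depth + 1):   # gradually increase the limit
        result = depth_limited_search(start, goal, limit)
        if result is not None:
            return result
    return None

assert depth_limited_search("A", "G", 1) is None   # goal beyond the limit
assert iterative_deepening_search("A", "G") == ["A", "C", "G"]
```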
Iterative deepening depth-first search
Four iterations of iterative deepening search on a binary tree (the figure shows the first three iterations).
(The figure shows the fourth iteration.)
When the search space is large and the depth of the solution is unknown, iterative deepening is the preferred uninformed search method in general. For b = 10 and d = 5:
N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100
6. Bidirectional search
Bidirectional search runs two simultaneous searches: one forward from the initial state and the other backward from the goal, stopping when the two searches meet in the middle.
Bidirectional search (continued)
Motivation: b^(d/2) + b^(d/2) is much less than b^d; this reduction in time complexity makes bidirectional search attractive. It has severe limitations, however:
- while searching backwards, the predecessors of a state x must be generated, which is not always easy to do;
- at least one of the two search trees must be kept in memory, so the space complexity is O(b^(d/2)).
Bidirectional search is complete and optimal (for uniform step costs) if both searches are breadth-first. When there is just one explicitly listed goal state (as in the 8-puzzle), backward search is very much like forward search. But if the goal test is only implicit ("checkmate" in chess), there is no efficient way to search backward from the goal.
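A minimal sketch of bidirectional breadth-first search on an undirected graph (the graph, and the simple check-on-generation meeting test, are our own choices; production implementations need more careful bookkeeping to guarantee optimality in all cases):

```python
# Bidirectional BFS sketch: expand both frontiers one level at a time
# and stop when a node generated by one search is known to the other.
from collections import deque

EDGES = {  # small undirected graph (ours)
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D", "F"], "F": ["E"],
}

def bidirectional_search(start, goal):
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}   # distances from each end
    q_f, q_b = deque([start]), deque([goal])
    while q_f and q_b:
        for q, dist, other in ((q_f, dist_f, dist_b),
                               (q_b, dist_b, dist_f)):
            for _ in range(len(q)):          # one full level per side
                state = q.popleft()
                for nbr in EDGES[state]:
                    if nbr in other:         # the two searches have met
                        return dist[state] + 1 + other[nbr]
                    if nbr not in dist:
                        dist[nbr] = dist[state] + 1
                        q.append(nbr)
    return None

assert bidirectional_search("A", "F") == 4   # A-B-D-E-F (or via C)
```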
Comparing uninformed search strategies
3.5 AVOIDING REPEATED STATES
One of the most important complications of the search process is the possibility of wasting time by expanding states that have already been encountered and expanded before. For some problems this possibility never comes up: the state space is a tree and there is only one path to each state (e.g., the 8-queens problem with the efficient formulation).
Avoiding repeated states (continued)
For some problems, repeated states are unavoidable: this includes all problems where the actions are reversible (route finding, sliding-block puzzles). The search trees for these problems are infinite, but if we prune some of the repeated states, we can cut the search tree down to a finite size. Eliminating repeated states generally yields an exponential reduction in search cost.
THE GRAPH-SEARCH ALGORITHM
Repeated states can cause a solvable problem to become unsolvable if the algorithm does not detect them. By modifying the general TREE-SEARCH algorithm to include a data structure called the closed list, we can detect repeats. The closed list stores every expanded node; if the current node matches a node on the closed list, it is discarded instead of being expanded.
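The closed-list idea can be sketched by adding one set to the breadth-first search skeleton. The `ROADS` fragment (ours) has reversible roads, so plain tree search would revisit states forever; the closed list makes the search terminate:

```python
# GRAPH-SEARCH sketch: TREE-SEARCH plus a closed list of expanded
# states. Repeated states are discarded instead of being expanded.
from collections import deque

ROADS = {  # fragment with reversible roads (ours)
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad", "Oradea"],
    "Sibiu": ["Arad", "Oradea", "Fagaras"],
    "Oradea": ["Zerind", "Sibiu"],
    "Timisoara": ["Arad"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Bucharest": ["Fagaras"],
}

def graph_search(start, goal):
    frontier = deque([[start]])
    closed = set()                    # every state expanded so far
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in closed:
            continue                  # repeated state: discard
        closed.add(state)
        for nxt in ROADS[state]:
            frontier.append(path + [nxt])
    return None

# Terminates despite reversible roads, and finds the shallowest route.
assert graph_search("Arad", "Bucharest") == \
    ["Arad", "Sibiu", "Fagaras", "Bucharest"]
```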
Graph search
On problems with many repeated states, GRAPH-SEARCH is much more efficient than TREE-SEARCH. Its worst-case time and space requirements are proportional to the size of the state-space graph, which may be much smaller than O(b^d). It can, however, miss an optimal solution: when a repeated state is detected (two paths found to the same state), the algorithm always discards the newly discovered path, which might have a smaller cost.
3.6 SEARCHING WITH PARTIAL INFORMATION
So far, we have assumed that the environment is fully observable and deterministic. In such environments the agent knows which state it is in and can calculate exactly which state results from any sequence of actions. What happens when knowledge of the states or actions is incomplete?
Searching with partial information (continued)
Different types of incompleteness lead to three distinct problem types:
- Sensorless problems: the agent has no sensors.
- Contingency problems: the environment is partially observable or the actions are uncertain.
- Exploration problems: the states and actions of the environment are unknown.
Sensorless problems
Suppose the vacuum agent knows all the effects of its actions, but has no sensors (so the environment is not fully observable). Is there any hope of solving the problem? Yes, because the agent still knows the following:
- the initial state is one of the set {1, 2, 3, 4, 5, 6, 7, 8};
- the action [Right] causes it to be in one of the states {2, 4, 6, 8};
- the sequence [Right, Suck] leaves it in {4, 8};
- the sequence [Right, Suck, Left, Suck] reaches the goal state 7, no matter what state it started in.
Such sets of possible states are called belief states.
Belief states
A belief state is the agent's current belief about the possible physical states it might be in: a set of physical states. We search in the space of belief states rather than physical states. If the physical state space has S states, the belief-state space has 2^S belief states, though only a portion of it is reachable.
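The sensorless vacuum example can be checked directly by tracking the belief state under each action. The state encoding is ours (location plus set of dirty squares, rather than the book's state numbers 1 through 8), but the action sequence is the one from the text:

```python
# Sensorless two-square vacuum world: track the belief state (set of
# possible physical states) under deterministic actions. Encoding ours.
def result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("L", dirt)
    if action == "Right":
        return ("R", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})

def predict(belief, action):
    """Image of a belief state under a deterministic action."""
    return {result(s, action) for s in belief}

# Initially the agent could be in any of the 8 physical states.
belief = {(loc, frozenset(d))
          for loc in "LR"
          for d in [set(), {"L"}, {"R"}, {"L", "R"}]}
assert len(belief) == 8

for action in ["Right", "Suck", "Left", "Suck"]:
    belief = predict(belief, action)

# The sequence funnels every possibility into the single goal state:
# agent on the left, nothing dirty.
assert belief == {("L", frozenset())}
```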
A solution is a path to a belief state all of whose members are goal states.
Now suppose the environment is also nondeterministic, so actions may have several possible outcomes. For example, under "Murphy's Law" the Suck action sometimes deposits dirt on the carpet, but only if there is no dirt there already. The sensorless problem then becomes unsolvable: the agent cannot tell whether a square is dirty, and hence cannot tell whether Suck will clean it up or create more dirt.
Contingency problems
In a contingency problem, the agent can obtain new information from its sensors after acting. Suppose the agent is in the Murphy's-Law world and has a position sensor and a local dirt sensor, but no sensor capable of detecting dirt in other squares. After the percept [L, Dirty], the sequence [Suck, Right, Suck] may or may not succeed: no fixed action sequence guarantees a solution to this problem.
Contingency problems (continued)
A solution can be found if we do not insist on a fixed action sequence: [Suck, Right, if [R, Dirty] then Suck] is a solution. Algorithms for these problems are more complex than standard search algorithms. They also suggest a different, useful agent design: rather than considering in advance every possible contingency that might arise during execution, the agent starts acting and sees which contingencies do arise.
Thank you for your time.. :)