Topics
- Problem Solving
- Searching Methods
- Game Playing
Introduction
Problem solving is largely based on searching. Every search process can be viewed as a traversal of a directed graph in which each node represents a problem state and each arc represents a relationship between the states represented by the nodes it connects. The search process must find a path through the graph, starting at an initial state and ending in one or more final states. The graph is defined by the rules that specify the allowable moves in the search space. Most search programs represent the graph implicitly in the rules, to avoid combinatorial explosion, and explicitly generate only those parts that they decide to explore.
Introduction
- Goal: a description of a desired solution; it may be a state (8-puzzle) or a path (traveling salesman).
- Search space: the set of possible steps leading from the initial conditions to a goal.
- State: a snapshot of the problem at one stage of the solution. The idea is to find a sequence of operators that can be applied to a starting state until a goal state is reached.
- State space: the directed graph whose nodes are states and whose arcs are the operators that lead from one state to another.
Problem solving is carried out by searching through the space of possible solutions for ones that satisfy a goal.
Example: Water jug problem
Given two jugs, one holding 4 gallons and the other 3 gallons, the goal is to get exactly 2 gallons into the 4-gallon jug.
Assumptions:
- You can fill a jug from the pump
- You can pour water out of a jug onto the ground
- You can pour water from one jug to the other
Example: Water jug problem (state space representation)
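As an illustration of the state-space idea, here is a sketch in Python (the slides themselves use Lisp) that searches the water-jug state space breadth-first. The state encoding (x, y) and the helper names are my own, not from the original.

```python
from collections import deque

def successors(state):
    """Generate all states reachable from (x, y) in one move.
    x = gallons in the 4-gallon jug, y = gallons in the 3-gallon jug."""
    x, y = state
    results = set()
    results.add((4, y))                  # fill the 4-gallon jug from the pump
    results.add((x, 3))                  # fill the 3-gallon jug from the pump
    results.add((0, y))                  # empty the 4-gallon jug onto the ground
    results.add((x, 0))                  # empty the 3-gallon jug onto the ground
    pour = min(x, 3 - y)                 # pour from the 4-gallon into the 3-gallon
    results.add((x - pour, y + pour))
    pour = min(y, 4 - x)                 # pour from the 3-gallon into the 4-gallon
    results.add((x + pour, y - pour))
    results.discard(state)               # ignore moves that change nothing
    return results

def solve(start=(0, 0)):
    """Breadth-first search for a state with 2 gallons in the 4-gallon jug."""
    open_list = deque([[start]])
    closed = {start}
    while open_list:
        path = open_list.popleft()
        if path[-1][0] == 2:             # goal test: 2 gallons in the 4-gallon jug
            return path
        for nxt in successors(path[-1]):
            if nxt not in closed:
                closed.add(nxt)
                open_list.append(path + [nxt])
    return None
```

Because the search is breadth-first, the path returned uses the fewest moves; one such solution fills the 3-gallon jug twice and pours it into the 4-gallon jug.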
Search
Five important issues that arise in search techniques are:
- The direction of the search
- The topology of the search process
- Representation of the nodes
- Selecting applicable rules
- Using a heuristic function to guide the search
The Direction of the Search
- Forward: data-directed search; start the search from the initial state. To reason forward, the left sides of the rules (the preconditions) are matched against the current state and the right sides (the results) are used to generate new nodes until the goal is reached.
- Backward: goal-directed search; start the search from the goal state. To reason backward, the right sides are matched against the current node and the left sides are used to generate new nodes representing new goal states to be achieved.
The Direction of the Search
Factors influencing the choice between forward and backward chaining:
- Relative number of goal states to start states: move from the smaller set of states toward the larger.
- Branching factor: move in the direction with the lower branching factor.
- Explanation of reasoning: proceed in the direction that corresponds more closely with the way the user thinks.
The Direction of the Search: Examples
Branching factor: in theorem proving, the goal state is the theorem to be proved and the initial state is the set of axioms. From a small set of axioms a large number of theorems can be proved, and this large set of theorems must trace back to the small set of axioms. The branching factor is greater going forward from axioms to theorems, so backward reasoning is more appropriate.
If the branching factor is the same in both directions, then the relative number of start states to goal states determines the direction of search.
Bi-directional search starts from both ends and meets somewhere in between. The disadvantage of this technique is that the two searches may pass each other without meeting.
Explanation of reasoning
MYCIN, a program that diagnoses infectious diseases, uses backward reasoning to determine the cause of a patient's illness.
A doctor may reason as follows: if an organism has a certain set of properties (lab results), then it is likely that the organism is X. Yet the evidence is most likely documented in the reverse direction:
(IF (ORGANISM X) (PROPERTIES Y)) CF
The rules justify why certain tests should be performed.
The Topology of the Search
- Trees
The Topology of the Search
When expanding a node, check whether each generated node already exists:
- If it does not exist, add the node.
- If it already exists:
  - Set the node being expanded to point to the already existing node corresponding to its successor, rather than to the new one; the new one can be thrown away.
  - If looking for the best path, check whether the new path is better. If it is worse, do nothing. If it is better, record the new path as the correct path to use to get to the node, and propagate the corresponding change in cost down through the successor nodes as necessary.
A disadvantage of this topology is that cycles may occur, so there is no guarantee of termination.
Representation of the Nodes
- Arrays
- Ordered pairs
- Predicates
Representation of the Nodes (8-puzzle)
- State: the locations of the 8 numbered tiles
- Operators: the blank moves left, right, up, or down
- Goal test: the state matches the goal configuration
- Path cost: each step costs 1, i.e. the path length equals the search tree depth
Representation of the Nodes
Possible state representations in LISP (0 is the blank):
((0 2 3) (1 8 4) (7 6 5))   ; list of rows
((0 1 7) (2 8 6) (3 4 5))   ; list of columns
The choice of representation depends on how easy the states are to compare, operate on, and store (size).
Operators
Functions from a state to a subset of states:
- drive to a neighboring city
- place a piece on a chess board
- add a person to a meeting schedule
- slide a tile in the 8-puzzle
Selecting applicable rules involves:
- Matching
- Conflict resolution: order (priority), recency
- Indexing
Using a heuristic function to guide the search
It is frequently possible to find rules that increase the chance of success. Such rules are termed heuristics, and a search involving them is termed a heuristic search.
A heuristic function maps a problem state description to a measure of desirability.
Heuristics for the 8-puzzle problem could be:
- the number of displaced tiles
- the total distance of the displaced tiles from their goal positions
Implementing heuristic evaluation functions (example: 8-puzzle)
(Figure omitted: a start state and the goal state, scored by three heuristics.)
Three candidate evaluation functions, from coarsest to most accurate:
(1) number of tiles out of place (fails to distinguish some states)
(2) sum of the distances of the tiles out of place
(3) 2 x the number of direct tile reversals
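The first two heuristics can be sketched in Python. The goal configuration below is an assumption (the 1-8 ring layout commonly used with these slides); states are flat tuples read row by row, with 0 for the blank.

```python
# Assumed goal layout (1-8 ring around the center blank):
# 1 2 3
# 8 _ 4
# 7 6 5
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def tiles_out_of_place(state, goal=GOAL):
    """Heuristic (1): count tiles (not the blank) that are off their goal square."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal=GOAL):
    """Heuristic (2): sum of each tile's horizontal + vertical distance
    from its goal square (the Manhattan distance)."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                    # the blank does not count
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Heuristic (2) dominates heuristic (1) on every state, which is why the slide calls it more accurate: whenever a tile is out of place it contributes at least 1 to the Manhattan sum.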
Evaluation of Search Strategies
- Time complexity: how many nodes are expanded?
- Space complexity: how many nodes must be stored on the node list at any given time?
- Completeness: if a solution exists, is it guaranteed to be found?
- Optimality: is the best solution guaranteed to be found?
Components of Implicit State-Space Graphs
There are three basic components to an implicit representation of a state-space graph:
1. A description with which to label the start node. This description is some data structure modeling the initial state of the environment.
2. Functions that transform a state description representing one state of the environment into one that represents the state resulting after an action. These functions are usually called operators. When an operator is applied to a node, it generates one of that node's successors.
3. A goal condition, which can be either a true/false-valued function on state descriptions or a list of actual instances of state descriptions that correspond to goal states.
Types of Search
There are three broad classes of search processes:
1) Uninformed (blind) search: there is no specific reason to prefer one part of the search space to any other in finding a path from the initial state to the goal state; systematic, exhaustive search.
- Depth-first search
- Breadth-first search
Types of Search
2) Informed (heuristic) search: there is specific information to focus the search.
- Hill climbing
- Branch and bound
- Best-first
- A*
3) Game playing: there are at least two partners opposing each other.
- Minimax (alpha-beta pruning)
- Means-ends analysis
Search Algorithms
Task: find a solution path through the problem space.
- Keep track of paths from the start to the goal nodes
- Define the optimal path if there is more than one solution (depending on circumstances)
- Avoid loops (which prevent reaching the goal)
Depth-first search
Uses a generate-and-test strategy: nodes are generated by applying the applicable rules, and each generated node is tested to see whether it is the goal. Nodes are generated systematically; it is an exhaustive search of the problem space.
1. Form a one-element queue consisting of the root node.
2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
   a) If the first element is the goal, do nothing.
   b) If the first element is not the goal node, remove it from the queue and add its children, if any, to the front of the queue.
3. If the goal node has been found, announce success; otherwise announce failure.
Depth-first search
Two lists keep track of progress through the state space:
- open: states generated but whose children have not yet been examined
- closed: states already examined

begin
  open := [Start];                                      /* initialise */
  closed := [];
  while open <> [] do
    remove leftmost state from open, call it X;
    if X is a goal then return (success)
    else begin
      generate children of X;
      put X on closed;
      eliminate children of X on open or closed;        /* loop check */
      put remaining children on the left end of open    /* stack behavior */
    end
  end;
  return (failure)                                      /* no states left */
end.
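The open/closed-list scheme above can be sketched directly in Python. The adjacency dictionary below is a hypothetical example graph, not from the slides.

```python
def depth_first(start, goal, children):
    """Depth-first search following the open/closed-list pseudocode.
    children(state) returns the list of successor states."""
    open_list = [start]           # states generated but not yet examined
    closed = []                   # states already examined
    while open_list:
        x = open_list.pop(0)      # remove leftmost state from open
        if x == goal:
            return 'success'
        closed.append(x)
        # loop check: drop children already on open or closed
        new = [c for c in children(x) if c not in open_list and c not in closed]
        open_list = new + open_list   # left end of open: stack behavior
    return 'failure'

# A small hypothetical graph as an adjacency dictionary.
graph = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}
```

Putting new children on the left end makes open behave as a stack, which is exactly what turns this generic loop into depth-first search.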
Depth-first search
(Figure omitted: node visit order on an example tree.)
Queuing function: enqueue at the left (front).
Depth-first search
(Figure omitted: evolution of the open and closed lists on an example tree.)
Depth-first Evaluation
With branching factor b, solution depth d, and maximum depth m:
- Incomplete: may wander down the wrong path; bad for deep or infinite-depth state spaces.
- Time: b^m nodes expanded (worst case).
- Space: b*m (only the current path and the siblings along it).
- Does not guarantee the shortest path. Good when there are many shallow goals.
Breadth-first search
Breadth-first search first explores all paths of length one, then length two, and so on; if a solution exists at depth N, it will be found when the paths of length N are explored. There is a guarantee of finding a solution if one exists.
It finds the shortest path to the solution, though this may not be the least-cost one.
Breadth-first search
The algorithm:
1. Form a one-element queue consisting of the root node.
2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
   a) If the first element is the goal, do nothing.
   b) If the first element is not the goal node, remove it from the queue and add its children, if any, to the back of the queue.
3. If the goal node has been found, announce success; otherwise announce failure.
Breadth-first search

begin
  open := [Start];                                      /* initialise */
  closed := [];
  while open <> [] do
    remove leftmost state from open, call it X;
    if X is a goal then return (success)
    else begin
      generate children of X;
      put X on closed;
      eliminate children of X on open or closed;        /* loop check */
      put remaining children on the right end of open   /* queue behavior */
    end
  end;
  return (failure)                                      /* no states left */
end.
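The only change from the depth-first pseudocode is where children go: on the right end of open, which makes it a FIFO queue. A Python sketch, on the same kind of hypothetical adjacency-dictionary graph as before:

```python
from collections import deque

def breadth_first(start, goal, children):
    """Breadth-first search: identical in shape to depth-first search,
    except that new children are enqueued at the right end of open."""
    open_list = deque([start])
    closed = []
    while open_list:
        x = open_list.popleft()   # remove leftmost state from open
        if x == goal:
            return 'success'
        closed.append(x)
        for c in children(x):
            if c not in open_list and c not in closed:   # loop check
                open_list.append(c)                      # right end: queue behavior
    return 'failure'

# A small hypothetical graph as an adjacency dictionary.
graph = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}
```

A deque is used because appending and popping at both ends is O(1), so the same structure can serve as either a stack or a queue.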
Breadth-first search
(Figure omitted: node visit order, showing where the goal test occurs, on an example tree.)
Queuing function: enqueue at the end (children of the expanded node are added at the end of the list).
Breadth-first search
(Figure omitted: evolution of the open and closed lists on an example tree.)
Implementing Breadth-First and Depth-First Search
The Lisp implementation of breadth-first search maintains the open list as a first-in, first-out (FIFO) structure.

(defun breadth-first ()
  (cond ((null *open*) nil)
        (t (let ((state (car *open*)))
             (cond ((equal state *goal*) 'success)
                   (t (setq *closed* (cons state *closed*))
                      (setq *open*
                            (append (cdr *open*)
                                    (generate-descendants state *moves*)))
                      ;; *moves*: list of the functions that generate the moves
                      (breadth-first)))))))
Implementing Breadth-First and Depth-First Search
generate-descendants takes a state and returns a list of its children.

(defun generate-descendants (state moves)
  (cond ((null moves) nil)
        (t (let ((child (funcall (car moves) state))
                 (rest (generate-descendants state (cdr moves))))
             (cond ((null child) rest)
                   ((member child rest :test #'equal) rest)
                   ((member child *open* :test #'equal) rest)
                   ((member child *closed* :test #'equal) rest)
                   (t (cons child rest)))))))

By binding the global variable *moves* to an appropriate list of move functions, this search algorithm may be used to search any state space in breadth-first fashion.
Breadth-first Evaluation
With branching factor b and solution depth d:
- Complete: it will find the solution if one exists.
- Time: 1 + b + b^2 + ... + b^d, i.e. O(b^d).
- Space: O(b^d), since the whole frontier at the current depth must be stored.
- Space is a bigger problem than time in most cases, though time is a major problem nonetheless.
Heuristic Search
Reasons for heuristics:
- an exact solution is impossible, but heuristics lead to a promising path
- there is no exact solution, but an acceptable one exists
- heuristics are fallible, due to limited information
Intelligence for a system with limited processing resources consists in making wise choices of what to do next.
Heuristic search = search algorithm + measure.
Hill climbing
Hill climbing is depth-first search with a heuristic measurement that orders the choices as nodes are expanded. The algorithm is the same as depth-first search; only step 2b differs slightly.
1. Form a one-element queue consisting of the root node.
2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
   a) If the first element is the goal, do nothing.
   b) If the first element is not the goal node, remove it from the queue, sort its children, if any, by estimated remaining distance, and add them to the front of the queue.
3. If the goal node has been found, announce success; otherwise announce failure.
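A sketch of this variant in Python: it is the earlier depth-first loop with the one-line change in step 2b, sorting each node's children by a heuristic h before pushing them. The example graph and heuristic values are hypothetical.

```python
def hill_climbing(start, goal, children, h):
    """Depth-first search whose step 2b sorts each node's children by
    h (estimated remaining distance) before putting them on the front."""
    open_list = [start]
    closed = []
    while open_list:
        x = open_list.pop(0)
        if x == goal:
            return 'success'
        closed.append(x)
        kids = [c for c in children(x) if c not in open_list and c not in closed]
        kids.sort(key=h)              # best (lowest h) child is tried first
        open_list = kids + open_list  # front of the queue, as in depth-first
    return 'failure'

# Hypothetical graph and heuristic values for illustration.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': [], 'G': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 2, 'G': 0}
```

From A, the child C (h = 1) is tried before B (h = 2), so the goal G is reached without ever expanding B's subtree.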
Hill climbing
Problems that may arise:
- Local maximum: a state that is better than all its neighbors but not better than some other states farther away. At a local maximum, all moves appear to make things worse.
- Plateau: a whole set of neighboring states have the same value, so it is not possible to determine the best direction.
- Ridge: an area higher than its surroundings that cannot be traversed by a single move in any one direction.
Hill climbing
Some ways of dealing with these:
- Backtrack (local maximum).
- Make a big jump in one direction to try to get to a new section of the search space (plateau).
- Apply two or more rules before doing the test; this corresponds to moving in several directions at once (ridge).
Best-first Search
Best-first search is a combination of the depth-first and breadth-first search algorithms. Forward motion is from the best (most promising) open node so far, no matter where it is in the partially developed tree.
The second step of the algorithm becomes:
2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
   a) If the first element is the goal, do nothing.
   b) If the first element is not the goal node, remove it from the queue, add its children, if any, to the queue, and sort the entire queue by estimated remaining distance.
Best-First Search

procedure best_first_search;
begin
  open := [Start];
  closed := [];
  while open <> [] do
    remove leftmost state from open, call it X;
    if X = goal then return path from Start to X
    else begin
      generate children of X;
      for each child of X do
        case
          the child is not on open or closed:
            assign the child a heuristic value;
            add the child to open
          the child is already on open:
            if the child was reached by a shorter path
            then give the state on open the shorter path
          the child is already on closed:
            if the child was reached by a shorter path
            then remove the state from closed
        end case;
      put X on closed;
      re-order the states on open by heuristic merit
    end
  end;
  return (failure)
end.
Example of best-first search
1. open = [A5]; closed = []
2. evaluate A5; open = [B4, C4, D6]; closed = [A5]
3. evaluate B4; open = [C4, E5, F5, D6]; closed = [B4, A5]
4. evaluate C4; open = [H3, G4, E5, F5, D6]; closed = [C4, B4, A5]
5. evaluate H3; open = [O2, P3, G4, E5, F5, D6]; closed = [H3, C4, B4, A5]
6. evaluate O2; open = [P3, G4, E5, F5, D6]; closed = [O2, H3, C4, B4, A5]
7. evaluate P3; the solution is found!
(Figure omitted: the example search tree with nodes A through T and their heuristic values.)
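A compact Python sketch of best-first search using a priority queue for the open list. The tree below is a partial reconstruction of the example trace (children the trace does not mention are left out), so it is illustrative only.

```python
import heapq

def best_first(start, goal, children, h):
    """Best-first search: always expand the open node with the lowest
    heuristic value, wherever it sits in the partially developed tree."""
    open_heap = [(h(start), start)]
    closed = set()
    while open_heap:
        _, x = heapq.heappop(open_heap)   # best open node so far
        if x == goal:
            return 'success'
        closed.add(x)
        for c in children(x):
            if c not in closed:
                heapq.heappush(open_heap, (h(c), c))
    return 'failure'

# Part of the example tree, reconstructed from the trace on this slide.
tree = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G', 'H'],
        'H': ['O', 'P'], 'D': [], 'E': [], 'F': [], 'G': [], 'O': [], 'P': []}
hvals = {'A': 5, 'B': 4, 'C': 4, 'D': 6, 'E': 5, 'F': 5,
         'G': 4, 'H': 3, 'O': 2, 'P': 3}
```

Run on this tree, the expansion order matches the trace: A, then B and C (both 4), then H (3), then O (2), and finally P, the goal.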
Branch and Bound Search
The shortest path so far is always chosen for expansion; the first path to reach the destination is optimal.
To be certain that the supposed solution is not longer than one or more incomplete paths, instead of terminating as soon as a path is found, terminate only when the shortest incomplete path is longer than the shortest complete path.
Branch and Bound Search
To conduct a branch and bound search:
1. Form a queue of partial paths. Let the initial queue consist of the zero-length, zero-step path from the root node to nowhere.
2. Until the queue is empty or the goal has been reached, determine if the first path in the queue reaches the goal node.
   a) If the first path reaches the goal, do nothing.
   b) If the first path does not reach the goal node:
      i) remove the first path from the queue;
      ii) form new paths from the removed path by extending it one step;
      iii) add the new paths to the queue;
      iv) sort the queue by the cost accumulated so far, with the least-cost paths in front.
3. If the goal node has been found, announce success; otherwise announce failure.
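The queue of partial paths sorted by accumulated cost can be sketched with a heap. The weighted graph below is a hypothetical example; neighbors(state) yields (next_state, step_cost) pairs.

```python
import heapq

def branch_and_bound(start, goal, neighbors):
    """Branch and bound: keep a queue of partial paths ordered by the
    cost accumulated so far, always extending the cheapest path first."""
    queue = [(0, [start])]            # zero-length, zero-cost path from the root
    while queue:
        cost, path = heapq.heappop(queue)
        if path[-1] == goal:          # first path to reach the goal is optimal
            return cost, path
        for nxt, step in neighbors(path[-1]):
            if nxt not in path:       # avoid cycles within a path
                heapq.heappush(queue, (cost + step, path + [nxt]))
    return None

# A small hypothetical weighted graph.
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 5)],
         'B': [('G', 1)], 'G': []}
```

On this graph the cheap-looking direct edges are traps: the optimal route S-A-B-G costs 3, and it is popped before the costlier complete paths, so the first goal-reaching path is indeed the best.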
Branch and Bound Search
Adding underestimates improves efficiency:
c(total length) = d(distance already traveled) + e(estimated distance remaining)
If the guesses are not perfect, a bad overestimate somewhere along the true optimal path may cause us to wander off that optimal path permanently. But underestimates cannot cause the right path to be overlooked: an underestimate of the distance remaining yields an underestimate of the total path length.
u(total path length) = d(distance already traveled) + u(distance remaining)
Branch and Bound Search
If a total path is found by repeatedly extending the path with the smallest underestimate, no further work need be done once all incomplete path distance estimates are longer than some complete path distance. This is true because the real distance along a completed path cannot be shorter than an underestimate of that distance.
To conduct a branch and bound search with underestimates, change step 2b(iv):
   iv) sort the queue by the sum of the cost accumulated so far and a lower-bound estimate of the cost remaining, with the least-cost paths in front.
A* Search
The dynamic-programming principle holds that, when looking for the best path from S to G, all paths from S to any intermediate node I, other than the minimum-length path from S to I, can be ignored.
The A* procedure is branch and bound search in a graph space with an estimate of the remaining distance, combined with the dynamic-programming principle.
If it can be shown that h(n) never overestimates the cost to reach the goal, then it can be shown that the A* algorithm is both complete and optimal.
A* Search
To do A* search with lower-bound estimates, extend step 2b:
   iv) sort the queue by the sum of the cost accumulated so far and a lower-bound estimate of the cost remaining, with the least-cost paths in front;
   v) if two or more paths reach a common node, delete all those paths except the one that reaches the common node with the minimum cost.
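A* can be sketched by adding two things to the branch-and-bound loop: ordering by g + h, and keeping only the cheapest known path to each node (the dynamic-programming step). The graph and heuristic values are hypothetical; h is admissible (it never overestimates).

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A*: branch and bound ordered by g + h, with the dynamic-programming
    principle applied via best_g (cheapest known cost to each node)."""
    open_heap = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, x, path = heapq.heappop(open_heap)
        if x == goal:
            return g, path
        for nxt, step in neighbors(x):
            g2 = g + step
            if g2 < best_g.get(nxt, float('inf')):   # keep only the cheaper path
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph with an admissible heuristic.
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 5)],
         'B': [('G', 1)], 'G': []}
hvals = {'S': 2, 'A': 2, 'B': 1, 'G': 0}
```

Because h underestimates the true remaining cost everywhere (e.g. h('S') = 2 against a true cost of 3), the first goal popped carries the optimal cost.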
Recursive Search in Prolog

predicates
  path(integer, integer, integer*)
  member(integer, integer*)
clauses
  path(Z, Z, _L).
  path(X, Y, L) :-
    move(X, Z),
    not(member(Z, L)),
    path(Z, Y, [Z|L]).

  /* X is a member of the list if X is the head of the list
     or X is a member of the tail */
  member(X, [X|_T]).
  member(X, [_Y|T]) :- member(X, T).
goal
  path(1, 3, []).
Farmer-Wolf-Goat-Cabbage Problem
A farmer with his wolf, goat, and cabbage comes to the edge of a river they wish to cross. There is a boat at the river's edge, but of course only the farmer can row it, and the boat can carry at most the farmer and one other thing. If the wolf is ever left alone with the goat, the wolf will eat the goat. If the goat is ever left alone with the cabbage, the goat will eat the cabbage. Devise a sequence of crossings of the river so that all four characters arrive safely on the other side of the river.
The problem implementation follows.
Search Algorithms in LISP
Example: the farmer, wolf, goat, and cabbage problem.
- Uses depth-first search.
- States are represented as lists of four elements, giving the bank (w or e) of the farmer, wolf, goat, and cabbage in that order. E.g. (w e w e) represents the farmer and the goat on the west bank, and the wolf and the cabbage on the east bank.
- make-state takes as arguments the locations of the farmer, wolf, goat, and cabbage and returns a state. Four access functions, farmer-side, wolf-side, goat-side, and cabbage-side, take a state and return the location of an individual.
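Before the Lisp walkthrough, here is a self-contained Python sketch of the same problem under the same (farmer, wolf, goat, cabbage) state representation. It uses breadth-first search rather than the depth-first search of the Lisp version, and all names are my own.

```python
from collections import deque

# State: (farmer, wolf, goat, cabbage), each 'w' (west) or 'e' (east).
START, GOAL = ('w', 'w', 'w', 'w'), ('e', 'e', 'e', 'e')

def opposite(side):
    return 'e' if side == 'w' else 'w'

def safe(state):
    f, w, g, c = state
    if g == w and f != w:   # wolf left alone with the goat
        return False
    if g == c and f != g:   # goat left alone with the cabbage
        return False
    return True

def successors(state):
    """The farmer crosses alone or with one passenger from his side."""
    f, w, g, c = state
    moves = [(opposite(f), w, g, c)]                    # farmer-takes-self
    if w == f:
        moves.append((opposite(f), opposite(w), g, c))  # farmer-takes-wolf
    if g == f:
        moves.append((opposite(f), w, opposite(g), c))  # farmer-takes-goat
    if c == f:
        moves.append((opposite(f), w, g, opposite(c)))  # farmer-takes-cabbage
    return [s for s in moves if safe(s)]

def solve():
    """Breadth-first search from START to GOAL over safe states."""
    open_list = deque([[START]])
    closed = {START}
    while open_list:
        path = open_list.popleft()
        if path[-1] == GOAL:
            return path
        for nxt in successors(path[-1]):
            if nxt not in closed:
                closed.add(nxt)
                open_list.append(path + [nxt])
    return None
```

Because breadth-first search is used, the path found uses the minimum number of crossings (seven), visiting only safe states along the way.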
(defun make-state (f w g c)
  (list f w g c))

(defun farmer-side (state)
  (nth 0 state))

(defun wolf-side (state)
  (nth 1 state))

(defun goat-side (state)
  (nth 2 state))

(defun cabbage-side (state)
  (nth 3 state))
(defun farmer-takes-self (state)
  (make-state (opposite (farmer-side state))
              (wolf-side state)
              (goat-side state)
              (cabbage-side state)))

In the above procedure a new state is returned regardless of whether it is safe.
A safe function should be defined so that it returns nil if a state is not safe.

> (safe '(w w w w))   ; safe state, returned unchanged
> (safe '(e w w e))   ; wolf eats goat, returns nil

(defun safe (state)
  (cond ((and (equal (goat-side state) (wolf-side state))
              (not (equal (farmer-side state) (wolf-side state))))
         nil)   ; wolf eats goat
        ((and (equal (goat-side state) (cabbage-side state))
              (not (equal (farmer-side state) (goat-side state))))
         nil)   ; goat eats cabbage
        (t state)))
;; safe returns nil for unsafe states, filtering them out
(defun farmer-takes-self (state)
  (safe (make-state (opposite (farmer-side state))
                    (wolf-side state)
                    (goat-side state)
                    (cabbage-side state))))

(defun opposite (side)
  (cond ((equal side 'e) 'w)
        ((equal side 'w) 'e)))
(defun path (state goal)
  (cond ((equal state goal) 'success)
        (t (or (path (farmer-takes-self state) goal)
               (path (farmer-takes-wolf state) goal)
               (path (farmer-takes-goat state) goal)
               (path (farmer-takes-cabbage state) goal)))))

To prevent path from attempting to generate the children of a nil state, it must first check whether the state is nil; if it is, path should return nil.
As defined, path can also go into a loop, repeating the same states over and over. A third parameter, been-list, which keeps track of the visited states, is therefore passed to path; the member predicate is used to make sure the current state is not already on the been-list.
(defun path (state goal been-list)
  (cond ((null state) nil)
        ((equal state goal)
         (reverse (cons state been-list)))
        ((not (member state been-list :test #'equal))
         (or (path (farmer-takes-self state) goal (cons state been-list))
             (path (farmer-takes-wolf state) goal (cons state been-list))
             (path (farmer-takes-goat state) goal (cons state been-list))
             (path (farmer-takes-cabbage state) goal (cons state been-list))))))
*moves* is a list of the functions that generate the moves. In the farmer, wolf, goat, and cabbage problem, *moves* would be defined by:

(setq *moves*
      '(farmer-takes-self farmer-takes-wolf
        farmer-takes-goat farmer-takes-cabbage))

(defun run-breadth (start goal)
  (setq *open* (list start))
  (setq *closed* nil)
  (setq *goal* goal)
  (breadth-first))
generate-descendants takes a state and returns a list of its children. It also disallows duplicates in the list of children and eliminates any children that are already on the open or closed list.

(defun generate-descendants (state moves)
  (cond ((null moves) nil)
        (t (let ((child (funcall (car moves) state))
                 (rest (generate-descendants state (cdr moves))))
             (cond ((null child) rest)
                   ((member child rest :test #'equal) rest)
                   ((member child *open* :test #'equal) rest)
                   ((member child *closed* :test #'equal) rest)
                   (t (cons child rest)))))))

Here rest is the list of children generated by the remaining moves. By binding the global variable *moves* to an appropriate list of move functions, this search algorithm may be used to search any state space in breadth-first fashion.
Breadth-First and Depth-First Search
The Lisp implementation of breadth-first search maintains the open list as a first-in, first-out (FIFO) structure. *open*, *closed*, and *goal* are defined as global variables.

(defun breadth-first ()
  (cond ((null *open*) nil)
        (t (let ((state (car *open*)))
             (cond ((equal state *goal*) 'success)
                   (t (setq *closed* (cons state *closed*))
                      (setq *open*
                            (append (cdr *open*)
                                    (generate-descendants state *moves*)))
                      (breadth-first)))))))