Informed Search Strategies


Informed Search Strategies

5. Branch & Bound Search (Uniform Cost Search): It expands the least-cost partial path, which is why it is sometimes called uniform cost search. The function g(X) assigns a cumulative cost to the path from the Start node to X, obtained by applying the sequence of operators that composes the path. For example, in the traveling salesman problem g(X) may be the actual distance traveled from Start to the current node X. During the search there are many incomplete paths contending for further consideration. The shortest one is extended one level, creating as many new incomplete paths as there are branches. These new paths, along with the old ones, are sorted on the values of g, and again the shortest path is extended.
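Since branch & bound, as described here, repeatedly extends the cheapest partial path, it maps naturally onto a priority queue keyed on g. The following is a minimal Python sketch; the dict-of-edge-lists graph format and the name uniform_cost_search are illustrative assumptions, not part of the original slides.

import heapq

def uniform_cost_search(graph, start, goal):
    """Branch & bound / uniform cost search: always extend the
    cheapest partial path (lowest cumulative cost g)."""
    # Frontier holds (g, node, path) triples ordered by g.
    frontier = [(0, start, [start])]
    best_g = {start: 0}            # cheapest known cost to each node
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g         # first complete path popped is optimal
        for neighbor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            # Dynamic-programming improvement: drop redundant, costlier paths.
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g, neighbor, path + [neighbor]))
    return None, float("inf")

# Example: graph as {node: [(neighbor, edge_cost), ...]}
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
print(uniform_cost_search(graph, "S", "G"))   # (['S', 'A', 'B', 'G'], 4)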

Since the shortest partial path is always chosen for extension, the first path to reach the destination is certain to be optimal, but the method is not guaranteed to find a solution quickly. If the cost of every operator is 1 (so that g(X) is simply the depth of X), the method degenerates to simple breadth-first search. From the AI point of view it is as uninformed as depth-first and breadth-first search. It can be improved by augmenting it with dynamic programming, i.e. deleting those partial paths that are redundant (costlier paths to an already-reached node).

6. Hill Climbing: Adding a quality measurement turns depth-first search into hill climbing (a variant of the generate and test strategy).

Generate and Test Algorithm
Start
- Generate a possible solution.
- Test to see if it is a goal.
- If not, go to Start; else quit.
End

Search efficiency may improve if there is some way of ordering the choices so that the most promising nodes are explored first. To move through a tree of paths using hill climbing, proceed as in depth-first search but order the choices according to some heuristic (a measure of the remaining distance from the current state to the goal state). Heuristics are criteria for deciding which among several alternatives is likely to be the most effective in achieving some goal. A minimal sketch of the generate-and-test loop is given after this slide.
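As referenced above, a minimal generate-and-test loop might look like the sketch below; the callables generate_candidate and is_goal are hypothetical placeholders for whatever the problem supplies.

def generate_and_test(generate_candidate, is_goal, max_tries=10_000):
    """Generate-and-test: propose a candidate, test it, repeat."""
    for _ in range(max_tries):
        candidate = generate_candidate()   # generate a possible solution
        if is_goal(candidate):             # test to see if it is a goal
            return candidate
    return None                            # give up after max_tries attempts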

Heuristics are tentative methods, and their suitability depends on the application. For example, the straight-line ("as the crow flies") distance between two cities is a heuristic measure of remaining distance.

Algorithm (Simple hill climbing)
Store the root node in an OPEN list (used as a stack); Found = false;
While (OPEN ≠ empty and Found = false) Do
{
  Remove the top element from the OPEN list and call it NODE;
  If NODE is the goal node, then Found = true
  else find the successors (SUCCs), if any, of NODE, sort the SUCCs by estimated cost from each to the goal state, and add them to the front of the OPEN list.
} /* end while */
If Found = true then return Yes, otherwise return No, and Stop.
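The pseudocode above maps onto a short Python sketch. The heuristic h, the successor generator successors, and the goal test is_goal are assumed to be supplied by the caller; the stack-based loop mirrors the depth-first flavour of simple hill climbing.

def simple_hill_climbing(start, successors, h, is_goal):
    """Depth-first search whose choices are ordered by the heuristic h
    (estimated remaining cost to the goal): simple hill climbing."""
    open_list = [start]                      # OPEN list used as a stack
    while open_list:
        node = open_list.pop(0)              # remove the top element
        if is_goal(node):
            return node                      # Found = true
        succs = successors(node)
        # Sort successors so the most promising (lowest h) is tried first,
        # then push them onto the front of OPEN.
        succs.sort(key=h)
        open_list = succs + open_list
    return None                              # Found = false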

Problems in hill climbing: What do we do if the process reaches a position that is not a solution, but from which no move improves things? This happens when we have reached a local maximum, a plateau or a ridge.

Local maximum: a state that is better than all its neighbors but not better than some other states farther away, so all moves appear to be worse. Remedy: backtrack to some earlier state and try going in a different direction.

Plateau: a flat area of the search space in which a whole set of neighboring states have the same value, so it is not possible to determine the best direction. Remedy: make a big jump in some direction and try to reach a new section of the search space.

Ridge: an area of the search space that is higher than the surrounding areas but cannot be traversed by single moves in any one direction (a special kind of local maximum). Remedy: apply two or more rules before testing, i.e. move in several directions at once.

7. Beam Search: Like breadth-first search, beam search progresses level by level, but it only moves downward from the best W nodes at each level; all other nodes are ignored. The best nodes are chosen according to the cost associated with each node. W is called the width of the beam. If B is the branching factor, there are only W*B nodes under consideration at any depth, of which only W are selected. A minimal sketch of beam search appears after this slide.

8. Best First Search (expands the best partial path): Forward motion is always from the best open node found so far in the partially developed tree.
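As referenced above, a minimal sketch of beam search, assuming hypothetical helpers successors(node) (returning child nodes) and a heuristic cost h(node):

def beam_search(start, successors, h, is_goal, width):
    """Level-by-level search that keeps only the best `width` nodes per level."""
    level = [start]
    while level:
        # Expand every node kept at this level: up to width * B candidates.
        candidates = [child for node in level for child in successors(node)]
        for node in candidates:
            if is_goal(node):
                return node
        # Keep only the best W nodes (lowest heuristic cost); ignore the rest.
        level = sorted(candidates, key=h)[:width]
    return None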

Algorithm (Best First Search)
Form a list OPEN consisting of the root node; CLOSED = ∅; Initialize Found = false;
While (OPEN ≠ ∅ and Found = false) Do
{
  If the first element of OPEN is the goal node, then Found = true
  else remove it from the OPEN list and put it in the CLOSED list. Add its successors, if any, to the OPEN list.
  Sort the entire OPEN list by the value of some heuristic function that assigns to each node an estimate of the cost to reach the goal node.
}
If Found = true, then announce success, else announce failure. Stop.

If a state has been generated before, change its parent if the new path is better than the previous one, and update its cost.

Remarks: In hill climbing, sorting is done only on the successor nodes, whereas in best first search sorting is done on the entire queue. Best first search is not guaranteed to find an optimal solution, but it normally finds some solution faster than the other methods. Its performance varies directly with the accuracy of the heuristic evaluation function. A minimal sketch follows.
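A minimal sketch of (greedy) best first search, keeping OPEN as a priority queue ordered by the heuristic h. The helper names and the node representation are illustrative assumptions; for simplicity this sketch does not re-parent previously generated states.

import heapq
import itertools

def best_first_search(start, successors, h, is_goal):
    """Greedy best first search: always expand the open node with the
    lowest heuristic estimate h (estimated cost to reach the goal)."""
    tie = itertools.count()                    # tie-breaker for equal h values
    open_list = [(h(start), next(tie), start)] # OPEN as a priority queue on h
    closed = set()
    parent = {start: None}
    while open_list:
        _, _, node = heapq.heappop(open_list)
        if is_goal(node):
            path = []                          # recover the path via parent links
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)
        for child in successors(node):
            if child not in closed and child not in parent:
                parent[child] = node
                heapq.heappush(open_list, (h(child), next(tie), child))
    return None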

Termination conditions: Instead of terminating when a path is found, terminate when the shortest incomplete path is longer than the shortest complete path. In most problems of practical interest, the entire search space graph is too large to be represented explicitly in a computer's memory. The problem specification, however, contains rules that allow a computer program to generate the graph (tree) incrementally from the start node in any desired direction.

9. A* Algorithm: The A* algorithm (pronounced "A-star") (Hart, 1972) is a combination of branch & bound and best first search, augmented with the dynamic programming principle. The evaluation function (heuristic function) for a node N is defined as

f(N) = g(N) + h(N)

The function g is a measure of the cost of getting from the Start node (initial state) to the current node; it is the sum of the costs of applying each of the rules along the best known path to the node. The function h is an estimate of the additional cost of getting from the current node to the Goal node (final state); this is where knowledge about the problem domain is exploited. The A* algorithm is generally described as an OR-graph / tree search algorithm.

Algorithm (A*)
Initialize the OPEN list with the initial node; CLOSED list as empty; g = 0, f = h, Found = false;
While (OPEN ≠ empty and Found = false) Do
{
  Remove the node with the lowest value of f from OPEN to CLOSED and call it Best_Node.
  If Best_Node = Goal node then Found = true
  else
  {
    Generate the successors (SUCC) of Best_Node;
    For each SUCC do
    {
      - Establish SUCC's parent link. /* This link will help to recover the path once the solution is found */
      - Compute g(SUCC) = g(Best_Node) + cost of getting from Best_Node to SUCC.
      - If SUCC ∈ OPEN /* already generated but not yet processed */ then
        {
          Call the matched node OLD and add OLD to the list of Best_Node's successors. Discard SUCC and change the parent of OLD, if required:
          If g(SUCC) < g(OLD) then make Best_Node the parent of OLD and change the values of g and f for OLD;
          If g(SUCC) >= g(OLD) then do nothing.
        }
      - If SUCC ∈ CLOSED then
        {
          Call the matched node OLD and add OLD to the list of Best_Node's successors. Discard SUCC and change the parent of OLD, if required:
          If g(SUCC) < g(OLD) then make Best_Node the parent of OLD, change the values of g and f for OLD, and propagate the change to OLD's children using a depth-first traversal;
          If g(SUCC) >= g(OLD) then do nothing.
        }
      - If SUCC ∉ OPEN and SUCC ∉ CLOSED then
        {
          Add SUCC to the list of Best_Node's successors;
          Compute f(SUCC) = g(SUCC) + h(SUCC);
          Put SUCC on the OPEN list with its f value.
        }
    } /* end for each SUCC */
  } /* end else */
} /* end while */
If Found = true then report the best path else report failure. Stop.

Behavior of A* search. Underestimation: If we can guarantee that h never overestimates the actual cost from the current node to the goal, then the A* algorithm is guaranteed to find an optimal path to a goal, if one exists. Consider the following example.

(The example refers to a small search-tree figure that is not reproduced in this transcript, in which expanding node B leads to E and then F, while C is an alternative at the root.) Assume the cost of all arcs is 1. We find that f(E) = 5 = f(C). Suppose we resolve the tie in favor of E, the path we are currently expanding. Expanding E yields F with f = 6, so expansion along this path is stopped and we now expand C instead. Thus, by underestimating h(B) we have wasted some effort, but we eventually discover that B was farther from the goal than we thought; we then go back, try another path, and find the optimal path.

Overestimation: Now consider another situation. We expand B to E, E to F and F to G, giving a solution path of length 4. But suppose there is a direct path from D to a solution, giving a path of length 2. We will never find it, because overestimating h(D) keeps D from being expanded: we may find some other, worse solution without ever expanding D. So if h overestimates, we cannot be guaranteed to find the cheapest solution path.

Admissibility of A*: A search algorithm is admissible if, for any graph, it always terminates with an optimal path from the initial state to the goal state, whenever such a path exists. If the heuristic function h never overestimates (i.e. is an underestimate of) the actual cost from the current state to the goal state, it is called an admissible heuristic. So we can say that A* always terminates with the optimal path when h(x) is an admissible heuristic function.

Monotonicity: A heuristic function h is monotone if
1. ∀ states Xi and Xj such that Xj is a successor of Xi: h(Xi) - h(Xj) ≤ cost(Xi, Xj), i.e. the actual cost of going from Xi to Xj;
2. h(goal) = 0.
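On a finite graph the monotone (consistency) condition can be checked mechanically. The sketch below is an illustrative helper, assuming the same dict-of-edge-lists graph format used in the earlier examples.

def is_monotone(graph, h, goals):
    """Check h(Xi) - h(Xj) <= cost(Xi, Xj) on every edge, and h(goal) = 0."""
    for xi, edges in graph.items():
        for xj, cost in edges:
            if h(xi) - h(xj) > cost:
                return False              # consistency violated on edge (Xi, Xj)
    return all(h(g) == 0 for g in goals)  # heuristic must vanish at the goal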

In this case the heuristic is locally admissible, i.e. the search consistently finds the minimal path to each state it encounters. In other words, the monotone property means that the search space is everywhere locally consistent with the heuristic function employed, i.e. each state is reached along the shortest path from its ancestors. With a monotonic heuristic, if a state is rediscovered it is not necessary to check whether the new path is shorter. Every monotonic heuristic is admissible. A cost function f(n) is monotone if f(n) ≤ f(s(n)) ∀ n, where s(n) is a successor of n. For any admissible cost function f, we can construct a monotone admissible function.

Example: Solve the eight-puzzle problem using the A* algorithm.

Initial State        Goal State
2 8 3                1 2 3
1 6 4                8 _ 4
7 _ 5                7 6 5

Evaluation function: f(X) = g(X) + h(X), where
h(X) = the number of tiles not in their final position in state X,
g(X) = depth of node X in the search tree.
The initial node has f(initial_node) = 0 + 4 = 4 (tiles 1, 2, 6 and 8 are out of place). Apply the A* algorithm to solve it. The choice of evaluation function critically determines the search results. A sketch of this heuristic follows.
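The misplaced-tiles heuristic could be coded as below, assuming a puzzle state is represented as a 3x3 tuple of tuples with 0 for the blank (an illustrative representation, not prescribed by the slides).

GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))          # 0 denotes the blank

def misplaced_tiles(state, goal=GOAL):
    """h(X): number of tiles (blank excluded) not in their goal position."""
    return sum(1
               for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

initial = ((2, 8, 3),
           (1, 6, 4),
           (7, 0, 5))
print(misplaced_tiles(initial))   # 4, so f(initial) = g + h = 0 + 4 = 4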

Harder 8-puzzle instances are not solved effectively by this heuristic, for example:

Initial State        Goal State
2 1 6                1 2 3
4 _ 8                8 _ 4
7 5 3                7 6 5

A better estimate function needs to be devised (Nilsson).

10. Iterative-Deepening A* (IDA*): DFID (depth-first iterative deepening) can be combined with a best first heuristic search such as A*. Here successive iterations correspond not to increasing depth of search but rather to increasing values of the total cost of a path.

The algorithm works as follows: at each iteration, perform a depth-first search, cutting off a branch when its total cost f = g + h exceeds a given threshold. The threshold starts at the estimated cost of the initial state and increases with each iteration of the algorithm: the threshold used for the next iteration is the minimum of all the f values that exceeded the current threshold.

Given an admissible monotone cost function, IDA* will find a least-cost (optimal) solution if one exists. IDA* not only finds the cheapest path to a solution but uses far less space than A*, while expanding approximately the same number of nodes as A* in a tree search. An additional benefit of IDA* over A* is that it is simpler to implement, as there are no OPEN and CLOSED lists to be maintained: a simple recursion performs the depth-first search inside an outer loop that handles the iterations, as sketched below.
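A minimal IDA* sketch along those lines, reusing the hypothetical dict-of-edge-lists graph format from the earlier examples:

def ida_star(graph, start, goal, h):
    """Iterative-deepening A*: repeated cost-bounded depth-first searches.
    Each iteration cuts off any path whose f = g + h exceeds the threshold;
    the next threshold is the smallest f value that exceeded the current one."""
    def dfs(node, g, threshold, path):
        f = g + h(node)
        if f > threshold:
            return f, None                 # report the overflowed f value
        if node == goal:
            return f, path
        next_threshold = float("inf")
        for succ, cost in graph.get(node, []):
            if succ in path:
                continue                   # avoid cycles along the current path
            t, found = dfs(succ, g + cost, threshold, path + [succ])
            if found is not None:
                return t, found
            next_threshold = min(next_threshold, t)
        return next_threshold, None

    threshold = h(start)                   # initial threshold: estimate of start
    while threshold != float("inf"):
        threshold, solution = dfs(start, 0, threshold, [start])
        if solution is not None:
            return solution
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 7)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}.get
print(ida_star(graph, "S", "G", h))        # ['S', 'A', 'B', 'G']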