
1
Notes for CS3310 Artificial Intelligence Part 8: Heuristic search Prof. Neil C. Rowe Naval Postgraduate School Version of January 2006

2
Search
Search = reasoning about "states". A state is a set of facts true at some instant of time. That is, a state is a snapshot of facts. (Or, to save space, just the relevant facts in some problem.) Search and states permit reasoning about time. In artificial intelligence, states are discrete, and transitions between states ("branches") are discrete (all at once) too. Search among states permits hypothetical reasoning about the effects of actions. Each possible action can result in a different state. A state can be represented by a linked list. Possible state transitions can be represented by a directed graph, a "search graph". Search is the oldest area of artificial intelligence.

3
More search terms
State: a set of facts (whose predicate names are usually past participles, e.g. "opened", and abstract nouns, e.g. "location").
Successor state: an immediate next state.
Operator: a label on a state transition (usually a verb, e.g. "open").
Starting state: the first state of a search.
Goal states: states in which objectives have been achieved, so search can stop.
Level: the number of state transitions from the starting state to a given state.
Preconditions of an operator: facts that must hold in a state before you can apply the operator to it.
Postconditions of an operator: facts that become true after you apply the operator from a state.

4
Search examples
Intercity route planning: given a road map, find the best route between two cities for a car.
City route planning: given intersections A and B, find a route between them for a car.
Car repair: given a part to repair, access it, fix it, and put the car back together.
Smarter forward chaining: forward chaining, but making an intelligent decision at each step about which fact to pursue next. (The goal is to infer interesting facts sooner and save time.)
Scheduling classes: at a university, assign classes to times so every student can take the classes they want.
Mission planning: given a model of targets and costs of attacking them, and a model of enemy responses, figure the best sequence of attacking actions.

5
Search example
The state is in brackets; air-mileage to San Francisco is in curly brackets; road mileages are in parentheses; road names are the operators. Prefer the westernmost branch. Compute the order of visit and the solution path. Abbreviations: M=Monterey, SC=Santa Cruz, G=Gilroy, SJ=San Jose, SF=San Francisco, Oak=Oakland.
States: [location(San Francisco)] {0}; [location(Oakland)] {15}; [location(San Jose)] {45}; [location(Santa Cruz)] {65}; [location(Gilroy)] {74}; [location(Monterey)] {100}.
Roads: M-SC on Cal-1 (45); M-G on US-101 (35); SC-SJ on Cal-17 (30); SC-SF on Cal-1 (85); G-SJ on US-101 (35); SJ-Oak on Cal-17 (50); SJ-SF on I-280 (50); Oak-SF on I-80 (20).
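The map above can be written down as data. A minimal Python sketch (illustrative; city names spelled out, with mileages taken from the slide):

```python
# Road mileages between neighboring cities (the road names are the
# operators), and straight-line air mileage to San Francisco, which
# serves as the evaluation function in the later slides.
roads = {
    ("Monterey", "Santa Cruz"): 45,       # Cal-1
    ("Monterey", "Gilroy"): 35,           # US-101
    ("Santa Cruz", "San Jose"): 30,       # Cal-17
    ("Santa Cruz", "San Francisco"): 85,  # Cal-1
    ("Gilroy", "San Jose"): 35,           # US-101
    ("San Jose", "Oakland"): 50,          # Cal-17
    ("San Jose", "San Francisco"): 50,    # I-280
    ("Oakland", "San Francisco"): 20,     # I-80
}
air_miles = {"San Francisco": 0, "Oakland": 15, "San Jose": 45,
             "Santa Cruz": 65, "Gilroy": 74, "Monterey": 100}
```

This is the same information as the Prolog successor2, eval, and piece_cost2 facts defined on the later slides.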

6
Work page for comparison of search algorithms

7
A search graph for car repair
Complete the search graph to the goal state [ok(battery), in(battery,car), cabled(battery), bolted(battery)]. Possible operators: "attach cables", "attach bolts", and "replace battery". Note: not(cabled(battery)) is a precondition of the action "remove battery"; ok(battery) is a postcondition of the action "replace battery"; facts not mentioned are assumed false in a state.
Start of the graph (operators label the transitions):
[dead(battery), in(battery,car), bolted(battery), cabled(battery)]
  --remove bolts--> [dead(battery), in(battery,car), cabled(battery)]
  --remove cables--> [dead(battery), in(battery,car), bolted(battery)]
[dead(battery), in(battery,car), bolted(battery)]
  --remove bolts--> [dead(battery), in(battery,car)]

8
Work page for car battery problem

9
Classic search methods
These differ in the order in which states are "visited". "Visit" means "find the successors of."
Depth-first: visit an unvisited successor of the current state, else back up and try a new successor of the previous state.
Breadth-first: visit the next-found unvisited state at the same level, else the first-found state at the next level.
Hill climbing: visit the unvisited successor of the current state which has the lowest evaluation-function value, else back up.
Best-first: visit the known unvisited state with the lowest evaluation-function value, which can be anywhere in the search graph.
Branch-and-bound: like best-first, but use the cost from the start to the state.
A*: like best-first, but use the sum of the evaluation-function value and the cost from the start to the state.

10
Notes on the search methods
Search stops when you visit a goal state (that is, when you try to find its successors).
Heuristics can give criteria for choosing the next state with all the search methods (and break ties in best-first and A*).
Never visit the same state twice (or add a duplicate state to the agenda) -- but for A*, consider new routes to the same state.
Depth-first can be implemented with a stack; breadth-first with a queue; and the other methods with a sorted "agenda" of states known but not yet visited.
When you find the successors of a state, set the "backpointer" for each successor to point to the state. When done, follow backpointers from the goal state to the start state to get the solution path.
Agendas should store, for each state, its description (the list of facts true for it), its backpointer, and any numbers associated with the state.
With A* search, if the evaluation function is a lower bound on the subsequent cost, the first path found to a state is guaranteed to be the best path to that state.

11
Heuristics
= Any piece of nonnumeric advice for choosing among possible branches in a search. Heuristics are a form of meta-rules. Examples:
For city route planning, prefer not to turn twice in succession.
For car repair, do work from the top of the engine before work from below the engine.
For buying decisions, don't buy anything advertised on television.
In mission planning, withdraw from an engagement if you do not have at least 2-to-1 superiority in assets.

12
Search terms that are easy to confuse
evaluation function: a "distance-to-goal" measure, a number. A function of the state. Makes search more intelligent if you pick the direction to search with the lowest evaluation-function value.
heuristic: a non-numeric way of choosing a direction for search at a state.
cost function: a "cost-accumulated" measure, a number. A function of the states in a path.
Note: the evaluation function concerns the future; the cost function concerns the past.

13
An example route-planning program, not so good
Microsoft Automap Road Atlas Route Planning Demo
The quickest route from Monterey, California, to San Francisco, California (going the speed limit), is as follows:
Total Time: 2 hours 31 minutes   Total Distance: 121 miles
Time   Road                                      For       Dir  Towards
00:00  DEPART Monterey (California)              2 miles   E    on the Fremont St
00:04  Go onto S1                                17 miles  E    Santa Cruz
00:23  At Castroville stay on the S1             14 miles  N    Santa Cruz
00:39  At Freedom stay on the S1                 15 miles  NW   Santa Cruz
00:55  At Santa Cruz turn right onto S9          23 miles  N    Saratoga
01:23  Turn left onto S35                        14 miles  N    Woodside
01:40  Turn right onto S84                       5 miles   E    Redwood City
01:46  At Woodside stay on the S84               1/2 mile  NE   Redwood City
01:47  Turn left onto I280                       16 miles  W    San Mateo
02:07  At Burlingame stay on the I280            1 mile    N    Brisbane
02:09  Turn off onto I380                        1 mile    E    San Bruno
02:10  At San Bruno stay on the I380             1 mile    E
02:12  Turn off onto U101                        3 miles   N    South San Francisco
02:17  At South San Francisco stay on the U101   3 miles   NE   Brisbane
02:22  At Brisbane stay on the U101              3 miles   N
02:26  Turn off onto I80                         3 miles   E    San Francisco
02:31  ARRIVE San Francisco (California)

14
More about evaluation functions
What if states aren't locations? Then add "amount-of-work-necessary" numbers for certain facts to get the evaluation-function value. For instance, for the car repair, start at 0 and then:
Add 10 if a dead(battery) fact is in the state;
Add 5 if no bolted(battery) fact is in the state;
Add 3 if no cabled(battery) fact is in the state.
So for instance:
[dead(battery), in(battery,car), bolted(battery), cabled(battery)] has evaluation 10
[dead(battery), in(battery,car), cabled(battery)] has evaluation 15
[dead(battery), in(battery,car)] has evaluation 18
[ok(battery), in(battery,car)] has evaluation 8
[ok(battery), in(battery,car), cabled(battery)] has evaluation 5
[ok(battery), in(battery,car), bolted(battery), cabled(battery)] has evaluation 0
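The additive scheme above translates directly into code. A minimal Python sketch (illustrative; the course code is in Prolog), representing a state as a set of fact strings:

```python
def evaluate(state):
    """Amount-of-work evaluation for the car-battery repair problem.
    state is a set of fact strings; a lower value means closer to the goal."""
    value = 0
    if "dead(battery)" in state:          # battery still needs replacing
        value += 10
    if "bolted(battery)" not in state:    # bolts still need attaching
        value += 5
    if "cabled(battery)" not in state:    # cables still need attaching
        value += 3
    return value

# Matches the worked values on the slide:
evaluate({"dead(battery)", "in(battery,car)",
          "bolted(battery)", "cabled(battery)"})   # -> 10
evaluate({"dead(battery)", "in(battery,car)"})     # -> 18
evaluate({"ok(battery)", "in(battery,car)"})       # -> 8
```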

15
Problems with evaluation functions
Evaluation functions don't always work well. Some classic problems:
–"pond" problem: you may find a local (instead of global) minimum
–"valley" problem: most directions may be worse, and it may be hard to find the few directions that are better
–"plateau" problem: you may have lots of ties in the evaluation-function value
–"pit" problem: the evaluation function may not be useful unless you're extremely close to the goal
In these cases a set of heuristics may be better.

16
Another kind of cost function: the probability of an interpretation of a situation
The overall probability of an interpretation can be the product of near-independent factor probabilities, p = p1 * p2 * ... * pn. To find the best interpretation, minimize the negative of its logarithm, -log(p) = -log(p1) - log(p2) - ... - log(pn). Do this by an A* search where each branch costs -log(pi). This is important in language understanding. For instance, interpret "navy fighter wing" with probabilities: of each word sense of "navy"; of each word sense of "fighter"; of each word sense of "wing"; that "navy" is a kind of "fighter"; that "navy" is part of "fighter"; that "fighter" is part of "navy"; that "navy" is the owner of "fighter"; that "fighter" is a kind of "wing"; that "wing" is a part of "fighter"; that "wing" is a kind of "fighter". In a goal state each word has a sense and all senses are linked by relationships.
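The identity being used can be checked numerically. A small Python sketch with made-up factor probabilities: maximizing a product of factors is the same as minimizing the sum of their negative logarithms, which is what lets A* treat each factor as an additive branch cost:

```python
import math

# Hypothetical factor probabilities for one interpretation
# (illustrative values, not from the slide).
factor_probs = [0.5, 0.4, 0.9]

product = math.prod(factor_probs)                # overall probability
branch_costs = [-math.log(p) for p in factor_probs]

# The summed branch costs equal -log of the overall probability, so the
# minimum-cost A* path is the maximum-probability interpretation.
assert abs(sum(branch_costs) - (-math.log(product))) < 1e-12
```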

17
Stochastic state graph for rootkit-installation example
[Figure: a stochastic state graph with node values 20, 55, and 51 and transition probabilities 0.1, 0.9, 0.1, and 0.6.]

18
Definitions needed to solve a search problem
(1) successor: defines all possible state transitions (with successor(State,Newstate) in Prolog; in Java, successor(State), which returns the array Newstates).
(2) goalreached: defines all possible goal states.
(3) eval (for best-first and A*): returns the evaluation-function value for any state.
(4) piece_cost (for A*): returns the cost between any pair of states.
(5) To start the search, call depthsearch, breadthsearch, bestsearch, or astarsearch with the starting state as argument; it should return the solution path.

19
Example Prolog definition of a search problem: finding a route to San Francisco
successor(R1,R2) :- successor2(R1,R2).
successor(R1,R2) :- successor2(R2,R1).
successor2(monterey,gilroy).
successor2(monterey,santa_cruz).
successor2(gilroy,san_jose).
successor2(santa_cruz,san_jose).
successor2(santa_cruz,san_francisco).
successor2(san_jose,oakland).
successor2(san_jose,san_francisco).
successor2(oakland,san_francisco).
goalreached(san_francisco).
eval(monterey,100).
eval(gilroy,74).
eval(santa_cruz,65).
eval(san_jose,45).
eval(oakland,15).
eval(san_francisco,0).

20
Example Prolog search definition, cont.
/* This totals up the piece_cost2 numbers along a path */
piece_cost(X,Y,C) :- piece_cost2(X,Y,C).
piece_cost(X,Y,C) :- piece_cost2(Y,X,C).
piece_cost2(monterey,gilroy,35).
piece_cost2(monterey,santa_cruz,45).
piece_cost2(gilroy,san_jose,35).
piece_cost2(santa_cruz,san_jose,30).
piece_cost2(santa_cruz,san_francisco,85).
piece_cost2(san_jose,oakland,50).
piece_cost2(san_jose,san_francisco,50).
piece_cost2(oakland,san_francisco,20).
Then give the starting state, and search returns the sequence of states to the goal, like:
?- depthsearch(monterey,A).
A = [san_francisco,san_jose,gilroy,monterey]

21
Another example: the robot housekeeper problem
Define the behavior of a housekeeping robot.
1. In the starting state:
–The robot is at the trash chute (use predicate at);
–Offices 1 and 2 need dusting (use dusty);
–Office 1 has trash in its trash basket, but Office 2 doesn't (use trashy);
–The carpet in Office 1 does not need vacuuming, while the carpet in Office 2 does (use vacuumable).
Write the starting state using predicate expressions.
2. A goal state is one in which all these are true: every office is dusted; every carpet is vacuumed; every trash basket is emptied; the robot is not holding any basket (use holdingbasketfrom); and the robot is at office 1. Give the conditions defining a goal state in logical terms.

22
The cost function for the robot housekeeper
3. Assume these energy costs:
–10 units to vacuum a single room (the robot has a built-in vacuum);
–6 units to dust a room (the robot has a built-in duster);
–3 units to pick up a trash basket (the robot can hold several at once);
–1 unit to put down a trash basket;
–3 units to travel between offices;
–8 units to travel between an office and the trash chute;
–5 units to dispose of the contents of a trash basket down the trash chute.
Assume the last action taken by the robot is the argument to a lastact fact in each state. Define a piece_cost function.

23
The eval function for the robot housekeeper
4. For A* search, we prefer an evaluation function that is a lower bound on the subsequent cost. So we'll add up numbers ("piece_eval"s) for each expression in the state. The default number will be zero, but "dusty(X)", for instance, will have a piece_eval of 6. Compute the other piece_evals.

24
Successor definition for the robot housekeeper
5. The successor definition must follow these requirements:
a. The robot can vacuum the floors, dust, and empty the trash baskets.
b. Vacuuming the floor generates dust that goes into the trash basket of the room it is in.
c. Emptying the trash baskets in each room requires a trip to the trash chute.
d. It doesn't make sense to vacuum or dust if the room isn't vacuumable or dusty, respectively.
e. Dusting puts dust on the floor, requiring vacuuming.
f. It doesn't make sense to vacuum if a room is dusty.
g. It doesn't make sense to pick up a trash basket if a room is vacuumable or dusty.
h. A lastact fact in each state should hold what the last action was.

25
Write "successor" definitions for the "putdown", "dust", and "go" actions
Example: for "dispose(Basket)" (meaning to empty the trash basket whose name is Basket). Preconditions: you are at the chute, have a basket, and the basket contains trash. Postconditions: delete the fact that the basket has trash, delete the old "lastact" fact, and add a new "lastact" fact with argument "dispose(Basket)".

26
Successor definitions in Prolog
successor(S,NS) :- member(at(chute),S), member(fullbasket(X),S),
  member(holdingbasketfrom(X),S), delete(S,fullbasket(X),S2),
  delete(S2,lastact(A),S3), NS=[lastact(dispose(X))|S3].
successor(S,NS) :- member(at(X),S), \+ member(dusty(X),S),
  member(vacuumable(X),S), delete(S,vacuumable(X),S2),
  delete(S2,fullbasket(X),S3), delete(S3,lastact(A),S4),
  NS=[lastact(vacuum(X)),fullbasket(X)|S4].
successor(S,NS) :- member(at(X),S), \+ member(dusty(X),S),
  \+ member(holdingbasketfrom(X),S), \+ member(vacuumable(X),S),
  member(fullbasket(X),S), delete(S,lastact(A),S2),
  NS=[lastact(pickup(X)),holdingbasketfrom(X)|S2].
successor(S,NS) :- member(at(X),S), member(dusty(X),S),
  delete(S,dusty(X),S2), delete(S2,vacuumable(X),S3),
  delete(S3,lastact(A),S4), NS=[lastact(dust(X)),vacuumable(X)|S4].
successor(S,NS) :- member(at(X),S), member(holdingbasketfrom(X),S),
  delete(S,holdingbasketfrom(X),S2), delete(S2,lastact(A),S3),
  NS=[lastact(putdown(X))|S3].
successor(S,NS) :- member(at(X),S), delete(S,at(X),S2), places(PL),
  member(Y,PL), \+ X=Y, delete(S2,lastact(A),S3),
  NS=[lastact(go(X,Y)),at(Y)|S3].

27
Eval function in Prolog
eval([],0).
eval([X|L],N) :- piece_eval(X,N2), eval(L,N3), N is N2+N3.
piece_eval(dusty(X),6).
piece_eval(vacuumable(X),10).
piece_eval(fullbasket(X),8).
piece_eval(holdingbasketfrom(X),1).
piece_eval(P,0).

28
Cost function in Prolog
cost([_],0).
cost([S1,S2|L],C) :- S1=[FS1|_], S2=[FS2|_],
  piece_cost(FS1,FS2,C2), cost([S2|L],C3), C is C2+C3, !.
piece_cost(lastact(vacuum(X)),_,10).
piece_cost(lastact(dust(X)),_,6).
piece_cost(lastact(pickup(X)),_,3).
piece_cost(lastact(dispose(X)),_,5).
piece_cost(lastact(putdown(X)),_,1).
piece_cost(lastact(go(chute,Y)),_,8).
piece_cost(lastact(go(X,chute)),_,8).
piece_cost(lastact(go(X,Y)),_,3).
piece_cost(lastact(none),_,0).

29
Statistics on the housekeeping problem
These programs all eliminate states whose fact lists are permutations of one another. "Cells" means the number of cons cells created. Breadth-first search ran extremely slowly. All runs were on SunOS machine ai9 in 1990. [Table of run statistics omitted.]

30
The depth-first search algorithm
Initialize S to the starting state; initialize the stack to hold the starting state.
Until S is a goal state:
–Put ("push") S onto the stack;
–Find the successors of S;
–Set S to its best successor (as per heuristics) that is not in the stack;
–If there is no such successor, remove ("pop") the last state P from the stack and go to a different successor of P.
The solution path is the stack contents when the goal is found.
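The steps above can be sketched in Python (an illustrative translation; the course code is in Prolog). Here `successors` and `is_goal` play the roles of the Prolog `successor` and `goalreached` definitions:

```python
def depth_first_search(start, successors, is_goal):
    """Return a solution path from start to a goal state, or None."""
    stack = [start]       # the current path; doubles as the solution path
    visited = {start}     # never visit the same state twice

    def advance():
        state = stack[-1]
        if is_goal(state):
            return True
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)    # "push" the chosen successor
                if advance():
                    return True
                stack.pop()          # back up and try another successor
        return False

    return stack if advance() else None
```

The recursion handles the backing-up step: popping the stack corresponds to returning to a previous state and trying a different successor.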

31
The breadth-first search algorithm
Erase the oldstate array; initialize the queue to the starting state with a null backpointer.
Loop until you pick a goal state:
–Remove the first item on the queue and call it S.
–If S is a goal state, stop.
–Otherwise, add successors of S not already on the queue or in the oldstate array to the end of the queue, using heuristics to determine the order you add them; give each a backpointer to S.
–Add S to the oldstate array.
The solution path is found by creating a list of the states encountered in following the backpointers from the goal state (using the oldstate array), then reversing the list.
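A Python sketch of the same procedure (illustrative; here one dictionary serves as both the oldstate record and the backpointers):

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Queue-based breadth-first search returning the solution path."""
    backptr = {start: None}   # backpointers; also records states seen
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if is_goal(state):
            path = []          # follow backpointers, then reverse
            while state is not None:
                path.append(state)
                state = backptr[state]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in backptr:     # not already seen or queued
                backptr[nxt] = state
                queue.append(nxt)
    return None
```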

32
The A* search algorithm
Initialize the agenda to just the starting state, with a null backpointer and the starting state's evaluation-function value.
Loop until you pick a goal state:
–Remove the first (best) item on the agenda and call it S.
–If S is a goal state, stop.
–Otherwise, insert successors of S into the agenda if they are not already on the agenda or in the oldstate array, maintaining sorted order of total evaluation. Give each successor state T a backpointer to S and a total evaluation U(T) = U(S) + C(S,T) + E(T) - E(S), where C is the cost from S to T and E is the evaluation function. If T is already on the agenda and this U is better, replace the agenda entry, else ignore it.
–Add S to the oldstate array.
Create the solution path by following backpointers from the goal state, then reversing the list.
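A Python sketch of this agenda-based A* (illustrative; `piece_cost` and `evaluate` correspond to the Prolog definitions earlier). Instead of replacing an agenda entry in place, this version pushes the better route and skips stale entries when they are popped, which has the same effect:

```python
import heapq

def a_star(start, successors, is_goal, piece_cost, evaluate):
    """Return (solution path, total cost) or None."""
    agenda = [(evaluate(start), 0, start)]   # entries are (g + h, g, state)
    backptr = {start: None}
    best_g = {start: 0}      # cheapest known cost from the start
    visited = set()          # the "oldstate" record
    while agenda:
        f, g, state = heapq.heappop(agenda)
        if state in visited:
            continue                     # stale entry for a worse route
        if is_goal(state):
            path = []                    # follow backpointers, then reverse
            while state is not None:
                path.append(state)
                state = backptr[state]
            return list(reversed(path)), g
        visited.add(state)
        for nxt in successors(state):
            g2 = g + piece_cost(state, nxt)
            if nxt not in best_g or g2 < best_g[nxt]:
                best_g[nxt] = g2
                backptr[nxt] = state
                heapq.heappush(agenda, (g2 + evaluate(nxt), g2, nxt))
    return None
```

On the Monterey-to-San Francisco map of slide 5, with air mileage as the evaluation function, this finds the 120-mile route through Gilroy and San Jose.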
