
Slide 1: CS 8520: Artificial Intelligence. Solving Problems by Searching. Paula Matuszek, Spring 2010. Slides based on Hwee Tou Ng, aima.eecs.berkeley.edu/slides-ppt, which are in turn based on Russell, aima.eecs.berkeley.edu/slides-pdf. Diagrams are based on AIMA.

Slide 2: Problem-Solving Agents
A goal-based agent is essentially solving a problem:
– given some state and some goal,
– figure out how to get from the current state to the goal
– by taking some sequence of actions.
The problem is defined as a state space and a goal. Solving the problem consists of searching the state space for the sequence of actions that leads to the goal.

Slide 3: Example: Romania
On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.
Formulate goal:
– be in Bucharest
Formulate problem:
– states: various cities
– actions: drive between cities
Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

Slide 4: Example: Romania (figure: map of Romania with roads and driving distances)

Slide 5: Single-state problem formulation
A problem is defined by five items:
1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. transition model: the new state resulting from an action
4. goal test, which can be explicit, e.g., x = "at Bucharest", or implicit, e.g., Checkmate(x)
5. path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state.
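This five-item formulation maps directly onto a small class. A minimal Python sketch; the class and method names here are illustrative, not from the slides:

```python
class Problem:
    """One possible encoding of the five-item problem formulation."""

    def __init__(self, initial, goal):
        self.initial = initial            # 1. initial state, e.g. "Arad"
        self.goal = goal

    def actions(self, state):
        """2. Actions (successor function) available in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """3. Transition model: state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """4. Goal test, explicit form: are we at the goal state?"""
        return state == self.goal

    def step_cost(self, state, action, next_state):
        """5. Step cost c(x, a, y), assumed >= 0; path cost is additive."""
        return 1
```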

Slide 6: Selecting a state space
The real world is absurdly complex, so the state space must be abstracted for problem solving.
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
– e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
(Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem

Slide 7: Vacuum world state space graph (figure: the vacuum-world states and transitions)
States? Actions? Transition model? Goal test? Path cost?

Slide 8: Vacuum world state space graph
– states? integer: dirt locations and robot location
– actions? Left, Right, Suck
– transition model? the resulting state (robot in A or in B, square clean)
– goal test? no dirt at any location
– path cost? 1 per action

Slide 9: Example: The 8-puzzle (figure: a start state and the goal state)
states? actions? transition model? goal test? path cost?

Slide 10: Example: The 8-puzzle
– states? locations of tiles
– actions? move blank left, right, up, down
– transition model? the resulting configuration with the tile moved
– goal test? = goal state (given)
– path cost? 1 per move
[Note: finding an optimal solution for the n-puzzle family is NP-hard]

Slide 11: Search
The basic concept of search views the state space as a search tree:
– the initial state is the root node
– each possible action leads to a new node defined by the transition model
– some nodes are identified as goals
Search is the process of expanding some portion of the tree in some order until we get to a goal node. The strategy we use to choose the order in which to expand nodes defines the type of search.

Slide 12: Tree search algorithms
Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states).

Slides 13–15: Tree search example (figures: successive expansions of the Romania search tree, starting from Arad)

Slide 16: Implementation: general tree search (figure: TREE-SEARCH pseudocode)
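The slide's figure (the TREE-SEARCH pseudocode) did not survive extraction. A Python sketch of the same skeleton, using the Problem class above and a Node structure like the one described on the next slide; the fringe's ordering policy is what distinguishes the strategies that follow:

```python
class Node:
    """A search-tree node: state, parent, action, path cost g, depth."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost                     # g(x)
        self.depth = parent.depth + 1 if parent else 0

    def expand(self, problem):
        """Create the child nodes reachable from this node."""
        children = []
        for action in problem.actions(self.state):
            next_state = problem.result(self.state, action)
            cost = self.path_cost + problem.step_cost(self.state, action, next_state)
            children.append(Node(next_state, self, action, cost))
        return children

def tree_search(problem, fringe):
    """`fringe` is any container with append(), extend(), and a pop() honoring its policy."""
    fringe.append(Node(problem.initial))
    while fringe:
        node = fringe.pop()                  # the ordering policy lives here
        if problem.goal_test(node.state):
            return node                      # solution found
        fringe.extend(node.expand(problem))  # generate successors
    return None                              # failure
```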

Slide 17: Implementation: states vs. nodes
A state is a (representation of) a physical configuration.
A node is a data structure constituting part of a search tree; it includes a state, a parent node, an action, a path cost g(x), and a depth.
The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

Slide 18: Search strategies
A search strategy is defined by picking the order of node expansion (e.g., breadth-first, depth-first). Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of:
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be infinite)

Slide 19: Summary
Problem-solving agents search through a problem or state space for an acceptable solution. A state space can be specified by:
1. an initial state
2. actions
3. a transition model or successor function S(x) describing the results of actions
4. a goal test or goal state
5. a path cost
A solution is a sequence of actions leading from the initial state to a goal state.
The formalization of a good state space is hard, and critical to success. It must abstract the essence of the problem so that:
– it is easier than the real-world problem,
– a solution can be found, and
– the solution maps back to the real-world problem and solves it.

Slide 20: Uninformed search strategies
Uninformed search strategies use only the information available in the problem definition:
– breadth-first search
– uniform-cost search
– depth-first search
– depth-limited search
– iterative deepening search

Slide 21: Implementation: general tree search (figure: the TREE-SEARCH pseudocode from slide 16, repeated)

Slides 22–25: Breadth-first search
Expand the shallowest unexpanded node.
Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.
(Figures: four successive BFS expansions of a small binary tree.)
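A minimal breadth-first sketch in Python with the FIFO fringe the slides describe. The `graph` successor map and the path-based bookkeeping are illustrative choices, not from the slides:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """`graph` maps each state to a list of successor states."""
    fringe = deque([[start]])                   # FIFO queue of paths
    while fringe:
        path = fringe.popleft()                 # shallowest unexpanded node first
        state = path[-1]
        if state == goal:
            return path
        for successor in graph.get(state, []):
            fringe.append(path + [successor])   # new successors go at the end
    return None

# A fragment of the Romania map, enough for the slides' example solution:
romania = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
           'Sibiu': ['Fagaras', 'Rimnicu Vilcea'],
           'Fagaras': ['Bucharest']}
print(breadth_first_search(romania, 'Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```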

Slide 26: Properties of breadth-first search
– Complete? Yes (if b is finite)
– Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
– Space? O(b^(d+1)) (keeps every node in memory)
– Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time).

Slide 27: Uniform-cost search
Expand the least-cost unexpanded node.
Implementation: the fringe is a queue ordered by path cost. Equivalent to breadth-first search if step costs are all equal.
– Complete? Yes, if every step cost ≥ ε (otherwise it can loop)
– Time and space? O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution and ε is the smallest step cost. Can be much worse than breadth-first search if there are many small steps not on the optimal path.
– Optimal? Yes: nodes are expanded in increasing order of g(n)

Slide 28: Uniform Cost Search (figure)
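A sketch of uniform-cost search matching slide 27's description: the fringe becomes a priority queue ordered by g(n). The small explored set anticipates the repeated-states caution on slide 44; `graph` mapping a state to (successor, step cost) pairs is an illustrative representation:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """`graph` maps each state to a list of (successor, step_cost) pairs."""
    fringe = [(0, start, [start])]               # priority queue keyed on g(n)
    explored = set()
    while fringe:
        g, state, path = heapq.heappop(fringe)   # least-cost unexpanded node
        if state == goal:
            return path, g
        if state in explored:
            continue                             # already expanded more cheaply
        explored.add(state)
        for successor, cost in graph.get(state, []):
            heapq.heappush(fringe, (g + cost, successor, path + [successor]))
    return None
```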

Slides 29–34: Depth-first search
Expand the deepest unexpanded node.
Implementation: the fringe is a LIFO queue (a stack), i.e., successors are put at the front.
(Figures: six successive DFS expansions of a small binary tree.)
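The corresponding depth-first sketch: only the fringe discipline changes, from FIFO to LIFO. The `not in path` check is the "avoid repeated states along the path" modification mentioned on the next slide; as before, `graph` is an illustrative representation:

```python
def depth_first_search(graph, start, goal):
    """`graph` maps each state to a list of successor states."""
    fringe = [[start]]                   # LIFO stack of paths
    while fringe:
        path = fringe.pop()              # deepest unexpanded node first
        state = path[-1]
        if state == goal:
            return path
        for successor in graph.get(state, []):
            if successor not in path:    # avoid repeated states along the path
                fringe.append(path + [successor])
    return None
```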

Slide 35: Properties of depth-first search
– Complete? No: fails in infinite-depth spaces and spaces with loops. Modify to avoid repeated states along the path → complete in finite spaces.
– Time? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first search.
– Space? O(bm), i.e., linear space!
– Optimal? No

Slide 36: Depth-limited search
Depth-limited search = depth-first search with depth limit l: nodes at depth l have no successors. This solves the problem of infinite depth, but the search is incomplete.
Recursive implementation: (figure: pseudocode)
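The recursive implementation the slide refers to was a figure; here is a Python sketch under the same illustrative assumptions as the earlier ones. Returning a distinct 'cutoff' value lets the caller distinguish "no solution at all" from "not found within this limit", which iterative deepening on the next slide relies on:

```python
def depth_limited_search(graph, state, goal, limit, path=None):
    """Depth-first search treating nodes at depth `limit` as having no successors."""
    path = (path or []) + [state]
    if state == goal:
        return path                          # solution found
    if limit == 0:
        return 'cutoff'                      # depth limit reached
    cutoff_occurred = False
    for successor in graph.get(state, []):
        result = depth_limited_search(graph, successor, goal, limit - 1, path)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None
```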

Slide 37: Iterative deepening search
Repeated depth-limited search, incrementing the limit l until a solution is found or failure is returned. It repeats the earlier steps at each new level, so it is inefficient, but it never more than doubles the cost. No longer incomplete.
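Iterative deepening then simply wraps the depth-limited sketch above in a loop over l = 0, 1, 2, …:

```python
from itertools import count

def iterative_deepening_search(graph, start, goal):
    for limit in count():                 # l = 0, 1, 2, ...
        result = depth_limited_search(graph, start, goal, limit)
        if result != 'cutoff':
            return result                 # a solution path, or None = no solution
```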

Slides 38–41: Iterative deepening search with l = 0, 1, 2, 3 (figures: the tree searched at each successive depth limit)

Slide 42: Properties of iterative deepening
– Complete? Yes
– Time? (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
– Space? O(bd)
– Optimal? Yes, if step cost = 1

Slide 43: Summary of Algorithms for Uninformed Search

Criterion    Breadth-First   Uniform-Cost    Depth-First   Depth-Limited   Iterative Deepening
Complete?    Yes             Yes             No            No              Yes
Time         O(b^(d+1))      O(b^⌈C*/ε⌉)     O(b^m)        O(b^l)          O(b^d)
Space        O(b^(d+1))      O(b^⌈C*/ε⌉)     O(bm)         O(bl)           O(bd)
Optimal?     Yes             Yes             No            No              Yes

Slide 44: A Caution: Repeated States
Failure to detect repeated states can turn a linear problem into an exponential one, or even an infinite one.
– For example, in the 8-puzzle: a simple repeat has the empty square moving back and forth; more complex repeats are also possible.
Remedy: save the list of expanded states (the closed list), and add a new state to the fringe only if it is not in the closed list.

Slide 45: Summary: Uninformed Search
– Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
– There is a variety of uninformed search strategies.
– Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms, so it is the usual choice.

Slide 46: Informed search algorithms
Slides derived in part from www.cs.berkeley.edu/~russell/slides/chapter04a.pdf, converted to PowerPoint by Min-Yen Kan, National University of Singapore, and from www.cs.umbc.edu/671/fall03/slides/c5-6_inf_search.ppt, Marie DesJardins, University of Maryland Baltimore County.

Slide 47: Heuristic Search
Uninformed search is generic; the choice of node to expand depends only on the shape of the tree and the strategy for node expansion. Sometimes domain knowledge can help us make a better decision. For the Romania problem, eyeballing the map leads us to look at certain cities first because they "look closer" to where we are going. If that domain knowledge can be captured in a heuristic, search performance can be improved by using it. This gives us an informed search strategy.

Slide 48: So What's A Heuristic?
Webster's Revised Unabridged Dictionary (1913): Heuristic \Heu*ris"tic\, a. [Gr. ? to discover.] Serving to discover or find out.
The Free On-line Dictionary of Computing (15Feb98): heuristic 1. A rule of thumb, simplification or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not guarantee feasible solutions and are often used with no theoretical guarantee. 2. approximation algorithm.
WordNet (r) 1.6: heuristic adj 1: (computer science) relating to or using a heuristic rule 2: of or relating to a general formulation that serves to guide investigation [ant: algorithmic] n: a commonsense rule (or set of rules) intended to increase the probability of solving some problem [syn: heuristic rule, heuristic program]

Slide 49: Heuristics
For search, "heuristic" has a very specific meaning: all domain knowledge used in the search is encoded in the heuristic function h.
Examples:
– Missionaries and Cannibals: number of people on the starting river bank
– 8-puzzle: number of tiles out of place
– 8-puzzle: sum of distances from the goal
– Romania: straight-line distance from the city to Bucharest
In general:
– h(n) ≥ 0 for all nodes n
– h(n) = 0 implies that n is a goal node
– h(n) = ∞ implies that n is a dead end from which a goal cannot be reached
h is some estimate of how desirable a move is, or how close it gets us to our goal.

Slide 50: Best-first search
Order the nodes on the nodes list by increasing value of an evaluation function f(n) that incorporates domain-specific information in some way. This is a generic way of referring to the class of informed methods.

Slide 51: Best-first search
Idea: use an evaluation function f(n) for each node, an estimate of "desirability" → expand the most desirable unexpanded node.
Implementation: order the nodes in the fringe in decreasing order of desirability.
Special cases:
– greedy best-first search
– A* search

Slide 52: Romania with step costs in km (figure: the Romania map with a table of straight-line distances to Bucharest)

Slide 53: Greedy best-first search
Evaluation function f(n) = h(n) (heuristic) = estimate of the cost from n to the goal; e.g., h_SLD(n) = straight-line distance from n to Bucharest. Greedy best-first search expands the node that appears to be closest to the goal.
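A greedy best-first sketch: identical in shape to the uniform-cost sketch, but the priority is h(n) alone. `h` would be, e.g., a lookup of straight-line distances to Bucharest; `graph` is the same illustrative successor map as before:

```python
import heapq

def greedy_best_first_search(graph, start, goal, h):
    """`h` maps a state to its estimated cost to the goal."""
    fringe = [(h(start), start, [start])]        # priority queue keyed on h(n)
    explored = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)   # apparently-closest node first
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for successor in graph.get(state, []):
            heapq.heappush(fringe, (h(successor), successor, path + [successor]))
    return None
```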

Slides 54–57: Greedy best-first search example (figures: successive expansions from Arad by h_SLD, reaching Bucharest via Sibiu and Fagaras)

Slide 58: Properties of greedy best-first search
– Complete? No: it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
– Time? O(b^m), but a good heuristic can give dramatic improvement
– Space? O(b^m): keeps all nodes in memory
– Optimal? No
Remember: time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be infinite)

Slide 59: A* search
Idea: avoid expanding paths that are already expensive.
Evaluation function: f(n) = g(n) + h(n), where
– g(n) = cost so far to reach n
– h(n) = estimated cost from n to the goal
– f(n) = estimated total cost of the path through n to the goal
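An A* sketch combining the two previous priorities: nodes are ordered by f(n) = g(n) + h(n). With an admissible h (slide 66), and, because of the explored set, a consistent one, the first goal popped off the queue is optimal. The representation is the same illustrative one as in the uniform-cost sketch:

```python
import heapq

def a_star_search(graph, start, goal, h):
    """`graph` maps state -> [(successor, step_cost)]; `h` estimates cost to goal."""
    fringe = [(h(start), 0, start, [start])]    # priority queue keyed on f = g + h
    explored = set()
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g                      # optimal for admissible, consistent h
        if state in explored:
            continue
        explored.add(state)
        for successor, cost in graph.get(state, []):
            g2 = g + cost
            heapq.heappush(fringe,
                           (g2 + h(successor), g2, successor, path + [successor]))
    return None
```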

Slides 60–65: A* search example (figures: successive f = g + h expansions on the Romania map, ending with the optimal Arad–Sibiu–Rimnicu Vilcea–Pitesti–Bucharest path)

Slide 66: Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic. This means that we won't ignore a better path because we think its cost is too high. (If we underestimate the cost, we will learn the truth when we explore the path.)
Example: h_SLD(n) (never overestimates the actual road distance)

Slide 67: Admissible heuristics
E.g., for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)
(Figure: a sample start state S and the goal state.)
h1(S) = ? h2(S) = ?

Slide 68: Admissible heuristics
E.g., for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)
For the sample state S: h1(S) = 8, and h2(S) = 3+1+2+2+2+3+3+2 = 18.
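Both heuristics are a few lines each. A sketch with an assumed board representation (3×3 tuples, 0 for the blank) and an assumed goal layout; neither detail is given on the slides:

```python
GOAL = ((1, 2, 3),
        (4, 5, 6),
        (7, 8, 0))          # assumed goal layout; 0 is the blank

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    where = {goal[r][c]: (r, c) for r in range(3) for c in range(3)}
    total = 0
    for r in range(3):
        for c in range(3):
            tile = state[r][c]
            if tile != 0:
                gr, gc = where[tile]
                total += abs(r - gr) + abs(c - gc)
    return total
```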

Slide 69: Properties of A*
If h(n) is admissible:
– Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
– Time? Exponential in (relative error in h × length of solution)
– Space? Keeps all nodes in memory
– Optimal? Yes: A* cannot expand the contour f_(i+1) until the contour f_i is finished.

Slide 70: Some observations on A*
– Perfect heuristic: if h(n) = h*(n) for all n, then only nodes on the optimal solution path will be expanded, so no extra work will be performed.
– Null heuristic: if h(n) = 0 for all n, then this is an admissible heuristic and A* acts like uniform-cost search.
– Better heuristic: if h1(n) < h2(n) ≤ h*(n) for all non-goal nodes, then h2 is a better heuristic than h1. If A1* uses h1 and A2* uses h2, then every node expanded by A2* is also expanded by A1*; in other words, A1* expands at least as many nodes as A2*. We say that A2* is better informed than A1*, or that A2* dominates A1*.
The closer h is to h*, the fewer extra nodes will be expanded.

Slide 71: What's a good heuristic? How do we find one?
If h1(n) < h2(n) ≤ h*(n) for all n, then both are admissible and h2 is better than (dominates) h1. Ways to find one:
– Relaxing the problem: remove constraints to create a (much) easier problem; use the solution cost for that problem as the heuristic function.
– Combining heuristics: take the max of several admissible heuristics; the result is still admissible, and it's better!
– Pattern databases: exact values for simpler subsets of the problem.
– Identify good features, then use a learning algorithm to find a heuristic function: may lose admissibility.

Slide 72: Relaxed problems
A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
– If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
– If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.

Slide 73: Some Examples of Heuristics?
8-puzzle? Mapquest driving directions? Minesweeper? Crossword puzzle? Making a medical diagnosis? ??

Slide 74: Some Examples of Heuristics?
– 8-puzzle: Manhattan distance
– Mapquest driving directions: straight-line distance
– Crossword puzzle? Making a medical diagnosis? ??

Slide 75: Local search algorithms
In many optimization problems the path to the goal is irrelevant; the goal state itself is the solution.
– State space = set of "complete" configurations
– Find a configuration satisfying constraints, e.g., n-queens
In such cases we can use local search algorithms: keep a single "current" state and try to improve it.

Slide 76: Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal.

Slide 77: Hill-climbing search
If there exists a successor s for the current state n such that
– h(s) < h(n), and
– h(s) ≤ h(t) for all successors t of n,
then move from n to s; otherwise, halt at n.
Hill climbing looks one step ahead to determine if any successor is better than the current state; if there is one, it moves to the best successor. It is similar to greedy search in that it uses h, but it does not allow backtracking or jumping to an alternative path, since it doesn't "remember" where it has been. It is not complete, since the search will terminate at "local minima," "plateaus," and "ridges."
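A direct transcription of that rule into Python; `successors` and `h` are illustrative names, with h treated as a cost to be minimized, as on this slide:

```python
def hill_climbing(start, successors, h):
    current = start
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        best = min(neighbors, key=h)   # best one-step-ahead successor s
        if h(best) >= h(current):
            return current             # halt: local minimum, plateau, or ridge
        current = best                 # h(s) < h(n): move from n to s
```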

Slide 78: Hill-climbing search
"Like climbing Everest in thick fog with amnesia" (figure)

Slide 79: Hill climbing example (figure: a sequence of 8-puzzle boards from the start state to the goal, scored by f(n) = -(number of tiles out of place) and improving from h = -5 at the start to h = 0 at the goal)

Slide 80: Drawbacks of hill climbing
Problems:
– Local maxima: peaks that aren't the highest point in the space.
– Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk).
– Ridges: flat like a plateau, but with dropoffs to the sides; steps to the north, east, south and west may go down, but a step to the northwest may go up.
Remedy:
– Random restart.
Some problem spaces are great for hill climbing and others are terrible.

Slide 81: Hill-climbing search
Problem: depending on the initial state, hill climbing can get stuck in local maxima. (Figure: a state-space landscape with local and global maxima.)

Slide 82: Example of a local maximum (figure: 8-puzzle boards around a local maximum, with scores of -3 and -4 near the start and 0 at the goal)

Slide 83: Simulated annealing
Simulated annealing (SA) exploits an analogy between the way in which a metal cools and freezes into a minimum-energy crystalline structure (the annealing process) and the search for a minimum (or maximum) in a more general system. SA can avoid becoming trapped at local minima. SA uses a random search that accepts changes that increase the objective function f, as well as some that decrease it. SA uses a control parameter T, which by analogy with the original application is known as the system "temperature." T starts out high and gradually decreases toward 0.

Slide 84: Simulated annealing search
Idea: escape local maxima by allowing some "bad" moves but gradually decreasing their frequency. (Figure: pseudocode.)
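A compact sketch of the idea: improving moves are always accepted, and worsening moves are accepted with probability e^(ΔE/T), which shrinks as the temperature T decays toward 0. The geometric cooling schedule and all names here are illustrative choices, not from the slides:

```python
import math
import random

def simulated_annealing(start, successors, value,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    """Maximizes `value`; accepts some "bad" moves, less often as T falls."""
    current = start
    t = t0
    while t > t_min:
        candidate = random.choice(successors(current))
        delta = value(candidate) - value(current)   # > 0 means a better move
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate                     # occasionally accept worse
        t *= cooling                                # temperature decreases toward 0
    return current
```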

Slide 85: Properties of simulated annealing search
One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1.
Widely used in VLSI layout, airline scheduling, etc.

Slide 86: Local beam search
– Keep track of k states rather than just one.
– Start with k randomly generated states.
– At each iteration, all the successors of all k states are generated and evaluated.
– If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
Stochastic beam search chooses the k successors as a weighted random function of their value.
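A sketch of those four steps; `h` is a cost to minimize and the other names are illustrative:

```python
import heapq

def local_beam_search(start_states, successors, h, goal_test, k):
    states = list(start_states)                   # k randomly generated states
    while True:
        pool = [s for state in states for s in successors(state)]
        for s in pool:                            # stop if any successor is a goal
            if goal_test(s):
                return s
        if not pool:
            return min(states, key=h)             # nowhere left to go
        states = heapq.nsmallest(k, pool, key=h)  # keep the k best successors
```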

Slide 87: Genetic Algorithms
A variant of stochastic beam search where each successor is generated from two predecessor (parent) states. Each state (individual) is represented by a string over a finite alphabet, typically binary. New individuals are created by:
– choosing pairs of individuals by a weighted random function
– for each pair, choosing a crossover point
– creating a new individual from the head of one parent and the tail of the other, switching at the crossover point
– randomly mutating some positions with some probability

Slide 88: Example: 8 Queens
An individual is represented by an 8-position vector describing the row of each queen, e.g., 16247483.

Slide 89: GA Continued
Choosing 24748552 and 32752441 with a crossover at 3 gives us 32748552; a mutation at 6 then gives us 32748152.
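The crossover and mutation steps from this example in code. The fitness-weighted selection of parents is omitted, and the names are illustrative:

```python
import random

def crossover(head_parent, tail_parent, point):
    """New individual: head of one parent, tail of the other."""
    return head_parent[:point] + tail_parent[point:]

def mutate(individual, rate=0.1):
    """Independently replace each position with probability `rate`."""
    digits = list(individual)
    for i in range(len(digits)):
        if random.random() < rate:
            digits[i] = str(random.randint(1, 8))   # a new row for that queen
    return ''.join(digits)

# The slide's example: head of 32752441, tail of 24748552, crossover at 3.
child = crossover('32752441', '24748552', 3)
print(child)   # 32748552; mutating position 6 to 1 would give 32748152
```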

Slide 90: Genetic Algorithms, Continued
The crossover operation gives "chunks" that may already be good a chance to stay together. A schema is a partial description of a state; for instance, 246***** describes 8-queens boards with the first three queens in rows 2, 4 and 6. If the fitness of a schema is above average, it will become more frequent in the population.

Slide 91: Genetic Algorithms
Useful when:
– we only care about the final state, and
– adjacent bits are somehow related to one another.
The representation and the fitness function are both critical to a good GA solution. There is still a lot of research going on about when genetic algorithms are a good choice.

Slide 92: Summary: Informed search
– Best-first search is general search where the minimum-cost nodes (according to some measure) are expanded first.
– Greedy search uses the minimal estimated cost h(n) to the goal state as its measure. This reduces search time, but the algorithm is neither complete nor optimal.
– A* search combines uniform-cost search and greedy search: f(n) = g(n) + h(n), provided A* handles state repetitions and h(n) never overestimates. A* is complete, optimal and optimally efficient, but its space complexity is still bad; its time complexity depends on the quality of the heuristic function.
– Local search techniques are useful when you don't care about the path, only the result. Examples include hill climbing, simulated annealing, local beam search, and genetic algorithms.

Slide 93: Search vs Retrieval
We are using "search" to mean finding a solution in a state space. The colloquial use of the term search is more formally retrieval:
– there is a set of possible artifacts: the corpus
– we want to find one or more relevant records/pages/facts in the corpus
– the basic process is: index the relevant dimensions (in a hash table, for instance), define a query in terms of those dimensions, and retrieve the records that "match" the query.

Slide 94: Search Summary
– For uninformed search there are tradeoffs between time and space complexity, with iterative deepening often the best choice.
– For non-adversarial informed search, A* is usually the best choice; the better the heuristic, the better the performance. The better we can capture domain knowledge in the heuristic function, the better we can do.
– Retrieval is a different paradigm from state space search.

