
1
Midterm Review Dr. Bernard Chen Ph.D. University of Central Arkansas Spring 2011

2
Outline Ch3 Structures and Strategies for State Space Search Ch4 Heuristic Search Ch5 Stochastic Search

3
Introduction to Representation The function of a representation is to capture the critical features of a problem and make that information accessible to a problem-solving procedure Expressiveness (the result of the features abstracted) and efficiency (the computational complexity) are the major dimensions for evaluating knowledge representations

4
Introduction to Search Given a representation, the second component of intelligent problem solving is search Humans generally consider a number of alternative strategies on their way to solving a problem In chess, for example, a player reviews alternative moves and selects the “best” one A player may also settle for a short-term gain

5
Introduction to Search We can represent this collection of possible moves by regarding each board configuration as a state in a graph The links of the graph represent legal moves The resulting structure is a state space graph
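The idea above can be sketched in a few lines of Python. This is a minimal illustration, not from the slides: the state names and moves are made up, and a "solution" is simply a sequence of states connected by legal moves.

```python
# A state space graph as an adjacency map: states are labels,
# edges are the legal moves between them (illustrative names).
moves = {
    "S": ["A", "B"],   # from the start state S there are two legal moves
    "A": ["G"],        # A leads directly to the goal G
    "B": ["A", "G"],
    "G": [],           # goal state: no successors needed
}

def is_solution_path(path, graph, start, goal):
    """Check that a sequence of states is a legal path from start to goal."""
    if not path or path[0] != start or path[-1] != goal:
        return False
    return all(b in graph[a] for a, b in zip(path, path[1:]))

print(is_solution_path(["S", "B", "G"], moves, "S", "G"))  # True
```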

6
“tic-tac-toe” state space graph

7
State Space Representation State space search characterizes problem solving as the process of finding a solution path from the start state to a goal A goal may describe a state, such as the winning board in tic-tac-toe

8
State Space Representation A goal configuration in the 8-puzzle

9
State Space Representation The Traveling salesperson problem Suppose a salesperson has five cities to visit and then must return home The goal of the problem is to find the shortest path for the salesperson to travel
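For five cities the problem is small enough to solve by exhaustive search, which makes a useful baseline sketch. The city names and distances below are invented for illustration; only the brute-force structure (try every ordering, keep the cheapest round trip) follows the problem statement.

```python
from itertools import permutations

# Brute-force traveling-salesperson sketch: enumerate all orderings of the
# five cities and keep the cheapest tour that starts and ends at home.
dist = {
    ("home", "a"): 7, ("home", "b"): 3, ("home", "c"): 5, ("home", "d"): 9, ("home", "e"): 4,
    ("a", "b"): 2, ("a", "c"): 6, ("a", "d"): 4, ("a", "e"): 8,
    ("b", "c"): 3, ("b", "d"): 7, ("b", "e"): 5,
    ("c", "d"): 2, ("c", "e"): 6,
    ("d", "e"): 3,
}

def d(x, y):
    """Symmetric lookup into the distance table."""
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

def shortest_tour(cities, start="home"):
    best_len, best = float("inf"), None
    for order in permutations(cities):          # 5! = 120 candidate tours
        tour = (start,) + order + (start,)
        length = sum(d(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best = length, tour
    return best, best_len

tour, length = shortest_tour(["a", "b", "c", "d", "e"])
print(tour, length)
```

Exhaustive enumeration is feasible here only because 5! is tiny; the factorial growth of the tour count is exactly why the traveling salesperson problem motivates heuristic search.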

10
State Space Representation

11
BFS and DFS In addition to specifying a search direction (data-driven or goal-driven), a search algorithm must determine the order in which states are examined in the graph Two possibilities: Depth-first search Breadth-first search
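The two orderings differ only in how the frontier is managed: a queue gives breadth-first search, a stack gives depth-first search. A minimal sketch over an explicit graph (node names are illustrative, not from the slides):

```python
from collections import deque

# Breadth-first vs depth-first search over an explicit graph.
# Both return a path from start to goal, or None if there is none.
def search(graph, start, goal, depth_first=False):
    frontier = deque([[start]])          # frontier holds partial paths
    visited = {start}
    while frontier:
        # stack behavior (pop) for DFS, queue behavior (popleft) for BFS
        path = frontier.pop() if depth_first else frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
print(search(graph, "S", "G"))                     # ['S', 'A', 'G']
print(search(graph, "S", "G", depth_first=True))   # ['S', 'B', 'G']
```

On this graph BFS expands S's children in order and reaches the goal through A, while DFS dives into the most recently generated branch and reaches it through B.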

12
8-puzzle BFS

13
8-puzzle DFS

14
Outline Ch3 Structures and Strategies for State Space Search Ch4 Heuristic Search Ch5 Stochastic Search

15
Introduction George Polya defines heuristic as “the study of the methods and rules of discovery and invention” This meaning can be traced to the term’s Greek root, the verb eurisco, which means “I discover” When Archimedes emerged from his famous bath clutching the golden crown, he shouted “Eureka!”, meaning “I have found it!” In AI, heuristics are formalized as rules for choosing those branches in a state space that are most likely to lead to an acceptable problem solution

16
Introduction Consider heuristics in the game of tic-tac-toe A simple analysis puts the total number of states at 9! Symmetry reductions decrease the search space Thus, there are not 9 but 3 distinct initial moves: to a corner to the center of a side to the center of the grid

17
Introduction

18
Use of symmetry on the second level further reduces the number of paths to 3 * 12 * 7! A simple heuristic can almost eliminate search entirely: we may move to the state in which X has the most winning opportunities In this case, X takes the center of the grid as the first move
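The "most winning opportunities" heuristic can be sketched directly: for each square, count how many of the eight winning lines through it remain open for X (contain no O). The board encoding below is an assumption for illustration.

```python
# Tic-tac-toe heuristic sketch: score a square by the number of winning
# lines through it that contain no O. Board is a 9-element list, indices:
#   0 1 2
#   3 4 5
#   6 7 8
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def open_lines_for_x(board, square):
    """Winning lines through `square` still available to X."""
    return sum(1 for line in LINES
               if square in line and all(board[i] != "O" for i in line))

empty = [" "] * 9
scores = {sq: open_lines_for_x(empty, sq) for sq in range(9)}
print(scores[4], scores[0], scores[1])  # 4 3 2
```

On the empty board the center lies on four winning lines, a corner on three, and the middle of a side on only two, which is exactly why the heuristic sends X to the center first.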

19
Introduction

20

21
Hill-Climbing The simplest way to implement heuristic search is through a procedure called hill-climbing It expands the current state of the search and evaluates its children The best child is selected for further expansion Neither its siblings nor its parent are retained The tic-tac-toe heuristic we just saw is an example
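The procedure above reduces to a short loop: expand, pick the best child, discard everything else, repeat until no child improves on the current state. A minimal sketch with a toy one-dimensional problem (the move generator and scoring function are stand-ins for a real problem's):

```python
# Hill-climbing sketch: commit to the best-scoring child at each step,
# retaining neither siblings nor the parent.
def hill_climb(state, children, score):
    while True:
        succs = children(state)
        if not succs:
            return state
        best = max(succs, key=score)
        if score(best) <= score(state):
            return state        # no child improves: stop (possibly a local maximum)
        state = best            # keep only the best child

# Toy example: climb toward the peak of f(x) = -(x - 3)^2 over the integers.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
print(peak)  # 3
```

Because nothing but the current state is retained, hill-climbing cannot back up; on a surface with a local maximum it would stop there, which is the classic weakness of the method.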

22
Dynamic Programming (DP) DP keeps track of and reuses the solutions to multiple interacting and interrelated subproblems An example is reusing the subseries solutions within the solution of the Fibonacci series The technique of caching subproblem solutions for reuse is sometimes called memoizing partial subgoal solutions
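The Fibonacci example can be sketched in a few lines; the caching is handled here by the standard-library `functools.lru_cache` decorator, one common way to memoize in Python:

```python
from functools import lru_cache

# Fibonacci with memoized subproblems: each fib(k) is computed once and
# reused, turning the naive exponential recursion into a linear one.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Without the cache, `fib(n - 1)` and `fib(n - 2)` would each recompute the same smaller subseries values over and over; with it, the interrelated subproblems are solved exactly once.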

23
Dynamic Programming (DP) [Figure: DP alignment table for the source string BAADDCABDDA and the target string BBADCBA]

24
Dynamic Programming (DP) [Figure: the alignment recovered from the table, pairing BAADDCABDDA with BBA_DC_B__A]

25
Best First Search For the 8-puzzle game, we may add different types of heuristic information to the code: The simplest heuristic counts the tiles out of place in each state A “better” heuristic sums all the distances by which the tiles are out of place
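Both heuristics can be sketched directly. The goal layout below is an assumption (textbooks vary in which configuration they use as the goal); the board is encoded as a tuple of nine values with 0 standing for the blank.

```python
# Two 8-puzzle heuristics for a 3x3 board stored row-by-row as a tuple,
# with 0 as the blank. GOAL is an assumed goal layout:
#   1 2 3
#   8 _ 4
#   7 6 5
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def misplaced_tiles(state, goal=GOAL):
    """Simplest heuristic: count tiles (not the blank) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """Better heuristic: sum each tile's grid distance to its goal square."""
    pos = {v: (i // 3, i % 3) for i, v in enumerate(goal)}
    return sum(abs(i // 3 - pos[v][0]) + abs(i % 3 - pos[v][1])
               for i, v in enumerate(state) if v != 0)

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)
print(misplaced_tiles(start), manhattan(start))
```

The Manhattan-distance heuristic dominates the misplaced-tile count (it is never smaller for any state), which is why it guides best-first search more effectively.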

26
Best First Search

27

28

29
Minimax Procedure on Exhaustively Searchable Graphs Let’s consider a variant of the game nim To play this game, a number of tokens are placed on a table between the two players At each move, a player must divide a pile of tokens into two nonempty piles of different sizes Thus, 6 tokens may be divided into piles of 5 & 1 or 4 & 2, but not 3 & 3 The first player who can no longer make a move loses the game

30
Minimax Procedure on Exhaustively Searchable Graphs State space for a variant of nim. Each state partitions the seven matches into one or more piles.

31
Minimax Procedure on Exhaustively Searchable Graphs

32
Minimax propagates these values up the graph through successive parent nodes according to the rule: If the parent is a MAX node, give it the maximum value among its children If the parent is a MIN node, give it the minimum value among its children

33
Minimax Procedure on Exhaustively Searchable Graphs

34
Exercises Perform MINIMAX on the tree shown in Figure 4.30.

35
Exercises

36
Consider 3D tic-tac-toe. How would you represent the state space? Analyze the complexity of the state space. Propose a heuristic for playing this game.

37
Outline Ch3 Structures and Strategies for State Space Search Ch4 Heuristic Search Ch5 Stochastic Search

38
Bayes’ Theorem P(A|B) = P(B|A) P(A) / P(B), where: P(A) and P(B) are the prior probabilities P(A|B) is the conditional probability of A, given B P(B|A) is the conditional probability of B, given A

39
Exercises Suppose an automobile insurance company classifies a driver as good, average, or bad. Of all their insured drivers, 25% are classified good, 50% average, and 25% bad. Suppose for the coming year, a good driver has a 5% chance of having an accident, an average driver has a 15% chance, and a bad driver has a 25% chance. If John had an accident in the past year, what is the probability that John is a good driver?
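The exercise is a direct application of Bayes' theorem: P(good | accident) = P(accident | good) P(good) / P(accident), with P(accident) obtained by total probability over the three driver classes. A sketch of the computation with the numbers from the exercise:

```python
# Bayes' theorem on the insurance exercise.
priors = {"good": 0.25, "average": 0.50, "bad": 0.25}
p_accident_given = {"good": 0.05, "average": 0.15, "bad": 0.25}

# Total probability of an accident across all driver classes.
p_accident = sum(priors[c] * p_accident_given[c] for c in priors)

# P(good | accident) = P(accident | good) * P(good) / P(accident)
p_good_given_accident = p_accident_given["good"] * priors["good"] / p_accident
print(round(p_accident, 4), round(p_good_given_accident, 4))  # 0.15 0.0833
```

P(accident) works out to 0.0125 + 0.075 + 0.0625 = 0.15, so P(good | accident) = 0.0125 / 0.15 = 1/12 ≈ 0.083: having had an accident makes it quite unlikely that John is a good driver.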

40
Exercises

41
Naïve Bayesian Classifier: Training Dataset Class: C1: buys_computer = ‘yes’ C2: buys_computer = ‘no’ Data sample X = (age <= 30, Income = medium, Student = yes, Credit_rating = fair)

42
Naïve Bayesian Classifier: An Example P(C i ): P(buys_computer = “yes”) = 9/14 = 0.643 P(buys_computer = “no”) = 5/14 = 0.357 Compute P(X|C i ) for each class P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222 P(age = “<=30” | buys_computer = “no”) = 3/5 = 0.6 P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444 P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4 P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667 P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2 P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667 P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4

43
Naïve Bayesian Classifier: An Example X = (age <= 30, income = medium, student = yes, credit_rating = fair) P(X|C i ): P(X|buys_computer = “yes”) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044 P(X|buys_computer = “no”) = 0.6 x 0.4 x 0.2 x 0.4 = 0.019 P(X|C i ) * P(C i ): P(X|buys_computer = “yes”) * P(buys_computer = “yes”) = 0.028 P(X|buys_computer = “no”) * P(buys_computer = “no”) = 0.007 Therefore, X belongs to class (“buys_computer = yes”)
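The whole classification step can be sketched by multiplying out the conditional probabilities from the example, using the fractions given on the slides:

```python
from math import prod

# Naïve Bayesian classification of X = (age<=30, income=medium,
# student=yes, credit_rating=fair), with probabilities from the example.
p_class = {"yes": 9 / 14, "no": 5 / 14}

# P(attribute value | class) for each of X's four attribute values.
likelihoods = {
    "yes": [2 / 9, 4 / 9, 6 / 9, 6 / 9],
    "no":  [3 / 5, 2 / 5, 1 / 5, 2 / 5],
}

# Naïve assumption: attributes are conditionally independent given the
# class, so P(X|Ci) is a simple product; then multiply by the prior.
scores = {c: prod(likelihoods[c]) * p_class[c] for c in p_class}
prediction = max(scores, key=scores.get)
print(prediction)  # yes
```

The "yes" score (about 0.028) beats the "no" score (about 0.007), so X is classified as buys_computer = yes.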

44
Naïve Bayesian Classifier: An Example Test on the following example: X = (age > 30, Income = low, Student = yes, Credit_rating = excellent)

45
So how is “tomato” pronounced? A probabilistic finite state acceptor for the pronunciation of “tomato”, adapted from Jurafsky and Martin (2000).

46
Outline Expert System introduction Rule-Based Expert System Goal Driven Approach Data Driven Approach Model-Based Expert System

47
The Design of Rule-Based Expert Systems The architecture of a typical expert system for a particular problem domain.

48
Strategies for state space search In data-driven search, also called forward chaining, the problem solver begins with the given facts of the problem and a set of legal moves for changing state This process continues until (we hope!) it generates a path that satisfies the goal condition

49
Strategies for state space search An alternative approach (goal-driven), also called backward chaining, is to start with the goal that we want to solve See which rules can generate this goal and determine what conditions must be true to use them These conditions become the new goals Work backward through successive subgoals until (we hope again!) the search works back to the given facts of the problem

50
An unreal Expert System Example
Rule 1: if the engine is getting gas, and the engine will turn over, then the problem is spark plugs.
Rule 2: if the engine does not turn over, and the lights do not come on, then the problem is battery or cables.
Rule 3: if the engine does not turn over, and the lights do come on, then the problem is the starter motor.
Rule 4: if there is gas in the fuel tank, and there is gas in the carburetor, then the engine is getting gas.
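A tiny forward-chaining (data-driven) interpreter over these four rules can be sketched directly: fire any rule whose premises are all in working memory, add its conclusion, and repeat until nothing new can be concluded. The fact strings below are paraphrases of the rule premises, chosen for illustration.

```python
# Forward chaining over the four car-diagnosis rules: each rule is a
# (set of premises, conclusion) pair over simple fact strings.
RULES = [
    ({"engine is getting gas", "engine turns over"}, "problem is spark plugs"),
    ({"engine does not turn over", "lights do not come on"}, "problem is battery or cables"),
    ({"engine does not turn over", "lights do come on"}, "problem is the starter motor"),
    ({"gas in fuel tank", "gas in carburetor"}, "engine is getting gas"),
]

def forward_chain(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:                       # keep passing over the rules
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)    # rule fires: add its conclusion
                changed = True
    return facts

facts = forward_chain({"gas in fuel tank", "gas in carburetor", "engine turns over"})
print("problem is spark plugs" in facts)  # True
```

Starting from the observable facts, Rule 4 fires first and concludes the engine is getting gas; on the next pass that conclusion lets Rule 1 fire, yielding the spark-plug diagnosis; the same pattern the production-system figures trace in the following slides.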

51
The production system at the start of a consultation in the car diagnostic example.

52
The production system after Rule 1 has fired.

53
The system after Rule 4 has fired. Note the stack-based approach to goal reduction.

54
The and/or graph searched in the car diagnosis example, with the conclusion of Rule 4 matching the first premise of Rule 1.

55
Data-Driven Reasoning

56
The production system after evaluating the first premise of Rule 2, which then fails.

57
The data-driven production system after considering Rule 4, beginning its second pass through the rules.

58
Model-Based Expert System A more robust, deeply explanatory approach would begin with a detailed model of the physical structure of the circuit and equations describing the expected behavior of each component and their interactions. A knowledge-based reasoner whose analysis is founded directly on the specification and functionality of a physical system is called a MODEL-BASED system.

59
NASA Example
