Presentation on theme: "Cooperating Intelligent Systems"— Presentation transcript:

1 Cooperating Intelligent Systems
Learning from observations Chapter 18, AIMA Machine Learning

2 Two types of learning in AI
Deductive: deduce rules/facts from already known rules/facts. (We have already dealt with this.) Inductive: learn new rules/facts from a data set D. We will be dealing with the latter, inductive learning, from now on.

3 Two types of inductive learning
Supervised: the machine has access to a teacher who corrects it. Unsupervised: no access to a teacher. Instead, the machine must search for “order” and “structure” in the environment.

4 Two tracks Regression: Learning function values
Classification: Learning categories

5 Inductive learning - example A
f(x) is the target function. An example is a pair [x, f(x)]. Learning task: find a hypothesis h such that h(x) ≈ f(x), given a training set of examples D = {[xi, f(xi)]}, i = 1, 2, …, N. Inspired by a slide from V. Pavlovic.

6 Inductive learning – example B
Construct h so that it agrees with f. The hypothesis h is consistent if it agrees with f on all observations. Ockham’s razor: select the simplest consistent hypothesis. How do we achieve good generalization? (Figure panels: consistent linear fit, inconsistent linear fit, consistent 6th order polynomial fit, consistent 7th order polynomial fit, consistent sinusoidal fit.)
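As a small illustration (not from the slides), the following Python sketch fits both a simple linear hypothesis and a fully consistent high-order polynomial to a handful of noisy points; the assumed true function, noise level, and sample sizes are arbitrary choices, and the consistent polynomial typically generalizes worse:

import numpy as np

rng = np.random.default_rng(0)
g = lambda x: 0.5 * x                                        # assumed "true" target function
x_train = np.linspace(0.0, 6.0, 7)
y_train = g(x_train) + rng.normal(0.0, 0.3, x_train.shape)   # noisy observations

# Degree 1: simple, usually not consistent with every point.
# Degree 6: 7 coefficients for 7 points, so it interpolates them (consistent).
linear = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
poly6 = np.polynomial.Polynomial.fit(x_train, y_train, deg=6)

x_test = np.linspace(0.0, 6.0, 200)
mse = lambda h: float(np.mean((h(x_test) - g(x_test)) ** 2))
print(f"max training residual, degree 6: {np.max(np.abs(poly6(x_train) - y_train)):.1e}")
print(f"test error, linear fit:   {mse(linear):.3f}")
print(f"test error, degree-6 fit: {mse(poly6):.3f}")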

7 Inductive learning – example C
Example from V. Pavlovic, Rutgers.

8 Inductive learning – example C
Example from V. Pavlovic, Rutgers.

9 Inductive learning – example C
Example from V. Pavlovic, Rutgers.

10 Inductive learning – example C
Sometimes a consistent hypothesis is worse than an inconsistent one. Example from V. Pavlovic, Rutgers.

11 The idealized inductive learning problem
Find an appropriate hypothesis space H and find h(x) ∈ H with minimum “distance” to f(x) (the “error”). The learning problem is realizable if f(x) ∈ H. (Figure: our hypothesis space H, the target f(x), the best hypothesis hopt(x) ∈ H, and the error between them.)

12 The real inductive learning problem
Find an appropriate hypothesis space H and minimize the expected distance to f(x) (the “generalization error” Egen). Data is never noise free and never available in infinite amounts, so we get variation both in the data and in the model. The generalization error is a function of both the training data and the hypothesis selection method. (Figure: hypothesis space H, the set of possible targets {f(x)}, the set of selected hypotheses {hopt(x)}, and Egen between them.)

13 Hypothesis spaces (examples)
f(x) = x + x² + 6x³. Hypothesis spaces: H1 = {a + bx} (linear), H2 = {a + bx + cx²} (quadratic), H3 = {a + bx + cx² + dx³} (cubic), with H1 ⊂ H2 ⊂ H3.
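A minimal sketch (illustrative, not from the slides) of realizability: least-squares fits in H1, H2, and H3 to noise-free samples of f(x) = x + x² + 6x³ drive the error to zero only in H3, where the problem is realizable:

import numpy as np

f = lambda x: x + x**2 + 6 * x**3          # target function from the slide
x = np.linspace(-1.0, 1.0, 50)
y = f(x)                                    # noise-free samples of f

for deg, name in [(1, "H1 (linear)"), (2, "H2 (quadratic)"), (3, "H3 (cubic)")]:
    h = np.polynomial.Polynomial.fit(x, y, deg=deg)
    print(f"{name}: max |h(x) - f(x)| = {np.max(np.abs(h(x) - y)):.1e}")
# Only H3 contains f, so only the cubic fit drives the error to (numerically) zero.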

14 Learning problems The hypothesis takes as input a set of attributes x and returns a ”decision” h(x), the predicted (estimated) output value for the input x. Discrete-valued function ⇒ classification; continuous-valued function ⇒ regression.

15 Classification Assign the input to one out of several classes. (Figure: mapping from the input space to the output/category space.)

16 Example: Robot color vision
Classify the Lego pieces into red, blue, and yellow. Classify white balls, black sideboard, and green carpet. Input = pixel in image, output = category

17 The “fixed regressor model”
Regression: in the “fixed regressor model”, the observed output is the true underlying function plus noise, f(x) = g(x) + e, where x is the observed input, f(x) the observed output, g(x) the true underlying function, and e an i.i.d. noise process with zero mean.
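A minimal sketch of the fixed regressor model; the particular g(x) and noise level are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)
g = lambda x: np.sin(x)                          # assumed true underlying function g(x)
x = np.linspace(0.0, 2 * np.pi, 100)             # observed inputs
e = rng.normal(loc=0.0, scale=0.1, size=x.shape) # i.i.d. noise with zero mean
f_obs = g(x) + e                                 # observed outputs: f(x) = g(x) + e
print(f"empirical noise mean: {e.mean():+.3f} (close to zero)")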

18 Example: Predict price for cotton futures
Input: past history of closing prices and trading volume. Output: predicted closing price.

19 Now... let’s look at a classification problem: predicting whether a certain person will choose a particular restaurant.

20 Method: Decision trees
“Divide and conquer”: split the data into smaller and smaller subsets. Splits are usually on a single variable. (Tree diagram: the root asks “x1 > a?”; each of its yes/no branches then asks “x2 > b?” or “x2 > g?”, again with yes/no branches.)

21 The wait@restaurant decision tree
This is our true function. Can we learn this tree from examples?

22 Inductive learning of decision tree
Simplest: construct a decision tree with one leaf for every example (memory-based learning). Not very good generalization. Advanced: split on each variable so that the purity of each split increases (i.e. branches become either only yes or only no). Purity is measured, e.g., with entropy.

24 Inductive learning of decision tree
Purity is measured, e.g., with entropy. General form: H = − Σi pi log pi, where pi is the fraction of examples of class i in the node.

25 The entropy is maximal when all possibilities are equally likely.
The goal of the decision tree is to decrease the entropy in each node. Entropy is zero in a pure ”yes” node (or pure ”no” node). Entropy is a measure of ”order” in a system. The second law of thermodynamics: Elements in a closed system tend to seek their most probable distribution; in a closed system entropy always increases

26 Decision tree learning algorithm
Create pure nodes whenever possible. If pure nodes are not possible, choose the split that leads to the largest decrease in entropy.
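A minimal sketch of this split criterion in Python. The entropy values on the following slides appear to use base-10 logarithms (the 6 True / 6 False root has entropy 0.30 rather than 1 bit), so the sketch does the same:

import math

def entropy(counts):
    # Entropy of a node given its class counts, e.g. [6, 6] for 6 True / 6 False
    n = sum(counts)
    return -sum(c / n * math.log10(c / n) for c in counts if c > 0)

def entropy_decrease(parent_counts, branches):
    # branches: one class-count list per branch of the candidate split
    n = sum(parent_counts)
    remainder = sum(sum(b) / n * entropy(b) for b in branches)
    return entropy(parent_counts) - remainder

print(f"root entropy (6 T, 6 F): {entropy([6, 6]):.2f}")   # about 0.30
# Choose the split with the largest entropy decrease; a pure branch
# such as [4, 0] contributes zero entropy to the remainder.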

27 Decision tree learning example
10 attributes:
Alternate: Is there a suitable alternative restaurant nearby? {yes, no}
Bar: Is there a bar to wait in? {yes, no}
Fri/Sat: Is it Friday or Saturday? {yes, no}
Hungry: Are you hungry? {yes, no}
Patrons: How many are seated in the restaurant? {none, some, full}
Price: Price level {$, $$, $$$}
Raining: Is it raining? {yes, no}
Reservation: Did you make a reservation? {yes, no}
Type: Type of food {French, Italian, Thai, Burger}
Wait: Estimated waiting time {0–10 min, 10–30 min, 30–60 min, >60 min}

28 Decision tree learning example
(Table of the 12 training examples, omitted here: 6 True, 6 False; T = True, F = False.)

29 Decision tree learning example
Split on Alternate? Yes branch: 3 T, 3 F; No branch: 3 T, 3 F. Entropy decrease = 0.30 – 0.30 = 0.

30 Decision tree learning example
Split on Bar? Yes branch: 3 T, 3 F; No branch: 3 T, 3 F. Entropy decrease = 0.30 – 0.30 = 0.

31 Decision tree learning example
Split on Sat/Fri? Yes branch: 2 T, 3 F; No branch: 4 T, 3 F. Entropy decrease = 0.30 – 0.29 = 0.01.

32 Decision tree learning example
Split on Hungry? Yes branch: 5 T, 2 F; No branch: 1 T, 4 F. Entropy decrease = 0.30 – 0.24 = 0.06.

33 Decision tree learning example
Split on Raining? Yes branch: 2 T, 2 F; No branch: 4 T, 4 F. Entropy decrease = 0.30 – 0.30 = 0.

34 Decision tree learning example
Split on Reservation? Yes branch: 3 T, 2 F; No branch: 3 T, 4 F. Entropy decrease = 0.30 – 0.29 = 0.01.

35 Decision tree learning example
Split on Patrons? None branch: 2 F; Some branch: 4 T; Full branch: 2 T, 4 F. Entropy decrease = 0.30 – 0.14 = 0.16.

36 Decision tree learning example
Split on Price? $ branch: 3 T, 3 F; $$ branch: 2 T; $$$ branch: 1 T, 3 F. Entropy decrease = 0.30 – 0.23 = 0.07.

37 Decision tree learning example
Split on Type? French branch: 1 T, 1 F; Italian branch: 1 T, 1 F; Thai branch: 2 T, 2 F; Burger branch: 2 T, 2 F. Entropy decrease = 0.30 – 0.30 = 0.

38 Decision tree learning example
Split on estimated waiting time? 0–10 branch: 4 T, 2 F; 10–30 branch: 1 T, 1 F; 30–60 branch: 1 T, 1 F; >60 branch: 2 F. Entropy decrease = 0.30 – 0.24 = 0.06.

39 Decision tree learning example
Largest entropy decrease (0.16) is achieved by splitting on Patrons. (Tree so far: Patrons? None branch: 2 F, a pure leaf; Some branch: 4 T, a pure leaf; Full branch: 2 T, 4 F, to be split further on some attribute X?.) Continue like this, making new splits and always purifying the nodes.
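As a check on the slide numbers (same base-10 entropy as in the earlier sketch), this small snippet reproduces the 0.16 decrease for Patrons and the 0.06 decrease for Hungry; the branch counts are taken from the slides:

import math

def H(counts):
    # Node entropy from class counts (True, False), base-10 logarithm
    n = sum(counts)
    return -sum(c / n * math.log10(c / n) for c in counts if c > 0)

def decrease(parent, branches):
    n = sum(parent)
    return H(parent) - sum(sum(b) / n * H(b) for b in branches)

root = [6, 6]                                                      # 6 True, 6 False at the root
print(f"Patrons: {decrease(root, [[0, 2], [4, 0], [2, 4]]):.2f}")  # None, Some, Full -> 0.16
print(f"Hungry:  {decrease(root, [[5, 2], [1, 4]]):.2f}")          # Yes, No -> 0.06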

40 Decision tree learning example
Induced tree (from examples)

41 Decision tree learning example
True tree

42 Decision tree learning example
Induced tree (from examples). The induced tree cannot be more complex than what the data supports.

43 How do we know it is correct?
How do we know that h ≈ f? (Hume's problem of induction.) Try h on a new test set of examples (cross-validation), and assume the ”principle of uniformity”, i.e. that the result we get on this test data is indicative of results on future data (causality is constant). Inspired by a slide by V. Pavlovic.

44 Learning curve for the decision tree algorithm on 100 randomly generated examples in the restaurant domain. The graph summarizes 20 trials.

45 Cross-validation Use a “validation set”: split your data set into two parts, one for training your model and the other for validating it. The error on the validation data is called the “validation error” (Eval). (Figure: the data split into Dtrain and Dval, with Eval measured on Dval.)

46 K-Fold Cross-validation
More accurate than using only one validation set. (Figure: the data is split into K folds; each fold in turn serves as Dval while the remaining folds form Dtrain, giving validation errors Eval(1), Eval(2), Eval(3), … which are then combined.)
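A minimal sketch of K-fold cross-validation; the data and the placeholder model (which simply predicts the training mean) are illustrative assumptions:

import numpy as np

def k_fold_cv(x, y, k, fit, predict):
    # Shuffle the indices and split them into k roughly equal folds
    idx = np.random.default_rng(0).permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[train], y[train])
        # Mean squared validation error on the held-out fold
        errors.append(float(np.mean((predict(model, x[val]) - y[val]) ** 2)))
    return float(np.mean(errors)), errors

# Placeholder "model": always predict the mean of the training targets
fit = lambda x_tr, y_tr: y_tr.mean()
predict = lambda model, x_val: np.full(len(x_val), model)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
y = 2 * x + rng.normal(0.0, 0.1, 30)
e_mean, e_folds = k_fold_cv(x, y, k=3, fit=fit, predict=predict)
print(f"E_val per fold: {[round(e, 3) for e in e_folds]}, average: {e_mean:.3f}")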

47 PAC Any hypothesis that is consistent with a sufficiently large set of training (and test) examples is unlikely to be seriously wrong; it is probably approximately correct (PAC). What is the relationship between the generalization error and the number of samples needed to achieve this generalization error?

48 The error
X = the set of all possible examples (instance space). D = the distribution of these examples. H = the hypothesis space (h ∈ H). N = the number of training examples. (Figure: the instance space X, with the region where f and h disagree.) Image adapted from F., KTH.

49 Probability for bad hypothesis
Suppose we have a bad hypothesis h with error(h) > ε. What is the probability that it is consistent with N samples? The probability of being inconsistent with one sample is error(h) > ε. The probability of being consistent with one sample is 1 – error(h) < 1 – ε. The probability of being consistent with N independently drawn samples is therefore < (1 – ε)^N.

50 Probability for bad hypothesis
What is the probability that the set Hbad of bad hypotheses with error(h) > ε contains a consistent hypothesis? P(Hbad contains a consistent hypothesis) ≤ |Hbad|(1 – ε)^N ≤ |H|(1 – ε)^N.

51 Probability for bad hypothesis
What is the probability that the set Hbad of bad hypotheses with error(h) > ε contains a consistent hypothesis? It is at most |H|(1 – ε)^N. If we want this to be less than some constant δ, then |H|(1 – ε)^N ≤ δ.

52 Probability for bad hypothesis
What is the probability that the set Hbad of bad hypotheses with error(h) > ε contains a consistent hypothesis? It is at most |H|(1 – ε)^N. If we want this to be less than some constant δ, it suffices to have N ≥ (1/ε)(ln|H| + ln(1/δ)) training examples. Don’t expect to learn very well if H is large.
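To get a feel for the bound, this small snippet plugs in illustrative values of ε, δ, and |H|; note how the required N grows with ln|H|:

import math

def pac_samples(h_size, eps, delta):
    # Number of examples sufficient for probably approximately correct learning:
    # N >= (1/eps) * (ln|H| + ln(1/delta))
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

for h_size in (10**3, 10**6, 2**(2**8)):   # last one: all Boolean functions of 8 attributes
    print(f"|H| = {float(h_size):.2e}: N >= {pac_samples(h_size, eps=0.1, delta=0.05)}")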

53 How do we make learning work?
Use simple hypotheses: always start with the simple ones first. Constrain H with priors: do we know something about the domain? Do we have reasonable a priori beliefs about the parameters? Use many observations: easy to say... Cross-validation...

