
1 Machine Learning Chapter 5

2 Artificial Intelligence, Chapter 5: Learning
1. Rote learning. rote n.: a fixed, habitual way of doing something; "by rote" = from memory, by memorization alone; syn. repetition.
2. Learning by taking advice
3. Learning by problem solving: parameter adjustment; macro-operators
4. Learning from examples: induction (Winston's learning program, p. 458); version spaces (the candidate elimination algorithm); decision trees
5. Explanation-based learning (p. 482)
6. Formal learning theory

3 Winston's learning program

4 Winston's learning program
Concept (p. 459):
Begin with a structural description of one known instance of the concept; call this description the concept definition.
Examine descriptions of other known instances of the concept, and generalize the definition to include them.
Examine descriptions of near misses of the concept, and restrict the definition to exclude them.
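The generalize/restrict loop above can be sketched in code. This is a minimal illustration only: structural descriptions are reduced to flat sets of relation triples, whereas the actual program uses semantic networks and a richer matching process, and the ARCH data below is a made-up example, not the book's Figure 17.2.

```python
# Minimal sketch of Winston-style concept refinement, assuming
# descriptions are flat sets of relation triples (the real program
# uses semantic networks and a structural matcher).

def generalize(concept, positive):
    """Generalize: keep only the relations shared with a new
    positive instance of the concept."""
    return {r for r in concept if r in positive}

def restrict(concept, near_miss):
    """Restrict: relations the near miss lacks become MUST links,
    so future candidates missing them are excluded."""
    return {('MUST',) + r if r not in near_miss else r for r in concept}

# Hypothetical ARCH: two posts A and B supporting a lintel.
arch = {('supports', 'A', 'top'), ('supports', 'B', 'top'),
        ('left-of', 'A', 'B')}
# A second, mirrored arch, so ('left-of', 'A', 'B') no longer holds.
arch2 = {('supports', 'A', 'top'), ('supports', 'B', 'top'),
         ('left-of', 'B', 'A')}
concept = generalize(arch, arch2)   # drops the left-of relation

# Near miss: B supports nothing, so support by B becomes mandatory.
near_miss = {('supports', 'A', 'top'), ('left-of', 'A', 'B')}
concept = restrict(concept, near_miss)
```

After the near miss, the shared relation the near miss lacks is promoted to `('MUST', 'supports', 'B', 'top')`, which is the "restrict the definition to exclude it" step in miniature.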

5 [Figure 17.2: HOUSE and ARCH block structures]

6 Winston's learning program

7 Winston's learning program

8 Winston's learning program

9 Winston's learning program (p. 458)
Block world concept: Figure 17.2, p. 459
Structural description: Figure 17.3, p. 460
The comparison of two arches: Figure 17.4, p. 461
The arch description after two examples: Figure 17.5, p. 462
The arch description after a near miss: Figure 17.6, p. 463
Uses semantic networks to describe block structures.
Uses a matching process to detect similarities and differences between structures.
Uses an isa hierarchy to describe relationships among already-known objects.

10 Semantic Network (isa)

11 Semantic Network

12 Version Spaces
The goal: to produce a description that is consistent with all positive examples but no negative examples in the training set.
Uses a frame to represent the concept "car"; see Figure 17.7, p. 463.
Features/slots take values from finite sets, e.g. origin: {Japan, USA, Britain}.
Variables: X1, X2, X3, ...
Concept space: see Figure 17.11, Concept of version spaces, p. 466 (variables, target concept, all training instances).

13 Version Spaces

14 Version Spaces
Version space = the current hypothesis set = a subset of the concept space = the largest collection of descriptions that is consistent with all the training examples seen so far.
The version space is bounded by two sets, G and S:
G contains the most general descriptions consistent with the training examples seen so far.
S contains the most specific descriptions consistent with the training examples seen so far.
A positive example (+) moves S toward the more general.
A negative example (-) moves G toward the more specific.
If the G and S sets converge, the hypothesis is a single concept description.

15 Version Spaces
Candidate elimination algorithm (pp. 466-467): narrows down the version space by removing any descriptions that are inconsistent with the training examples, updating the sets G and S.
Car example:
Figure 17.7 The concept "car": p. 463
Figure 17.8 Representation language for car: p. 464
Figure 17.9 The concept "Japanese car": p. 464
Figure 17.10 Partial ordering of concepts: p. 465
Figure 17.12 Positive and negative examples of cars: p. 467

16 Version Spaces

17 Version Spaces

18

19 Version Spaces

20 Version Spaces

21 Candidate Elimination Algorithm

22 Candidate Elimination Algorithm

23 Version Spaces: worked example
We want the concept "Japanese economy car". Training data from Figure 17.12, Positive and negative examples of cars, p. 467.
Hypotheses have the form [origin = X1, manufacturer = X2, color = X3, decade = X4, type = X5].
GET EX1 (+):
G = {(X1, X2, X3, X4, X5)}
S = {(Japan, Honda, Blue, 1980, Economy)}
GET EX2 (-):
G = {(X1, Honda, X3, X4, X5), (X1, X2, Blue, X4, X5), (X1, X2, X3, 1980, X5), (X1, X2, X3, X4, Economy)}
S = {(Japan, Honda, Blue, 1980, Economy)} ** unchanged, because negative examples do not move S
GET EX3 (+): check G first.
G = {(X1, X2, Blue, X4, X5), (X1, X2, X3, X4, Economy)}
S = {(Japan, X2, Blue, X4, Economy)}
GET EX4 (-): check G first.
G = {(Japan, X2, Blue, X4, X5), (Japan, X2, X3, X4, Economy)}
S = {(Japan, X2, Blue, X4, Economy)} ** unchanged, because negative examples do not move S
GET EX5 (+): check G first.
G = {(Japan, X2, X3, X4, Economy)}
S = {(Japan, X2, X3, X4, Economy)}
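The trace above follows mechanically from the candidate elimination update rules, and can be sketched compactly. Two caveats: the five training tuples below are reconstructed so that the run matches the slide's trace (the exact values of Figure 17.12 are an assumption), and the G-boundary handling is simplified (S is kept as a single hypothesis, and non-maximal members of G are never pruned, which this particular example does not need).

```python
# Simplified candidate elimination on the "Japanese economy car" example.
def covers(h, x):
    """h covers x if every attribute is a variable '?' or matches x."""
    return all(a == '?' or a == b for a, b in zip(h, x))

def generalize(s, x):
    """Minimally generalize the specific hypothesis s to cover positive x."""
    return tuple(a if a == b else '?' for a, b in zip(s, x))

def specialize(g, s, x):
    """Minimal specializations of g that exclude negative x, filling in
    the specific boundary's value wherever s and x disagree."""
    return [g[:i] + (s[i],) + g[i + 1:]
            for i in range(len(g))
            if g[i] == '?' and s[i] != '?' and s[i] != x[i]]

def candidate_elimination(examples):
    n = len(examples[0][0])
    G = [('?',) * n]   # most general boundary
    S = None           # most specific boundary (kept as one hypothesis)
    for x, positive in examples:
        if positive:
            S = x if S is None else generalize(S, x)
            G = [g for g in G if covers(g, x)]  # G must cover every (+)
        else:
            G = [h for g in G
                 for h in ([g] if not covers(g, x) else specialize(g, S, x))]
    return G, S

# Attribute order: (origin, manufacturer, color, decade, type).
data = [
    (('Japan', 'Honda', 'Blue', '1980', 'Economy'), True),    # EX1 (+)
    (('Japan', 'Toyota', 'Green', '1970', 'Sports'), False),  # EX2 (-)
    (('Japan', 'Toyota', 'Blue', '1990', 'Economy'), True),   # EX3 (+)
    (('USA', 'Chrysler', 'Blue', '1980', 'Economy'), False),  # EX4 (-)
    (('Japan', 'Honda', 'White', '1980', 'Economy'), True),   # EX5 (+)
]
G, S = candidate_elimination(data)
# G and S converge on (Japan, ?, ?, ?, Economy): "Japanese economy car"
```

Positive examples generalize S and filter G; negative examples specialize any member of G that still covers them, exactly as in the five-step trace.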

24 Version Spaces
Note: the algorithm is a least-commitment algorithm: it commits to as little as possible at each step.
Problems:
1. S and G may not converge to a single hypothesis.
2. With noise (inconsistent data), the algorithm can prune the target concept prematurely. For example, if the third example were given a negative (-) label instead of a positive (+) one, then no matter how much further data arrived, we could not find the concept. A fix is to maintain several G and S sets, but this is costly and can suffer from the bounded-inconsistency problem.
3. We cannot use OR in the concept descriptions, e.g. "Italian sports car or German luxury car".

25 Decision Tree
ID3 program: to classify a particular input, we start at the top of the tree and answer questions until we reach a leaf, where the classification is stored. See Figure 17.13, Decision tree, p. 470.
1. Choose a window: a random subset of the training examples, used for training.
2. Examples outside the window are used to test the decision tree.
3. Build up the decision tree iteratively, from this empirical evidence.
4. Building a node = choosing some attribute to divide the training instances into subsets.
Unlike version spaces, decision trees can represent disjunctive (OR) concepts.
Problems: noisy input; attribute values may be unknown; the tree may grow large, making the relationships hard to understand.
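Step 4 above, choosing the attribute that best divides the training instances, is the heart of ID3: pick the attribute with the highest information gain, i.e. the greatest expected reduction in entropy. A minimal sketch follows; the small table of car tuples is invented for illustration, not the book's Figure 17.13 data.

```python
# Entropy-based attribute selection, the node-building step of ID3.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Expected entropy reduction from splitting on attribute index attr."""
    n = len(rows)
    remainder = 0.0
    for v in {r[attr] for r in rows}:
        subset = [lab for r, lab in zip(rows, labels) if r[attr] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Hypothetical training window: (origin, color, type) -> class label.
rows = [('Japan', 'Blue', 'Economy'), ('Japan', 'Green', 'Sports'),
        ('Japan', 'Blue', 'Economy'), ('USA', 'Blue', 'Economy'),
        ('Japan', 'White', 'Economy')]
labels = ['+', '-', '+', '-', '+']

# Build a node by choosing the attribute with maximal information gain.
best = max(range(3), key=lambda a: info_gain(rows, labels, a))
```

With this toy window, splitting on color (index 1) yields the highest gain, so it would label the root node; ID3 then recurses on each value subset.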

26 Decision Tree

27 Explanation-Based Learning
Provides an explanation for each training example; depends on a domain theory (domain knowledge).

28 Formal Learning Theory
Given positive and negative examples, produce an algorithm that will classify future examples correctly, with error probability at most 1/h.
Complexity of learning depends on:
– the error tolerance (h)
– the number of binary features present in the examples (t)
– the size of the rule necessary to make the discrimination (f)

29 Formal Learning Theory
If the number of training examples required is polynomial in h, t, and f, then the concept is learnable: only a manageable number of training examples is needed.
We can also restrict the learner to positive examples only. See Figure 17.22, Concept of elephant, p. 483: elephant = "gray, mammal, large".
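The "polynomial number of examples" criterion can be made concrete with the standard PAC sample bound for a consistent learner, m >= (1/eps)(ln|H| + ln(1/delta)). For conjunctive rules over t binary features, |H| = 3^t (each feature appears positive, negated, or not at all), so the bound is polynomial in t and in the tolerance parameters. This is the standard textbook bound, not a formula from this chapter, so the mapping onto h, t, and f here is an assumption.

```python
# Standard PAC sample-complexity bound for learning conjunctions
# over t binary features with a consistent learner.
from math import ceil, log

def pac_sample_bound(eps, delta, t):
    """Examples sufficient so that, with probability >= 1 - delta, any
    consistent hypothesis has error <= eps: m >= (1/eps)(ln|H| + ln(1/delta)),
    with |H| = 3**t (each feature: positive, negated, or absent)."""
    return ceil((t * log(3) + log(1 / delta)) / eps)

m = pac_sample_bound(0.1, 0.05, 10)  # grows only linearly in t
```

The bound grows linearly in t and logarithmically in 1/delta, so the concept class is learnable in the sense of the slide: the required sample size is polynomial in the problem parameters.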

30 Formal Learning Theory

31 Induction
Induction: a method of reasoning by which one infers a generalization from a series of instances. Inductive syllogisms are of the following form:
1. These beans are from this bag. (and these beans..., and these beans..., etc.)
2. These beans are (all) white.
Therefore, all beans from this bag are white.
In a much broader sense, induction can be taken to include various other forms of reasoning, including inference to a cause from symptoms and the confirmation of laws and theories.
The emphasis is on all beans: all instances.

32 Deduction
Deduction: a method of reasoning by which one infers a conclusion from a set of sentences by employing the axioms and rules of inference of a given logical system. The term 'deduction' is used in a general sense to denote that the conclusion follows necessarily from the premises. Deductive syllogisms in quantificational predicate calculus are of the following form:
1. All beans from this bag are white.
2. These beans are from this bag.
Therefore, these beans are white.
The emphasis is on one bean: one instance.

33 Abduction
Abduction: a method of reasoning by which one infers to the best explanation; a heuristic procedure that reasons from available empirical evidence to the probable hypothesis that would best explain its occurrence. Abductive syllogisms are of the following form:
1. All beans from this bag are white.
2. These beans are white.
Therefore, these beans are (probably) from this bag.
The emphasis is on one bean.

34 The End

