Slide 1
© 2003, G. Tecuci, Learning Agents Laboratory
Learning Agents Laboratory, Computer Science Department, George Mason University
Prof. Gheorghe Tecuci

10. Multistrategy Learning

Slide 2: Overview
- Introduction
- Combining EBL with Version Spaces
- Induction over Unexplained
- Guiding Induction by Domain Theory
- Plausible Justification Trees
- Research Issues
- Basic references

Slide 3: Multistrategy learning
Multistrategy learning is concerned with developing learning agents that synergistically integrate two or more learning strategies, in order to solve learning tasks that are beyond the capabilities of the individual strategies being integrated.

Slide 4: Case Study: Inductive Learning vs. Explanation-based Learning
Complementariness of learning strategies:

                            | Learning from examples | Explanation-based learning | Multistrategy learning
Examples                    | many                   | one                        | several
Knowledge needed            | very little            | complete                   | incomplete
Effect on agent's behavior  | improves competence    | improves efficiency        | improves competence and/or efficiency
Type of inference           | induction              | deduction                  | induction and/or deduction

Slide 5: Multistrategy concept learning
The Learning Problem
- Input: one or more positive and/or negative examples of a concept.
- Background Knowledge (Domain Theory): weak, incomplete, partially incorrect, or complete.
- Goal: learn a concept description characterizing the example(s) and consistent with the background knowledge, by combining several learning strategies.

Slide 6: Multistrategy knowledge base refinement
The Learning Problem: Improve the knowledge base so that the Inference Engine correctly solves (classifies) the training examples.
Similar names: background knowledge / domain theory / knowledge base; knowledge base refinement / theory revision.
[Diagram: Training Examples + (Knowledge Base (DT) + Inference Engine) enter Multistrategy Knowledge Base Refinement, which produces an improved Knowledge Base (DT) + Inference Engine.]

Slide 7: Types of theory errors (in a rule-based system)
An incorrect KB (theory) can be:
- Overly specific
  - Missing rule: e.g., has-handle(x) → graspable(x) is absent.
  - Additional premise: e.g., width(x, small) & insulating(x) & shape(x, round) → graspable(x), where shape(x, round) is an extra premise.
- Overly general
  - Extra rule: e.g., shape(x, round) → graspable(x).
  - Missing premise: e.g., insulating(x) → graspable(x), where the premise width(x, small) is missing.

What is the effect of each error on the system's ability to classify graspable objects, or other objects that need to be graspable, such as cups?
- Overly specific KB: proofs for some positive examples cannot be built (e.g., positive examples that are not round, or that have a handle).
- Overly general KB: proofs for some negative examples can be built (e.g., negative examples that are round, or that are insulating but not small).

How would you call a KB where some positive examples are not explained (classified as positive)?
How would you call a KB where some negative examples are wrongly explained (classified as positive)?
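To make these effects concrete, here is a toy propositional sketch (an illustration of this note, not part of the slide): rules are premise-set/conclusion pairs, names such as width-small propositionalize the slide's predicates, and a one-step matcher checks whether any rule for graspable fires on an object's features.

```python
# A sketch of two of the error types and their effects, using a one-step
# rule matcher over ground feature sets (variables and chaining omitted).

def proves_graspable(rules, features):
    # does some rule for "graspable" fire on this object's features?
    return any(set(premises) <= features
               for premises, head in rules if head == "graspable")

correct = [({"has-handle"}, "graspable"),
           ({"width-small", "insulating"}, "graspable")]
overly_specific = correct[1:]                               # missing rule: has-handle
overly_general = correct + [({"insulating"}, "graspable")]  # missing premise: width-small

cup_with_handle = {"has-handle"}   # positive example
large_insulator = {"insulating"}   # negative example

print(proves_graspable(correct, cup_with_handle))          # True
print(proves_graspable(overly_specific, cup_with_handle))  # False: a positive is no longer provable
print(proves_graspable(overly_general, large_insulator))   # True: a negative becomes provable
```

The overly specific theory fails to prove a positive example; the overly general one proves a negative example, matching the two answers the slide asks for.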

Slide 8: Overview
- Introduction
- Combining EBL with Version Spaces
- Induction over Unexplained
- Guiding Induction by Domain Theory
- Plausible Justification Trees
- Research Issues
- Basic references

Slide 9: EBL-VS: Combining EBL with Version Spaces
1. Apply explanation-based learning to generalize the positive and the negative examples.
2. Replace each example that has been generalized with its generalization.
3. Apply the version space method to the new set of examples.
Exercise: produce an abstract illustration of this algorithm.
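The three steps can be sketched in Python. This is a toy illustration under strong simplifying assumptions, not Hirsh's algorithm: examples are feature dicts, the hypothetical `explain` function stands in for the EBL step (returning the generalization of an example, i.e., the features used in a domain-theory proof, or None when no proof exists), and the version-space step is reduced to computing a single most-specific conjunction and checking it against the negatives.

```python
# A minimal sketch of the EBL-VS pipeline, with hypothetical helpers.

def ebl_vs(positives, negatives, explain):
    # Steps 1-2: replace each explainable example by its EBL generalization
    gen_pos = [explain(e) or e for e in positives]
    gen_neg = [explain(e) or e for e in negatives]
    # Step 3: a tiny stand-in for the version-space method -- the most
    # specific conjunction covering all positives (their shared
    # feature-value pairs), checked for consistency with the negatives
    s = set(gen_pos[0].items())
    for e in gen_pos[1:]:
        s &= set(e.items())
    covers = lambda h, e: h <= set(e.items())
    assert not any(covers(s, e) for e in gen_neg), "version space collapsed"
    return dict(s)

# Hypothetical domain theory: an object is graspable if it is small and
# insulating; the explanation keeps only the features used in the proof.
def explain(e):
    if e.get("width") == "small" and e.get("insulating") == "yes":
        return {"width": "small", "insulating": "yes"}
    return None

pos = [{"width": "small", "insulating": "yes", "color": "red"},
       {"width": "small", "insulating": "yes", "color": "blue"}]
neg = [{"width": "large", "insulating": "no", "color": "red"}]
print(ebl_vs(pos, neg, explain))
```

Because both positives are explained, the irrelevant color feature is dropped before induction, which is exactly why EBL-VS needs fewer examples than a purely empirical version space.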

Slide 10: EBL-VS features
Feature: Learns from positive and negative examples.
Justify this feature, considering several cases, given the method:
1. Apply explanation-based learning to generalize the positive and the negative examples.
2. Replace each example that has been generalized with its generalization.
3. Apply the version space method to the new set of examples.

Slide 11: EBL-VS features
Feature: Can learn with incomplete background knowledge.
Justify this feature, given the method:
1. Apply explanation-based learning to generalize the positive and the negative examples.
2. Replace each example that has been generalized with its generalization.
3. Apply the version space method to the new set of examples.

Slide 12: EBL-VS features
Feature: Can learn with different amounts of knowledge, from knowledge-free to knowledge-rich.
Justify this feature, given the method:
1. Apply explanation-based learning to generalize the positive and the negative examples.
2. Replace each example that has been generalized with its generalization.
3. Apply the version space method to the new set of examples.

Slide 13: EBL-VS features summary and references
- Learns from positive and negative examples
- Can learn with incomplete background knowledge
- Can learn with different amounts of knowledge, from knowledge-free to knowledge-rich

References
Hirsh, H., "Combining Empirical and Analytical Learning with Version Spaces," in Proc. of the Sixth International Workshop on Machine Learning, A.M. Segre (Ed.), Cornell University, Ithaca, New York, June 26-27, 1989.
Hirsh, H., "Incremental Version-Space Merging," in Proceedings of the 7th International Machine Learning Conference, B.W. Porter and R.J. Mooney (Eds.), Austin, TX, 1990.

Slide 14: Overview
- Introduction
- Combining EBL with Version Spaces
- Induction over Unexplained
- Guiding Induction by Domain Theory
- Plausible Justification Trees
- Research Issues
- Basic references

Slide 15: IOU: Induction Over Unexplained
Limitation of EBL-VS: assumes that at least one generalization of an example is correct and complete. (Justify this limitation.)
IOU: the knowledge base may be incomplete but correct:
- the explanation-based generalization of an example may be incomplete;
- the knowledge base may explain negative examples.
IOU learns concepts with both explainable and conventional aspects.

Slide 16: IOU method
1. Apply explanation-based learning to generalize each positive example.
2. Disjunctively combine these generalizations (this is the explanatory component Ce).
3. Disregard the negative examples not satisfying Ce, and remove the features mentioned in Ce from all the examples.
4. Apply empirical inductive learning to determine a generalization of the reduced set of simplified examples (this is the non-explanatory component Cn).
5. The learned concept is Ce & Cn.

Slide 17: IOU: illustration
Positive examples of cups: Cup1, Cup2. Negative examples: Shot-Glass1, Mug1, Can1.
Domain Theory: incomplete; it contains a definition of a generalization of the concept to be learned (e.g., a definition of drinking vessels, but no definition of cups).
Ce = has-flat-bottom(x) & light(x) & up-concave(x) & {[width(x, small) & insulating(x)] ∨ has-handle(x)}
Ce covers Cup1, Cup2, Shot-Glass1, and Mug1, but not Can1.
Cn = volume(x, small)
Cn covers Cup1 and Cup2, but not Shot-Glass1 and Mug1.
C = Ce & Cn

Mooney, R.J. and Ourston, D., "Induction Over Unexplained: Integrated Learning of Concepts with Both Explainable and Conventional Aspects," in Proc. of the Sixth International Workshop on Machine Learning, A.M. Segre (Ed.), Cornell University, Ithaca, New York, June 26-27, 1989.
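The IOU combination on this cup illustration can be sketched as follows. The individual attribute values below are illustrative assumptions (the slide only states Ce, Cn, and which examples each component covers); the Cn induction is a simple most-specific conjunction over the features Ce does not mention.

```python
# A sketch of IOU steps 3-5 on the cup example; attribute values assumed.

CE_FEATURES = {"flat-bottom", "light", "up-concave",
               "width-small", "insulating", "has-handle"}

def Ce(x):  # explanatory component, transcribed from the slide
    return (x["flat-bottom"] and x["light"] and x["up-concave"]
            and ((x["width-small"] and x["insulating"]) or x["has-handle"]))

def iou(positives, negatives):
    # step 3: disregard negatives not satisfying Ce; drop Ce's features
    negs = [n for n in negatives if Ce(n)]
    simplify = lambda x: {k: v for k, v in x.items() if k not in CE_FEATURES}
    # step 4: induce Cn as the feature-value pairs shared by all positives
    cn = set(simplify(positives[0]).items())
    for p in positives[1:]:
        cn &= set(simplify(p).items())
    assert all(not cn <= set(simplify(n).items()) for n in negs)
    # step 5: the learned concept is C = Ce & Cn
    return lambda x: Ce(x) and cn <= set(x.items())

base = {"flat-bottom": True, "light": True, "up-concave": True}
cup1 = {**base, "width-small": False, "insulating": False,
        "has-handle": True, "volume": "small"}
cup2 = {**base, "width-small": True, "insulating": True,
        "has-handle": False, "volume": "small"}
shot_glass1 = {**base, "width-small": True, "insulating": True,
               "has-handle": False, "volume": "tiny"}
mug1 = {**base, "width-small": False, "insulating": False,
        "has-handle": True, "volume": "large"}
can1 = {**base, "up-concave": False, "width-small": True,
        "insulating": False, "has-handle": False, "volume": "small"}

cup = iou([cup1, cup2], [shot_glass1, mug1, can1])
print([cup(x) for x in (cup1, cup2, shot_glass1, mug1, can1)])
```

As on the slide, Can1 is disregarded (Ce rejects it), and the induced Cn is volume = small, which excludes the remaining negatives.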

Slide 18: Overview
- Introduction
- Combining EBL with Version Spaces
- Induction over Unexplained
- Guiding Induction by Domain Theory
- Plausible Justification Trees
- Research Issues
- Basic references

Slide 19: ENIGMA: Guiding Induction by Domain Theory
Limitations of IOU (justify them):
- the knowledge base rules have to be correct;
- the examples have to be noise-free.
ENIGMA:
- the knowledge base rules may be partially incorrect;
- the examples may be noisy.

Slide 20: ENIGMA: method
Trades off the use of knowledge base rules against the coverage of examples:
- Successively specialize the abstract definition D of the concept to be learned by applying KB rules.
- Whenever a specialization of D contains operational predicates, compare it with the examples to identify those covered and those uncovered.
- Decide between performing:
  - a KB-based deductive specialization of D, or
  - an example-based inductive modification of D.
The learned concept is a disjunction of leaves of the specialization tree that was built.
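The deductive-specialization side of this loop can be sketched as rule unfolding: repeatedly replace non-operational predicates by rule bodies until every leaf contains only operational (lower-case) predicates. The rules below propositionalize the cup theory from the illustration that follows (body-support-above stands for the body/support/above conjunction); real ENIGMA interleaves this unfolding with example-driven inductive modifications, which are omitted here.

```python
# A sketch of deductive specialization by rule unfolding (propositional).

RULES = {
    "Cup": [["Liftable", "Stable", "Open-vessel"]],
    "Liftable": [["light", "has-handle"]],
    "Stable": [["has-flat-bottom"], ["body-support-above"]],
    "Open-vessel": [["up-concave"]],
}

def unfold(pred):
    if pred[0].islower():          # operational predicate: a leaf
        return [[pred]]
    expansions = []
    for body in RULES[pred]:       # each rule for pred is one disjunct
        combos = [[]]
        for q in body:             # cross-combine expansions of conjuncts
            combos = [c + e for c in combos for e in unfold(q)]
        expansions.extend(combos)
    return expansions

for leaf in unfold("Cup"):
    print(" & ".join(leaf))
```

Each printed line is one leaf of the specialization tree; the learned concept would be a disjunction of (inductively repaired) leaves, each compared against the examples for coverage.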

Slide 21: ENIGMA: illustration
Examples (4 positive, 4 negative). Positive example 4 (p4):
light(o4) & support(o4, b) & body(o4, a) & above(a, b) & up-concave(o4) → Cup(o4)
Background Knowledge:
Liftable(x) & Stable(x) & Open-vessel(x) → Cup(x)
light(x) & has-handle(x) → Liftable(x)
has-flat-bottom(x) → Stable(x)
body(x, y) & support(x, z) & above(y, z) → Stable(x)
up-concave(x) → Open-vessel(x)
The KB is:
- partly overly specific (explains only p1 and p2);
- partly overly general (explains n3).
Operational predicates start with a lower-case letter.

Slide 22: ENIGMA: illustration (cont.)
[Specialization-tree figure: inductive modifications are made to cover p3 and p4, and to uncover n2 and n3.]
Classification is based only on operational features.

Slide 23: Learned concept
light(x) & has-flat-bottom(x) & has-small-bottom(x) → Cup(x)   (covers p1, p3)
light(x) & body(x, y) & support(x, z) & above(y, z) & up-concave(x) → Cup(x)   (covers p2, p4)

Slide 24: Application
Diagnosis of faults in electro-mechanical devices through an analysis of their vibrations.
209 examples and 6 classes. Typical example: 20 to 60 noisy measurements taken at different points and under different conditions of the device.
A learned rule:
IF   the shaft rotating frequency is w0,
     and the harmonic at w0 has high intensity,
     and the harmonic at 2w0 has high intensity in at least two measurements
THEN the example is an instance of C1 (problems in the joint), C4 (basement distortion), or C5 (unbalance)

Slide 25: Application (cont.)
Comparison between the KB learned by ENIGMA and the hand-coded KB of the expert system MEPS.

Bergadano, F., Giordana, A. and Saitta, L., "Automated Concept Acquisition in Noisy Environments," IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(4).
Bergadano, F., Giordana, A., Saitta, L., De Marchi, D. and Brancadori, F., "Integrated Learning in a Real Domain," in Proceedings of the 7th International Machine Learning Conference, B.W. Porter and R.J. Mooney (Eds.), Austin, TX, 1990.
Bergadano, F. and Giordana, A., "Guiding Induction with Domain Theories," in Machine Learning: An Artificial Intelligence Approach, Volume 3, Y. Kodratoff and R.S. Michalski (Eds.), San Mateo, CA, Morgan Kaufmann, 1990.

Slide 26: Overview
- Introduction
- Combining EBL with Version Spaces
- Induction over Unexplained
- Guiding Induction by Domain Theory
- Plausible Justification Trees
- Research Issues
- Basic references

Slide 27: MTL-JT: Multistrategy Task-adaptive Learning based on Plausible Justification Trees
Deep integration of learning strategies: integration of the elementary inferences that are employed by the single-strategy learning methods (e.g., deduction, analogy, empirical inductive prediction, abduction, deductive generalization, inductive generalization, inductive specialization, analogy-based generalization).
Dynamic integration of learning strategies: the order and the type of the integrated strategies depend on the relationship between the input information, the background knowledge, and the learning goal.
Different types of input (e.g., facts, concept examples, problem solving episodes).
Different types of knowledge pieces in the knowledge base (e.g., facts, examples, implicative relationships, plausible determinations).

Slide 28: MTL-JT: assumptions
- Input: correct (noise-free); one or several examples, facts, or problem solving episodes.
- Knowledge Base: incomplete and/or partially incorrect; may include a variety of knowledge types (facts, examples, implicative or causal relationships, hierarchies, etc.).
- Learning Goal: extend, update, and/or improve the knowledge base so as to integrate the new input information.

Slide 29: Plausible justification tree
A plausible justification tree is like a proof tree, except that some of the individual inference steps are deductive, while others are non-deductive, or only plausible (e.g., analogical, abductive, inductive).

Slide 30: Learning method
For the first positive example I1:
- build a plausible justification tree T of I1;
- build the plausible generalization Tu of T;
- generalize the KB to entail Tu.
For each new positive example Ii:
- generalize Tu so as to cover a plausible justification tree of Ii;
- generalize the KB to entail the new Tu.
For each new negative example Ii:
- specialize Tu so as not to cover any plausible justification of Ii;
- specialize the KB to entail the new Tu without entailing the previous Tu.
Learn different concept definitions:
- extract different concept definitions from the general justification tree Tu.

Slide 31: MTL-JT: illustration from a Geography Knowledge Base
Facts:
terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Examples (of fertile soil):
soil(Greece, red-soil) → soil(Greece, fertile-soil)
terrain(Egypt, flat) & soil(Egypt, red-soil) → soil(Egypt, fertile-soil)
Plausible determination:
rainfall(x, y) >= water-in-soil(x, z)
Deductive rules:
soil(x, loamy) → soil(x, fertile-soil)
climate(x, subtropical) → temperature(x, warm)
climate(x, tropical) → temperature(x, warm)
water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil) → grows(x, rice)

Slide 32: Positive and negative examples of "grows(x, rice)"
Positive Example 1:
rainfall(Thailand, heavy) & climate(Thailand, tropical) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) → grows(Thailand, rice)
Positive Example 2:
rainfall(Pakistan, heavy) & climate(Pakistan, subtropical) & soil(Pakistan, loamy) & terrain(Pakistan, flat) & location(Pakistan, SW-Asia) → grows(Pakistan, rice)
Negative Example 3:
rainfall(Jamaica, heavy) & climate(Jamaica, tropical) & soil(Jamaica, loamy) & terrain(Jamaica, abrupt) & location(Jamaica, Central-America) → ¬grows(Jamaica, rice)

Slide 33: Build a plausible justification of the first example
Example 1:
rainfall(Thailand, heavy) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) & climate(Thailand, tropical) → grows(Thailand, rice)
[justification-tree figure]

Slide 34: Build a plausible justification of the first example
Example 1:
rainfall(Thailand, heavy) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) & climate(Thailand, tropical) → grows(Thailand, rice)
[justification-tree figure]
Justify the inferences from the tree: analogy.
Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Plausible determination: rainfall(x, y) >= water-in-soil(x, z)
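The analogical step can be sketched as follows; `infer_by_analogy` is a hypothetical helper of this note. The determination says that the rainfall of x determines its water-in-soil; since Thailand matches the Philippines on rainfall, the Philippines' water-in-soil value is transferred by analogy.

```python
# A sketch of analogical inference via a determination rule.

facts = {("rainfall", "Philippines"): "heavy",
         ("water-in-soil", "Philippines"): "high",
         ("rainfall", "Thailand"): "heavy"}

def infer_by_analogy(entity, facts,
                     determination=("rainfall", "water-in-soil")):
    ante, cons = determination
    # find a source entity with a known consequent value that matches
    # the target entity on the antecedent of the determination
    for (pred, src), val in facts.items():
        if (pred == ante and src != entity
                and val == facts.get((ante, entity))
                and (cons, src) in facts):
            return facts[(cons, src)]  # plausible, not deductively certain
    return None

print(infer_by_analogy("Thailand", facts))
```

Run on the facts above, this infers "high" for Thailand's water-in-soil, exactly the plausible leaf the justification tree needs.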

Slide 35: Build a plausible justification of the first example
Example 1:
rainfall(Thailand, heavy) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) & climate(Thailand, tropical) → grows(Thailand, rice)
[justification-tree figure]
Justify the inferences from the tree: deduction.
Deductive rules:
soil(x, loamy) → soil(x, fertile-soil)
climate(x, subtropical) → temperature(x, warm)
climate(x, tropical) → temperature(x, warm)
water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil) → grows(x, rice)

Slide 36: Build a plausible justification of the first example
Example 1:
rainfall(Thailand, heavy) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) & climate(Thailand, tropical) → grows(Thailand, rice)
[justification-tree figure]
Justify the inferences from the tree: inductive prediction & abduction.
Examples (of fertile soil):
soil(Greece, red-soil) → soil(Greece, fertile-soil)
terrain(Egypt, flat) & soil(Egypt, red-soil) → soil(Egypt, fertile-soil)

Slide 37: Multitype generalization
[multitype generalization tree figure]

Slide 38: Multitype generalization
Justify the generalizations from the tree: generalization based on analogy.
Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Plausible determination: rainfall(x, y) >= water-in-soil(x, z)

Slide 39: Multitype generalization
Justify the generalizations from the tree: inductive generalization.
Examples (of fertile soil):
soil(Greece, red-soil) → soil(Greece, fertile-soil)
terrain(Egypt, flat) & soil(Egypt, red-soil) → soil(Egypt, fertile-soil)

Slide 40: Build the plausible generalization Tu of T
[plausible generalization tree figure]

Slide 41: Positive example 2
[Figures: the instance of the current Tu corresponding to Example 2, and the plausible justification tree T2 of Example 2.]

Slide 42: Positive example 2
[Figures: the explanation structure S2, and the new Tu.]

Slide 43: Negative example 3
[Figures: the instance of Tu corresponding to Negative Example 3, and the new Tu.]

Slide 44: The plausible generalization tree corresponding to the three input examples
[figure]

Slide 45: Learned knowledge
New facts: water-in-soil(Thailand, high), water-in-soil(Pakistan, high)
Why is it reasonable to consider these facts to be true?

Slide 46: Learned knowledge
Examples (of fertile soil):
soil(Greece, red-soil) → soil(Greece, fertile-soil)
terrain(Egypt, flat) & soil(Egypt, red-soil) → soil(Egypt, fertile-soil)
New plausible rule:
soil(x, red-soil) → soil(x, fertile-soil)
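The induction of this rule from the two examples can be sketched as a simple anti-unification (a simplifying assumption of this note, not the full MTL-JT generalization machinery): variablize the entity, then keep only the premise literals the two examples share, which drops Egypt's extra terrain premise.

```python
# A sketch of inducing soil(x, red-soil) -> soil(x, fertile-soil)
# from the Greece and Egypt premises by anti-unification.

ex1 = {("soil", "Greece", "red-soil")}
ex2 = {("terrain", "Egypt", "flat"), ("soil", "Egypt", "red-soil")}

def generalize(a, b, var="x"):
    # variablize the entity argument in each literal, then intersect
    va = {(p, var, v) for (p, _, v) in a}
    vb = {(p, var, v) for (p, _, v) in b}
    return va & vb  # the shared premises of the generalized rule

print(generalize(ex1, ex2))
```

The surviving literal, soil(x, red-soil), becomes the premise of the new plausible rule; the rule is only plausible because it is justified inductively by two examples, not deductively.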

Slide 47: Learned knowledge
Specialized plausible determination:
rainfall(x, y) & terrain(x, flat) >= water-in-soil(x, z)
Facts: terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Positive Example 1:
rainfall(Thailand, heavy) & climate(Thailand, tropical) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) → grows(Thailand, rice)
Positive Example 2:
rainfall(Pakistan, heavy) & climate(Pakistan, subtropical) & soil(Pakistan, loamy) & terrain(Pakistan, flat) & location(Pakistan, SW-Asia) → grows(Pakistan, rice)
Negative Example 3:
rainfall(Jamaica, heavy) & climate(Jamaica, tropical) & soil(Jamaica, loamy) & terrain(Jamaica, abrupt) & location(Jamaica, Central-America) → ¬grows(Jamaica, rice)

Slide 48: Learned knowledge: concept definitions
Operational definition of "grows(x, rice)":
rainfall(x, heavy) & terrain(x, flat) & [climate(x, tropical) ∨ climate(x, subtropical)] & [soil(x, red-soil) ∨ soil(x, loamy)] → grows(x, rice)
Abstract definition of "grows(x, rice)":
water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil) → grows(x, rice)
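The operational definition can be transcribed directly as a predicate over attribute dicts and checked against the three training examples from the earlier slide (the dicts below restate those examples).

```python
# The operational definition of grows(x, rice), as a Python predicate.

def grows_rice(x):
    return (x["rainfall"] == "heavy" and x["terrain"] == "flat"
            and x["climate"] in ("tropical", "subtropical")
            and x["soil"] in ("red-soil", "loamy"))

thailand = {"rainfall": "heavy", "climate": "tropical",
            "soil": "red-soil", "terrain": "flat", "location": "SE-Asia"}
pakistan = {"rainfall": "heavy", "climate": "subtropical",
            "soil": "loamy", "terrain": "flat", "location": "SW-Asia"}
jamaica = {"rainfall": "heavy", "climate": "tropical",
           "soil": "loamy", "terrain": "abrupt", "location": "Central-America"}

print(grows_rice(thailand), grows_rice(pakistan), grows_rice(jamaica))
# True True False: both positives covered, the negative (abrupt terrain) excluded
```

Note that the definition is operational because it tests only directly observable predicates, whereas the abstract definition is stated in terms of inferred ones (water-in-soil, temperature, fertile-soil).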

Slide 49: Learned knowledge: example abstraction
Abstraction of Example 1:
water-in-soil(Thailand, high) & temperature(Thailand, warm) & soil(Thailand, fertile-soil) → grows(Thailand, rice)

Slide 50: Features of the MTL-JT method and reference
- Is general and extensible
- Integrates dynamically different elementary inferences
- Uses different types of generalizations
- Is able to learn from different types of input
- Is able to learn different types of knowledge
- Exhibits synergistic behavior
- May behave as any of the integrated strategies

Tecuci, G., "An Inference-Based Framework for Multistrategy Learning," in Machine Learning: A Multistrategy Approach, Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.

Slide 51: Features of the MTL-JT method
Justify the following feature: integrates dynamically different elementary inferences.

Slide 52: Features of the MTL-JT method
Justify the following feature: may behave as any of the integrated strategies:
- Explanation-based learning
- Multiple-example explanation-based learning
- Learning by abduction
- Learning by analogy
- Inductive learning from examples
Which strategies should we consider for the presented illustration of MTL-JT?

Slide 53: MTL-JT as explanation-based learning
Assume the KB contained the knowledge:
∀x, rainfall(x, heavy) → water-in-soil(x, high)
∀x, soil(x, red-soil) → soil(x, fertile-soil)

Slide 54: MTL-JT as abductive learning
Assume the KB contained the knowledge:
∀x, rainfall(x, heavy) → water-in-soil(x, high)

Slide 55: MTL-JT as inductive learning from examples

Slide 56: MTL-JT as analogical learning
Let us suppose that the KB contains only the following knowledge related to Example 1:
Facts: rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Determination: rainfall(x, y) --> water-in-soil(x, z)
Then the system can only infer "water-in-soil(Thailand, high)", by analogy with "water-in-soil(Philippines, high)". In this case, the MTL-JT method reduces to analogical learning.

Slide 57: Overview
- Introduction
- Combining EBL with Version Spaces
- Induction over Unexplained
- Guiding Induction by Domain Theory
- Plausible Justification Trees
- Research Issues
- Basic references

Slide 58: Research issues in multistrategy learning
- Comparisons of learning strategies
- New ways of integrating learning strategies
- Synergistic integration of a wide range of learning strategies
- The representation and use of learning goals in multistrategy systems
- Dealing with incomplete or noisy examples
- Evaluation of the certainty of the learned knowledge
- General frameworks for multistrategy learning
- More comprehensive theories of learning
- Investigation of human learning as multistrategy learning
- Integration of multistrategy learning and knowledge acquisition
- Integration of multistrategy learning and problem solving

Slide 59: Exercise
Compare the following learning strategies from the point of view of their input, background knowledge, type of inferences performed, and effect on the system's performance:
- Rote learning
- Inductive learning from examples
- Explanation-based learning
- Abductive learning
- Analogical learning
- Instance-based learning
- Case-based learning

Slide 60: Exercise
Identify general frameworks for multistrategy learning, based on the multistrategy learning methods presented.

Slide 61: Basic references
- Proceedings of the International Conferences on Machine Learning, ICML-87, ..., ICML-04, Morgan Kaufmann, San Mateo.
- Proceedings of the International Workshops on Multistrategy Learning, MSL-91, MSL-93, MSL-96, MSL-98.
- Special Issue on Multistrategy Learning, Machine Learning Journal.
- Special Issue on Multistrategy Learning, Informatica, vol. 17, no. 4.
- Machine Learning: A Multistrategy Approach, Volume IV, Michalski R.S. and Tecuci G. (Eds.), Morgan Kaufmann, San Mateo, 1994.

