1 Machine Learning: Lecture 11 Analytical Learning / Explanation-Based Learning (Based on Chapter 11 of Mitchell, T., Machine Learning, 1997)

2 Overview
§ As discussed earlier, inductive learning methods require a certain number of training examples to generalize accurately.
§ Analytical learning stems from the idea that when not enough training examples are provided, it may be possible to "replace" the "missing" examples by prior knowledge and deductive reasoning.
§ Explanation-Based Learning is a particular type of analytical approach which uses prior knowledge to distinguish the relevant features of the training examples from the irrelevant, so that examples can be generalized based on logical rather than statistical reasoning.

3 Intuition about Explanation-Based Learning I
§ Figure 11.1 of [Mitchell, p. 308] shows a positive example of the target concept "chess positions in which black will lose its queen within two moves".
§ Inductive learning could eventually learn this concept, but only from a large number (thousands?) of such examples.
§ However, that is not what human beings do: they learn from a small number of examples, and can even learn quite a lot from the single example in Figure 11.1.

4 Intuition about Explanation-Based Learning II
§ From the single board in Figure 11.1, humans can suggest the general hypothesis "board positions in which the black king and queen are simultaneously attacked". They would not even consider the (equally consistent) hypothesis "board positions in which four white pawns are still in their original positions"!
§ They do so because they rely heavily on explaining, or analyzing, the example in terms of their prior knowledge about the legal moves of chess.
§ Explanation-Based Learning attempts to learn in the same fashion.

5 Analytical Learning: A Definition
§ Given a hypothesis space H, a set of training examples D, and a domain theory B consisting of background knowledge that can be used to explain observed training examples, the desired output of an analytical learner is a hypothesis h from H that is consistent with both the training examples D and the domain theory B.
§ Explanation-Based Learning works by generalizing not from the training examples themselves, but from their explanations.
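
Stated a bit more formally, the task can be written as follows (a paraphrase in standard notation, not a quotation of the textbook; one natural reading of "consistent with B" is that B does not entail the negation of h):

$$\text{Given } H,\; D = \{\langle x_i, f(x_i)\rangle\},\; B, \quad \text{find } h \in H \text{ such that } \bigl(\forall \langle x_i, f(x_i)\rangle \in D\bigr)\; h(x_i) = f(x_i) \quad\text{and}\quad B \nvdash \lnot h .$$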

6 Learning with Perfect Domain Theories: Prolog-EBG

Prolog-EBG(TargetConcept, TrainingExamples, DomainTheory)
§ LearnedRules <-- { }
§ Pos <-- the positive examples from TrainingExamples
§ for each PositiveExample in Pos that is not covered by LearnedRules, do
   1. Explain: Explanation <-- an explanation (proof) in terms of the DomainTheory that PositiveExample satisfies the TargetConcept
   2. Analyze: SufficientConditions <-- the most general set of features of PositiveExample sufficient to satisfy the TargetConcept according to the Explanation
   3. Refine: LearnedRules <-- LearnedRules + NewHornClause, where NewHornClause is of the form: TargetConcept <-- SufficientConditions
§ Return LearnedRules
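
To make the loop concrete, here is a minimal propositional sketch in Python. All names, the toy domain theory, and the example facts are illustrative, not from the textbook; real Prolog-EBG works on first-order Horn clauses and computes the weakest preimage by regression with unification, whereas in this ground (variable-free) setting the sufficient conditions are simply the example facts appearing as leaves of the proof.

DOMAIN_THEORY = [
    # (head, body): head holds if every literal in body holds
    ("PassExam", ["Studied", "Rested"]),
    ("Studied",  ["ReadNotes"]),
    ("Studied",  ["DidExercises"]),
    ("Rested",   ["SleptEightHours"]),
]

def explain(goal, theory, example_facts):
    """Return the example facts used in one proof of `goal`, or None if unprovable."""
    if goal in example_facts:              # operational leaf: a fact observed in the example
        return {goal}
    for head, body in theory:
        if head != goal:
            continue
        leaves = set()
        for subgoal in body:
            sub = explain(subgoal, theory, example_facts)
            if sub is None:
                break                      # this clause fails; try the next one
            leaves |= sub
        else:
            return leaves                  # every subgoal of this clause was proved
    return None

def prolog_ebg(target_concept, positive_examples, theory):
    learned_rules = []                     # list of (target_concept, frozenset of conditions)
    for facts in positive_examples:
        # Skip examples already covered by a previously learned rule.
        if any(conditions <= facts for _, conditions in learned_rules):
            continue
        conditions = explain(target_concept, theory, facts)   # 1. Explain + 2. Analyze
        if conditions is not None:
            learned_rules.append((target_concept, frozenset(conditions)))  # 3. Refine
    return learned_rules

if __name__ == "__main__":
    positives = [
        {"ReadNotes", "SleptEightHours", "WoreLuckySocks"},   # the socks are irrelevant
        {"DidExercises", "SleptEightHours"},
    ]
    for head, conditions in prolog_ebg("PassExam", positives, DOMAIN_THEORY):
        print(head, "<--", " & ".join(sorted(conditions)))

Running the sketch prints one learned Horn clause per uncovered positive example; note that the irrelevant fact WoreLuckySocks never appears in a learned rule, because it plays no part in any explanation.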

7 Summary of Prolog-EBG
§ Prolog-EBG produces justified general hypotheses.
§ The explanation of how the examples satisfy the target concept determines which example attributes are relevant: those mentioned in the explanation.
§ Regressing the target concept to determine its weakest preimage allows deriving more general constraints on the values of the relevant features (see the schematic regression step below).
§ Each learned Horn clause corresponds to a sufficient condition for satisfying the target concept.
§ The generality of the learned Horn clauses depends on the formulation of the domain theory and on the sequence in which the training data are presented.
§ Prolog-EBG implicitly assumes that the domain theory is correct and complete.
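
As a schematic rendering of that regression step (generic notation, not the textbook's exact algorithm): regressing a frontier of literals F through the domain-theory rule head <-- body that was used to prove some literal L in F replaces L by the rule's body, under the substitution that unifies head with L:

$$\text{Regress}(F,\; head \leftarrow body) \;=\; \bigl(F \setminus \{L\}\bigr)\,\theta \;\cup\; body\,\theta, \qquad \theta = \text{unify}(head,\; L).$$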

8 Different Perspectives on Explanation-Based Learning (EBL)
§ EBL as theory-guided generalization of examples: EBL generalizes rationally from examples.
§ EBL as example-guided reformulation of theories: EBL can be viewed as a method for reformulating the domain theory into a more operational form.
§ EBL as "just" restating what the learner already knows: EBL proceeds by reformulating knowledge, and this can itself be an important kind of learning (the difference between knowing how to play chess and knowing how to play chess well, for example!).

9 EBL of Search Control Knowledge
§ Given EBL's restriction to domains with a correct and complete domain theory, an important class of applications is speeding up complex search problems by learning how to control search.
§ Two well-known systems employ EBL in this way: PRODIGY and SOAR.
§ In PRODIGY, the questions that need to be answered during search are: "Which subgoal should be solved next?" and "Which operator should be considered for solving this subgoal?". PRODIGY learns concepts such as "the set of states in which subgoal A should be solved before subgoal B" (a toy illustration follows below).
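
As a toy illustration of how such a learned control concept might be consulted during search (the rule format, the state representation, and every name below are hypothetical, not PRODIGY's actual syntax):

def prefer_a_before_b(state):
    # Learned concept: "the set of states in which subgoal A should be solved
    # before subgoal B" -- here reduced to a conjunction of facts to test.
    return {"holding(block1)", "clear(block2)"} <= state

def choose_next_subgoal(state, pending):
    # Default heuristic ordering, overridden when the learned control rule fires.
    if {"A", "B"} <= pending and prefer_a_before_b(state):
        return "A"
    return sorted(pending)[-1]             # arbitrary default tie-break

print(choose_next_subgoal({"holding(block1)", "clear(block2)"}, {"A", "B"}))  # A (rule fires)
print(choose_next_subgoal({"clear(block2)"}, {"A", "B"}))                     # B (default order)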

10 EBL of Search Control Knowledge
§ SOAR learns by explaining situations in which its current strategy leads to inefficiencies. More generally, SOAR uses a variant of EBL called chunking to extract the general conditions under which the same explanation applies.
§ SOAR has been applied to a great number of problem domains and has also been proposed as a psychologically plausible model of human learning processes.

11 Problems Associated with Applying EBL to Learning Search Control
§ In many cases, the number of control rules that must be learned is very large. As the system learns more and more control rules to improve its search, it must pay a larger and larger cost at each step to match this set of rules against the current search state.
§ In many cases, it is intractable to construct the explanations for the desired target concept.