Computational Learning for Classification and Problem Solving
Pat Langley
Computational Learning Laboratory
Center for the Study of Language and Information
Stanford University, Stanford, CA
May 28, 2004

Definition of Machine Learning
A machine learning system is a software artifact that improves task performance by acquiring knowledge based on partial task experience.

Elements of a Learning System
• experience / environment
• knowledge
• learning method
• performance element

Elements of Classification Learning
• observed examples
• category descriptions
• learning mechanism
• classification mechanism

Five Paradigms for Classification Learning
• rule induction
• decision-tree induction
• case-based learning
• neural networks
• probabilistic learning

Category Learning in Humans
Human categorization exhibits clear characteristics:
• graded and imprecise nature of concepts
• categories stored at different levels of generality
• incremental processing of experience
• effects of training order on learned knowledge
• ability to learn from unlabeled cases
• particular rates of category acquisition
Many approaches to computational category learning ignore these phenomena.

Learning for Problem Solving
• problem-solving experience
• problem-solving knowledge
• learning mechanism
• problem solver

Operators for the Blocks World
Most formulations of the blocks world assume four operators:

  (pickup (?x)
     (on ?x ?t) (table ?t) (clear ?x) (arm-empty)
     => ((holding ?x))
        ((on ?x ?t) (clear ?x) (arm-empty)))

  (unstack (?x ?y)
     (on ?x ?y) (block ?y) (clear ?x) (arm-empty)
     => ((holding ?x) (clear ?y))
        ((on ?x ?y) (clear ?x) (arm-empty)))

  (putdown (?x)
     (holding ?x) (table ?t)
     => ((on ?x ?t) (arm-empty))
        ((holding ?x)))

  (stack (?x ?y)
     (holding ?x) (block ?y) (clear ?y) (≠ ?x ?y)
     => ((on ?x ?y) (arm-empty))
        ((holding ?x) (clear ?y)))

Each operator has a name, arguments, preconditions, an add list (the first list after the arrow), and a delete list (the second list).
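
To make these definitions concrete, here is a minimal Python sketch, not from the original slides, of applying one ground operator to a state represented as a set of fact tuples; the particular fact encoding is an illustrative assumption.

    # Minimal sketch: applying a ground STRIPS-style operator to a state
    # held as a set of fact tuples.
    def apply_operator(state, preconds, add_list, delete_list):
        # Return the successor state, or None if the preconditions do not hold.
        if not preconds <= state:          # every precondition must be present
            return None
        return (state - delete_list) | add_list

    # Example: pickup(A) from table T, with the operator already instantiated.
    state       = {("on", "A", "T"), ("table", "T"), ("clear", "A"), ("arm-empty",)}
    preconds    = {("on", "A", "T"), ("table", "T"), ("clear", "A"), ("arm-empty",)}
    add_list    = {("holding", "A")}
    delete_list = {("on", "A", "T"), ("clear", "A"), ("arm-empty",)}
    print(apply_operator(state, preconds, add_list, delete_list))
    # -> {('holding', 'A'), ('table', 'T')}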

State Space for the Blocks World

Inducing Search-Control Knowledge
One approach to learning for problem solving induces search-control rules from solution paths.
• Given: A set of (possibly opaque) operators
• Given: A test for achievement of goal states
• Given: Solution paths that lead to goal states
• Acquire: Control knowledge that improves problem solving
This scheme involves recasting the task in terms of supervised concept learning. The resulting control rules reduce the effective branching factor during search.

Induction from Solution Paths
For each operator O:
  For each solution path P:
    Mark all cases of operator O on P as positive instances.
    Mark all cases of operator O one step off P as negative instances.
  Induce rules that cover positive but not negative instances of O.
One can use any supervised concept learning method to this end, including ones that rely on non-logical formalisms. However, many problem-solving domains require induction over relational descriptions.
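
The following Python sketch is an illustration rather than anything from the slides; it shows how solution paths can be turned into labeled training data, assuming each trace records, for every state on the path, the operator actually applied and the other operators that were applicable there.

    # Illustrative sketch: labeling operator applications from solution paths.
    def label_instances(traces):
        positives, negatives = [], []
        for trace in traces:                        # one trace per solution path
            for state, chosen, alternatives in trace:
                positives.append((state, chosen))   # on the solution path
                for op in alternatives:
                    if op != chosen:
                        negatives.append((state, op))   # one step off the path
        return positives, negatives

A rule learner would then be trained separately for each operator on these positive and negative instances.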

Labeled Operators on a Solution Path

Search-Control Rules for the Blocks World
Quinlan's FOIL system induces a number of selection rules:

  ((holding ?x) (table ?t) (goal (on ?x ?y)) (not (clear ?y))
   => (putdown ?x))

  ((holding ?x) (table ?t) (goal (on ?y ?x)) (goal (on ?z ?y))
   => (putdown ?x))

  ((holding ?x) (block ?y) (clear ?y) (≠ ?x ?y)
   (goal (on ?x ?y)) (on ?y ?z) (goal (on ?y ?z))
   => (stack ?x ?y))

  ((on ?x ?y) (block ?y) (clear ?x) (arm-empty)
   (on ?y ?z) (not (goal (on ?y ?z)))
   => (unstack ?x ?y))

  ((on ?x ?y) (block ?y) (clear ?x) (arm-empty)
   (not (goal (on ?x ?y)))
   => (unstack ?x ?y))

Note that these rules are sensitive to the description of the goal.
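
As an illustration of how such a rule is used, the sketch below checks a fully ground control rule against the current state and goals; it is an assumption-laden example rather than FOIL's actual machinery, since learned rules contain variables and require relational matching.

    # Illustrative sketch: testing whether a ground control rule applies.
    # Goal literals are wrapped as ("goal", fact); ("not", fact) conditions
    # require the fact to be absent.
    def rule_applies(conditions, facts):
        for cond in conditions:
            if cond[0] == "not":
                if cond[1] in facts:
                    return False
            elif cond not in facts:
                return False
        return True

    # Ground instance of the last rule above: unstack A from B when (on A B)
    # is not itself a goal.
    facts = {("on", "A", "B"), ("clear", "A"), ("arm-empty",),
             ("goal", ("on", "B", "A"))}
    rule  = [("on", "A", "B"), ("clear", "A"), ("arm-empty",),
             ("not", ("goal", ("on", "A", "B")))]
    if rule_applies(rule, facts):
        print("control rule recommends (unstack A B)")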

Learning for Means-Ends Analysis
Some problem-solving systems use means-ends analysis, which:
• selects operators whose preconditions are not yet satisfied;
• divides problems recursively into simpler subproblems.
Similar techniques can learn control knowledge for this paradigm.
Means-ends analysis has advantages over state-space search, in that it can transfer knowledge about solved subproblems. Much of the work on learning in planning systems has relied on this approach.
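
A minimal Python sketch of the idea follows; it is not from the slides, uses ground operators in the same (preconditions, add list, delete list) format as above, and ignores goal protection and other refinements that practical means-ends systems require.

    # Illustrative sketch: means-ends analysis over ground STRIPS-style operators,
    # each given as (name, preconditions, add list, delete list).
    def means_ends(state, goals, operators, depth=8):
        # Returns (plan, resulting state) that achieves the goals, or None.
        plan = []
        for goal in goals:
            if goal in state:
                continue                                   # difference already removed
            if depth == 0:
                return None
            for name, pre, add, delete in operators:
                if goal not in add:
                    continue                               # operator irrelevant to this goal
                sub = means_ends(state, pre, operators, depth - 1)
                if sub is None or not set(pre) <= sub[1]:
                    continue                               # could not achieve preconditions
                subplan, state = sub
                state = (state - set(delete)) | set(add)   # apply the operator
                plan += subplan + [name]
                break
            else:
                return None                                # no operator achieves this goal
        return plan, state

    # Example: move A from B to the table.
    ops = [("unstack-A-B",
            [("on","A","B"), ("clear","A"), ("arm-empty",)],
            [("holding","A"), ("clear","B")],
            [("on","A","B"), ("clear","A"), ("arm-empty",)]),
           ("putdown-A",
            [("holding","A")],
            [("on","A","T"), ("arm-empty",)],
            [("holding","A")])]
    start = {("on","A","B"), ("on","B","T"), ("clear","A"), ("arm-empty",)}
    print(means_ends(start, [("on","A","T")], ops)[0])   # ['unstack-A-B', 'putdown-A']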

A Means-Ends Problem-Solving Trace

Forming Macro-Operators
An alternative approach to learning for problem solving constructs macro-operators from solution paths.
• Given: A set of transparent operators for a domain
• Given: Solution paths that lead to goal states
• Acquire: Sequences of operators that improve problem solving
This scheme involves logically composing the conditions and effects of operators. The resulting macro-operators reduce the effective depth of search required to find a solution.
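
As a rough illustration, the Python sketch below composes two ground operators in the (preconditions, add list, delete list) format used earlier into a single macro-operator; the general case composes partially instantiated operators and keeps track of variable bindings, which this sketch omits.

    # Illustrative sketch: composing two ground STRIPS-style operators
    # (preconditions, add list, delete list as sets) into one macro-operator.
    def compose(op1, op2):
        pre1, add1, del1 = op1
        pre2, add2, del2 = op2
        if (pre2 - add1) & del1:
            return None                       # op1 destroys a precondition of op2
        pre    = pre1 | (pre2 - add1)         # conditions the initial state must supply
        add    = (add1 - del2) | add2         # net additions of the sequence
        delete = (del1 - add2) | del2         # net deletions of the sequence
        return pre, add, delete

    # Example: unstack(A, B) followed by putdown(A) becomes a macro-operator
    # that moves A from B to the table in one step.
    unstack_A_B = ({("on","A","B"), ("clear","A"), ("arm-empty",)},
                   {("holding","A"), ("clear","B")},
                   {("on","A","B"), ("clear","A"), ("arm-empty",)})
    putdown_A   = ({("holding","A")},
                   {("on","A","T"), ("arm-empty",)},
                   {("holding","A")})
    print(compose(unstack_A_B, putdown_A))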

Partitioning a Solution into Macro-Operators

Human Learning and Problem Solving
Human learning in problem-solving domains exhibits:
• reduced search with increased experience
• reduced access to intermediate results
• automatization and Einstellung effects
• asymmetric transfer of expertise
• incremental learning at particular rates
Computational methods for learning in problem solving address some but not all of these phenomena.

Cognitive Architectures and Learning
Many computational psychological models are cast within some theory of the human cognitive architecture that:
• specifies the infrastructure that holds constant over domains, as opposed to knowledge, which can vary;
• commits to representations and organizations of knowledge in long-term and short-term memory;
• commits to performance processes that operate on these mental structures and learning mechanisms that generate them;
• comes with a programming language for encoding knowledge and constructing intelligent systems.
Most architectures (e.g., ACT, Soar, ICARUS) use rules or similar formalisms and focus on multi-step reasoning or problem solving.

Selected References

Billman, D., Fisher, D., Gluck, M., Langley, P., & Pazzani, M. (1990). Computational models of category learning. Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society (pp. ). Cambridge, MA: Lawrence Erlbaum.

Langley, P. (1995). Elements of machine learning. San Francisco: Morgan Kaufmann.

Shavlik, J. W., & Dietterich, T. G. (Eds.). (1990). Readings in machine learning. San Francisco: Morgan Kaufmann.

VanLehn, K. (1989). Problem solving and cognitive skill acquisition. In M. I. Posner (Ed.), Foundations of cognitive science. Cambridge, MA: MIT Press.

End of Presentation