

1 Computational Learning for Classification and Problem Solving
Pat Langley, Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, CA
http://cll.stanford.edu/~langley/
May 28, 2004

2 Definition of a Machine Learning System: a software artifact that improves task performance by acquiring knowledge based on partial task experience.

3 Elements of a Learning System: experience/environment, learning method, knowledge, performance element.

4 Elements of Classification Learning: observed examples, learning mechanism, category descriptions, classification mechanism.
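The four elements above can be sketched in code. This is a minimal illustration, not anything from the slides: it uses a 1-nearest-neighbor learner (a case-based method in the terms of the next slide), and all function and variable names are my own.

```python
# Illustrative sketch of the four elements of classification learning.
# Observed examples -> learning mechanism -> category descriptions
# -> classification mechanism.

def learn(observed_examples):
    """Learning mechanism: here, simply memorize the labeled cases.
    The returned list plays the role of the 'category descriptions'."""
    return list(observed_examples)

def classify(category_descriptions, case):
    """Classification mechanism: label a new case by its nearest stored case."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(category_descriptions,
                          key=lambda ex: distance(ex[0], case))
    return label

# Observed examples: (feature vector, category) pairs.
examples = [((0.0, 0.0), "negative"), ((1.0, 1.0), "positive")]
memory = learn(examples)
print(classify(memory, (0.9, 0.8)))  # nearest stored case is (1,1) -> "positive"
```

A rule-induction or probabilistic learner would differ only in what `learn` produces and how `classify` interprets it; the four-element decomposition stays the same.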

5 Five Paradigms for Classification Learning: Rule Induction, Decision-Tree Induction, Case-Based Learning, Neural Networks, Probabilistic Learning.


14 Category Learning in Humans

Human categorization exhibits clear characteristics:

- graded and imprecise nature of concepts
- categories stored at different levels of generality
- incremental processing of experience
- effects of training order on learned knowledge
- ability to learn from unlabeled cases
- particular rates of category acquisition

Many approaches to computational category learning ignore these phenomena.


16 Learning for Problem Solving: problem-solving experience, learning mechanism, problem-solving knowledge, problem solver.

17 Operators for the Blocks World

Most formulations of the blocks world assume four operators:

(pickup (?x)
  (on ?x ?t) (table ?t) (clear ?x) (arm-empty)
  => ((holding ?x))
     ((on ?x ?t) (clear ?x) (arm-empty)))

(unstack (?x ?y)
  (on ?x ?y) (block ?y) (clear ?x) (arm-empty)
  => ((holding ?x) (clear ?y))
     ((on ?x ?y) (clear ?x) (arm-empty)))

(putdown (?x)
  (holding ?x) (table ?t)
  => ((on ?x ?t) (arm-empty))
     ((holding ?x)))

(stack (?x ?y)
  (holding ?x) (block ?y) (clear ?y) (≠ ?x ?y)
  => ((on ?x ?y) (arm-empty))
     ((holding ?x) (clear ?y)))

Each operator has a name, arguments, preconditions, an add list, and a delete list.
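The add-list/delete-list semantics above can be sketched as follows. This is a hedged illustration, assuming states are sets of ground literals; the tuple encoding and function name are my own, and variable matching is sidestepped by using a ground instance of pickup.

```python
# STRIPS-style operator application: an operator fires only when its
# preconditions all hold, then its delete list is removed from the state
# and its add list inserted.

def apply_operator(state, preconds, adds, deletes):
    if not preconds <= state:
        return None  # preconditions unsatisfied; operator does not apply
    return (state - deletes) | adds

# Ground instance of (pickup A), with block A on table T:
state = {("on", "A", "T"), ("table", "T"), ("clear", "A"), ("arm-empty",)}
result = apply_operator(
    state,
    preconds={("on", "A", "T"), ("table", "T"), ("clear", "A"), ("arm-empty",)},
    adds={("holding", "A")},
    deletes={("on", "A", "T"), ("clear", "A"), ("arm-empty",)},
)
print(result)  # {('holding', 'A'), ('table', 'T')}
```

Note that (table T) survives because it appears on no delete list, which is why the operators above re-state static facts only as preconditions.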

18 State Space for the Blocks World

19 Inducing Search-Control Knowledge

One approach to learning for problem solving induces search-control rules from solution paths.

- Given: A set of (possibly opaque) operators
- Given: A test for achievement of goal states
- Given: Solution paths that lead to goal states
- Acquire: Control knowledge that improves problem solving

This scheme involves recasting the task in terms of supervised concept learning. The resulting control rules reduce the effective branching factor during search.

20 Induction from Solution Paths

For each operator O,
  For each solution path P,
    Mark all cases of operator O on P as positive instances
    Mark all cases of operator O one step off P as negative instances
  Induce rules that cover positive but not negative instances of O.

One can use any supervised concept learning method to this end, including ones that rely on non-logical formalisms. However, many problem-solving domains require induction over relational descriptions.
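The labeling step of this scheme can be sketched directly. The helper names and the toy state/operator labels below are hypothetical; the slide specifies only the labeling policy, not a representation.

```python
# Label operator instances along a solution path: the operator actually
# chosen in each state is a positive instance, and every other operator
# applicable in that state (one step off the path) is a negative instance.

def label_instances(path, applicable):
    """path: list of (state, chosen_op) pairs along one solution.
    applicable(state): operator instances applicable in that state.
    Returns (positives, negatives) as (state, op) pairs."""
    positives, negatives = [], []
    for state, chosen in path:
        for op in applicable(state):
            (positives if op == chosen else negatives).append((state, op))
    return positives, negatives

# Toy example with opaque state names and two applicable operators per state:
path = [("s0", "unstack-A-B"), ("s1", "putdown-A")]
apps = {"s0": ["unstack-A-B", "pickup-C"], "s1": ["putdown-A", "stack-A-C"]}
pos, neg = label_instances(path, lambda s: apps[s])
print(pos)  # [('s0', 'unstack-A-B'), ('s1', 'putdown-A')]
print(neg)  # [('s0', 'pickup-C'), ('s1', 'stack-A-C')]
```

A relational learner such as FOIL would then generalize over the (state, op) pairs for each operator.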

21 Labeled Operators on a Solution Path

22 Search-Control Rules for the Blocks World

Quinlan’s FOIL system induces a number of selection rules:

((holding ?x) (table ?t) (goal (on ?x ?y)) (not (clear ?y))
 => (putdown ?x))
((holding ?x) (table ?t) (goal (on ?y ?x)) (goal (on ?z ?y))
 => (putdown ?x))
((holding ?x) (block ?y) (clear ?y) (≠ ?x ?y) (goal (on ?x ?y)) (on ?y ?z) (goal (on ?y ?z))
 => (stack ?x ?y))
((on ?x ?y) (block ?y) (clear ?x) (arm-empty) (on ?y ?z) (not (goal (on ?y ?z)))
 => (unstack ?x ?y))
((on ?x ?y) (block ?y) (clear ?x) (arm-empty) (not (goal (on ?x ?y)))
 => (unstack ?x ?y))

Note that these rules are sensitive to the description of the goal.
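How such a rule filters operator choices can be sketched for the ground case. This is an assumption-laden simplification: real control rules match variables against relational state and goal descriptions, whereas the sketch below treats a ground rule as a set of positive conditions that must hold and negated conditions that must not.

```python
# A ground selection rule fires when all its positive conditions are in the
# state-plus-goal description and none of its negated conditions are.

def rule_fires(state, pos_conds, neg_conds):
    return pos_conds <= state and not (neg_conds & state)

# Ground version of the first rule above: put down A when the goal is
# (on A B) but B is not yet clear. Literal names here are illustrative.
state = {"holding-A", "table-T", "goal-on-A-B"}
print(rule_fires(state, {"holding-A", "table-T", "goal-on-A-B"}, {"clear-B"}))  # True
```

During search, only operators recommended by some firing rule are expanded, which is how the learned knowledge cuts the effective branching factor.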

23 Learning for Means-Ends Analysis

Some problem-solving systems use means-ends analysis, which:

- selects operators whose preconditions are not yet satisfied;
- divides problems recursively into simpler subproblems.

Similar techniques can learn control knowledge for this paradigm. Means-ends analysis has advantages over state-space search, in that it can transfer knowledge about solved subproblems. Much of the work on learning in planning systems has relied on this approach.
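The recursive subproblem structure of means-ends analysis can be sketched as follows. This is a deliberately naive version over ground operators in a tiny hypothetical domain; it ignores goal interactions (it can clobber an already-achieved goal), which the slides do not address here either.

```python
# Means-ends analysis, minimal form: to achieve a goal literal, find an
# operator whose add list contains it, recursively achieve that operator's
# preconditions (the simpler subproblems), then apply the operator.

# Ground operators as name -> (preconditions, add list, delete list).
OPS = {
    "pickup-A": ({"clear-A", "arm-empty"}, {"holding-A"}, {"clear-A", "arm-empty"}),
    "stack-A-B": ({"holding-A", "clear-B"}, {"on-A-B", "arm-empty"}, {"holding-A", "clear-B"}),
}

def achieve(state, goal, plan):
    if goal in state:
        return state                      # subproblem already solved
    for name, (pre, adds, dels) in OPS.items():
        if goal in adds:                  # operator that reduces the difference
            for p in pre:                 # recurse on each unmet precondition
                state = achieve(state, p, plan)
            state = (state - dels) | adds # then apply the operator
            plan.append(name)
            return state
    raise ValueError(f"no operator achieves {goal}")

plan = []
final = achieve({"clear-A", "clear-B", "arm-empty"}, "on-A-B", plan)
print(plan)  # ['pickup-A', 'stack-A-B']
```

Because each recursive call is a self-contained subproblem, a learner can store and reuse its solution, which is the transfer advantage the slide mentions.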

24 A Means-Ends Problem-Solving Trace

25 Forming Macro-Operators

An alternative approach to learning for problem solving constructs macro-operators from solution paths.

- Given: A set of transparent operators for a domain
- Given: Solution paths that lead to goal states
- Acquire: Sequences of operators that improve problem solving

This scheme involves logically composing the conditions and effects of operators. The resulting macro-operators reduce the effective depth of search required to find a solution.
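The logical composition step can be sketched for two ground operators. This is an assumed simplification (real macro-operator learners compose parameterized operators via unification); the ground literals below are illustrative.

```python
# Compose two STRIPS operators, run in sequence, into one macro-operator.
# Each operator is a (preconditions, add list, delete list) triple of sets.

def compose(op1, op2):
    pre1, add1, del1 = op1
    pre2, add2, del2 = op2
    # Preconditions of op2 not supplied by op1's adds must hold up front.
    pre = pre1 | (pre2 - add1)
    # op2 runs last: its deletes cancel op1's adds, its adds cancel op1's deletes.
    adds = (add1 - del2) | add2
    dels = (del1 - add2) | del2
    return pre, adds, dels

# Ground (unstack A B) followed by (putdown A), with T the table:
unstack_AB = ({"on-A-B", "clear-A", "arm-empty"},
              {"holding-A", "clear-B"},
              {"on-A-B", "clear-A", "arm-empty"})
putdown_A = ({"holding-A"}, {"on-A-T", "arm-empty"}, {"holding-A"})

pre, adds, dels = compose(unstack_AB, putdown_A)
print(adds)  # holding-A cancels out; net adds: clear-B, on-A-T, arm-empty
```

The intermediate literal holding-A appears in neither the macro's preconditions nor its net add list, which is exactly why macro-operators let search skip intermediate states and so reduce its effective depth.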

26 Partitioning a Solution into Macro-Operators

27 Human Learning and Problem Solving

Human learning in problem-solving domains exhibits:

- reduced search with increased experience
- reduced access to intermediate results
- automatization and Einstellung effects
- asymmetric transfer of expertise
- incremental learning at particular rates

Computational methods for learning in problem solving address some but not all of these phenomena.

28 Cognitive Architectures and Learning

Many computational psychological models are cast within some theory of the human cognitive architecture that:

- specifies the infrastructure that holds constant over domains, as opposed to knowledge, which can vary;
- commits to representations and organizations of knowledge in long-term and short-term memory;
- commits to performance processes that operate on these mental structures and learning mechanisms that generate them;
- comes with a programming language for encoding knowledge and constructing intelligent systems.

Most architectures (e.g., ACT, Soar, ICARUS) use rules or similar formalisms and focus on multi-step reasoning or problem solving.

29 Selected References

Billman, D., Fisher, D., Gluck, M., Langley, P., & Pazzani, M. (1990). Computational models of category learning. Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society (pp. 989-996). Cambridge, MA: Lawrence Erlbaum.

Langley, P. (1995). Elements of machine learning. San Francisco: Morgan Kaufmann.

Shavlik, J. W., & Dietterich, T. G. (Eds.). (1990). Readings in machine learning. San Francisco: Morgan Kaufmann.

VanLehn, K. (1989). Problem solving and cognitive skill acquisition. In M. I. Posner (Ed.), Foundations of cognitive science. Cambridge, MA: MIT Press.

30 End of Presentation

