
Pat Langley Computational Learning Laboratory Center for the Study of Language and Information Stanford University, Stanford, California


1 Machine Learning for Cognitive Networks
Pat Langley, Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, California
http://cll.stanford.edu/~langley/
Thanks to Chris Ramming and Tom Dietterich for discussions that led to many of these ideas.

2 Definition of a Machine Learning System
A machine learning system is a software artifact that improves task performance by acquiring knowledge based on partial task experience.

3 Elements of a Machine Learning System
environment
knowledge
learning algorithm
performance element

4 Five Representational Paradigms
logical rules
neural networks
decision trees
probabilistic formalisms
case libraries

5 Three Formulations of Learning Problems
A more basic decision than the choice of representational framework is whether one formulates the problem as:
Learning for classification and regression;
Learning for action and planning; or
Learning for interpretation and understanding.
These paradigms differ in their performance task, i.e., the manner in which the learned knowledge is utilized.

6 Learning for Classification and Regression
Learned knowledge can be used to classify a new instance or to predict the value of one of its numeric attributes, as in:
Supervised learning – from labeled training cases
Unsupervised learning – from unlabeled training cases
Semi-supervised learning – from partly labeled cases
These are the most basic and best-studied induction tasks, which has led to the development of robust algorithms for them. Such methods have been used in many successful applications, and they form the backbone of commercial data-mining systems.
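The supervised case above can be sketched in a few lines. Here is a minimal, purely illustrative example: a nearest-neighbor classifier that labels a new instance from hypothetical labeled training cases (the feature values and class names are assumptions, not from the talk).

```python
from math import dist

# Hypothetical labeled training cases: each is (feature vector, class label).
train = [
    ((0.2, 0.1), "normal"),
    ((0.3, 0.2), "normal"),
    ((0.9, 0.8), "fault"),
    ((0.8, 0.9), "fault"),
]

def classify(x):
    """Predict the label of instance x from its closest training case (1-NN)."""
    _, label = min(train, key=lambda case: dist(case[0], x))
    return label

print(classify((0.85, 0.85)))  # near the "fault" cases -> "fault"
```

The same scheme handles regression by averaging the numeric target of the nearest cases instead of returning a class label.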

7 Learning for Action and Planning
Learned knowledge can be used to decide which action to execute or which choice to make during problem solving, as in:
Adaptive interfaces – learn from interaction with the user
Behavioral cloning – learn from behavioral traces
Empirical optimization – learn from varying control parameters
Reinforcement learning – learn from delayed reward signals
Learning from problem solving – learn from the results of search
Progress on these formulations is at different stages, with some methods already used in commerce and others needing more basic research.
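To make the reinforcement-learning entry concrete, here is a minimal sketch of tabular Q-learning on a hypothetical four-state chain in which only the final transition yields reward, so the learner must propagate a delayed signal backward. All states, actions, and parameter values are illustrative assumptions.

```python
import random

N_STATES, ACTIONS = 4, (-1, +1)          # move left or right along the chain
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(s, a):
    """Environment: reward 1.0 only for reaching the rightmost state."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                      # episodes
    s = 0
    while s != N_STATES - 1:
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda b: Q[(s, b)]))
        s2, r = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy should prefer moving right in every state.
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)]
print(policy)
```

The delayed reward at the chain's end is gradually propagated to earlier states through the bootstrap term, which is what distinguishes this formulation from supervised learning on labeled actions.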

8 Learning for Understanding
Learned knowledge can be used to interpret, understand, or explain situations or events, as in:
Structured induction – from trainer-explained instances
Constructive induction – from self-explained training cases
Generative induction – learn structures needed for explanation
Parameter estimation – from training cases, given structures
Theory revision – revise structures based on training cases
Research in these frameworks is less mature than in the others, but it holds great potential for combining learning with reasoning.
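The parameter-estimation entry can be sketched simply: given a fixed structure, fit its numeric parameters from training cases. Below, a hypothetical two-variable model (Fault influences Alarm, an assumed structure for illustration) has its conditional probability table estimated by maximum-likelihood counting.

```python
from collections import Counter

# Hypothetical training cases for the assumed structure Fault -> Alarm.
cases = [
    {"fault": True,  "alarm": True},
    {"fault": True,  "alarm": True},
    {"fault": True,  "alarm": False},
    {"fault": False, "alarm": False},
    {"fault": False, "alarm": False},
    {"fault": False, "alarm": True},
]

joint = Counter((c["fault"], c["alarm"]) for c in cases)   # counts of (fault, alarm)
parent = Counter(c["fault"] for c in cases)                # counts of fault alone

def p_alarm_given_fault(fault):
    """Maximum-likelihood estimate of P(alarm=True | fault)."""
    return joint[(fault, True)] / parent[fault]

print(p_alarm_given_fault(True))   # 2/3 from the counts above
print(p_alarm_given_fault(False))  # 1/3 from the counts above
```

Theory revision would go one step further and alter the structure itself when the estimated model fits the training cases poorly.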

9 Comments about Problem Formulations
With respect to the Knowledge Plane, it is important to realize that one can view a given task in different ways. For example, one can formulate diagnostic problems as any of:
Supervised learning from labeled examples of network faults
Unsupervised learning from anomalous network behaviors
Behavioral cloning from traces of network managers' responses
Reinforcement learning from experience with sensing actions
Constructive induction from explanations of network faults
We need measures of progress that focus on networking tasks rather than on specific problem formulations.

10 Challenges in Experimental Evaluation
To evaluate learning methods for the Knowledge Plane, we need:
Dependent measures – related to network management tasks
Independent variables:
  Amount of experience – to determine the rate of learning
  Complexity of task and data – to determine robustness
  System modules and knowledge – to infer sources of power
Data sets and test beds – to support the experimental process
The goal of experimentation is to promote scientific understanding, not to show that one method is better than another.
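The "amount of experience" variable above is typically studied with a learning curve: held-out accuracy measured as training experience grows. Here is a minimal sketch under assumed conditions (a synthetic linearly separable task and a nearest-neighbor learner, both purely illustrative).

```python
import random

random.seed(1)

def sample():
    """Draw a point in [0,1]^2, labeled by which side of x + y = 1 it falls on."""
    x, y = random.random(), random.random()
    return (x, y), (x + y > 1.0)

train = [sample() for _ in range(100)]   # pool of training experience
test = [sample() for _ in range(200)]    # fixed held-out test set

def predict(cases, p):
    """1-nearest-neighbor prediction from the given training cases."""
    _, label = min(cases, key=lambda c: (c[0][0] - p[0])**2 + (c[0][1] - p[1])**2)
    return label

# Learning curve: accuracy as a function of the amount of experience.
for n in (5, 20, 100):
    acc = sum(predict(train[:n], p) == lbl for p, lbl in test) / len(test)
    print(n, round(acc, 2))
```

Varying task complexity (e.g., a noisier or nonlinear boundary) or ablating system modules fits the same experimental template, which is how such curves expose a method's rate of learning and sources of power.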

