CIS 490 / 730: Artificial Intelligence, Lecture 35 of 42. Wednesday, 15 November 2006. Computing & Information Sciences, Kansas State University.

Presentation transcript:

Lecture 35 of 42, Wednesday, 15 November 2006
William H. Hsu, Department of Computing and Information Sciences, KSU
KSOL course page:
Course web site:
Instructor home page:
Reading for Next Class: Section 20.5, Russell & Norvig, 2nd edition
Statistical Learning; Discussion: ANNs and PS7

Lecture Outline
- Today's Reading: Section 20.1, R&N 2e
- Friday's Reading: Section 20.5, R&N 2e
- Machine Learning, Continued: Review
- Finding Hypotheses
  - Version spaces
  - Candidate elimination
- Decision Trees
  - Induction
  - Greedy learning
  - Entropy
- Perceptrons
  - Definitions, representation
  - Limitations

Example Trace
[Figure: candidate-elimination trace showing how the specific and general boundary sets (S0, G0 through S4, G4) are updated as training examples d1 through d4 are processed; S1 = G1 and S2 = G2]

An Unbiased Learner
- Example of a biased H
  - Conjunctive concepts with don't-cares
  - What concepts can H not express? (Hint: what are its syntactic limitations?)
- Idea
  - Choose H' that expresses every teachable concept
  - i.e., H' is the power set of X
  - Recall: | A → B | = | B | ^ | A | (A = X; B = {labels}; H' = A → B)
  - X → labels: {Rainy, Sunny} × {Warm, Cold} × {Normal, High} × {None, Mild, Strong} × {Cool, Warm} × {Same, Change} → {0, 1}
- An exhaustive hypothesis language
  - Consider: H' = disjunctions (∨), conjunctions (∧), negations (¬) over the previous H
  - | H' | = 2^|X| = 2^96; | H | = 1 + (3 · 3 · 3 · 4 · 3 · 3) = 973
- What are S, G for the hypothesis language H'?
  - S ← disjunction of all positive examples
  - G ← conjunction of all negated negative examples
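As a sanity check on these counts, the short Python sketch below (my own illustration, not course code) recomputes |X|, |H'|, and the number of semantically distinct conjunctive hypotheses from the attribute domain sizes listed above; the domain sizes are read directly off the enumeration on this slide.

```python
# Attribute domain sizes from the enumeration above:
# {Rainy,Sunny}, {Warm,Cold}, {Normal,High}, {None,Mild,Strong}, {Cool,Warm}, {Same,Change}
domain_sizes = [2, 2, 2, 3, 2, 2]

num_instances = 1
for size in domain_sizes:
    num_instances *= size                    # |X| = 96

num_unbiased = 2 ** num_instances            # |H'| = 2^96 (power set of X)

# Conjunctive hypotheses: each attribute may take a value or "?" (don't care),
# plus the single hypothesis that classifies everything as negative.
num_conjunctive = 1
for size in domain_sizes:
    num_conjunctive *= size + 1
num_conjunctive += 1                         # |H| = 973

print(num_instances, num_unbiased, num_conjunctive)   # 96 2**96 973
```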

Decision Trees
- Classifiers: instances (unlabeled examples)
- Internal nodes: tests for attribute values
  - Typical: equality test (e.g., "Wind = ?")
  - Inequality and other tests are possible
- Branches: attribute values
  - One-to-one correspondence (e.g., "Wind = Strong", "Wind = Light")
- Leaves: assigned classifications (class labels)
- Representational power: propositional logic (why?)
[Figure: decision tree for the concept PlayTennis, with root test Outlook? (Sunny / Overcast / Rain), internal tests Humidity? (High / Normal) and Wind? (Strong / Light), and leaf labels Yes / No / Maybe]
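To make the representation concrete, here is a small Python sketch (my own rendering, not from the slides) of a decision tree as nested dictionaries plus a classify routine that walks from the root test to a leaf label. The leaf labels follow the fully induced PlayTennis tree shown later in this lecture; the figure on this slide also uses "Maybe" leaves, which the same structure could hold.

```python
# A decision tree encoded as nested dicts: each internal node is
# {"attribute": name, "branches": {value: subtree_or_label}}; leaves are labels.
play_tennis_tree = {
    "attribute": "Outlook",
    "branches": {
        "Sunny": {"attribute": "Humidity",
                  "branches": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain": {"attribute": "Wind",
                 "branches": {"Strong": "No", "Light": "Yes"}},
    },
}

def classify(tree, instance):
    """Walk the tree from root to leaf, following the branch whose
    value matches the instance's attribute value."""
    while isinstance(tree, dict):
        value = instance[tree["attribute"]]
        tree = tree["branches"][value]
    return tree

print(classify(play_tennis_tree,
               {"Outlook": "Sunny", "Humidity": "Normal", "Wind": "Strong"}))  # Yes
```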

Example: Decision Tree to Predict C-Section Risk
- Learned from medical records of 1000 women; negative examples are Cesarean sections
- Prior distribution: [833+, 167-]  0.83+, 0.17-
  - Fetal-Presentation = 1: [822+, 116-]  0.88+, 0.12-
    - Previous-C-Section = 0: [767+, 81-]  0.90+, 0.10-
      - Primiparous = 0: [399+, 13-]  0.97+, 0.03-
      - Primiparous = 1: [368+, 68-]  0.84+, 0.16-
        - Fetal-Distress = 0: [334+, 47-]  0.88+, 0.12-
          - Birth-Weight ≥ …
          - Birth-Weight < …
        - Fetal-Distress = 1: [34+, 21-]  0.62+, 0.38-
    - Previous-C-Section = 1: [55+, 35-]  0.61+, 0.39-
  - Fetal-Presentation = 2: [3+, 29-]  0.11+, 0.89-
  - Fetal-Presentation = 3: [8+, 22-]  0.27+, 0.73-

Decision Tree Learning: Top-Down Induction (ID3)
[Figure: two candidate splits of the same sample [29+, 35-]: attribute A1 into [21+, 5-] and [8+, 30-] (True/False), versus attribute A2 into [18+, 33-] and [11+, 2-]]

Algorithm Build-DT (Examples, Attributes)
  IF all examples have the same label THEN RETURN (leaf node with label)
  ELSE IF set of attributes is empty THEN RETURN (leaf with majority label)
  ELSE
    Choose best attribute A as root
    FOR each value v of A
      Create a branch out of the root for the condition A = v
      IF {x ∈ Examples: x.A = v} = Ø THEN RETURN (leaf with majority label)
      ELSE Build-DT ({x ∈ Examples: x.A = v}, Attributes ~ {A})

But which attribute is best?
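The Python sketch below (my own rendering, not the course's code) mirrors Build-DT directly; the parameter `choose_best_attribute` is a placeholder I introduce for the selection heuristic developed on the following slides (information gain).

```python
from collections import Counter

def majority_label(examples):
    """Most common label among (attributes_dict, label) pairs."""
    return Counter(label for _, label in examples).most_common(1)[0][0]

def build_dt(examples, attributes, choose_best_attribute):
    """Top-down induction of a decision tree (Build-DT / ID3 skeleton).
    examples: list of (dict of attribute -> value, label) pairs."""
    labels = {label for _, label in examples}
    if len(labels) == 1:                      # all examples share one label
        return labels.pop()
    if not attributes:                        # no attributes left to test
        return majority_label(examples)
    a = choose_best_attribute(examples, attributes)
    tree = {"attribute": a, "branches": {}}
    # The pseudocode also creates branches for unseen values of A (majority label);
    # for brevity this sketch only branches on values observed in the examples.
    for v in {x[a] for x, _ in examples}:
        subset = [(x, label) for x, label in examples if x[a] == v]
        tree["branches"][v] = build_dt(subset,
                                       [b for b in attributes if b != a],
                                       choose_best_attribute)
    return tree
```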

Choosing the "Best" Root Attribute
- Objective
  - Construct a decision tree that is as small as possible (Occam's Razor)
  - Subject to: consistency with labels on training data
- Obstacles
  - Finding the minimal consistent hypothesis (i.e., decision tree) is NP-hard (D'oh!)
  - The recursive algorithm (Build-DT) is a greedy heuristic search for a simple tree
  - Cannot guarantee optimality (D'oh!)
- Main decision: next attribute to condition on
  - Want: attributes that split the examples into sets that are relatively pure in one label
  - Result: closer to a leaf node
  - Most popular heuristic: developed by J. R. Quinlan, based on information gain, used in the ID3 algorithm

Entropy: Intuitive Notion
- A measure of uncertainty
  - The quantity
    - Purity: how close a set of instances is to having just one label
    - Impurity (disorder): how close it is to total uncertainty over labels
  - The measure: entropy
    - Directly proportional to impurity, uncertainty, irregularity, surprise
    - Inversely proportional to purity, certainty, regularity, redundancy
- Example
  - For simplicity, assume the label set is {0, 1}, distributed according to Pr(y)
    - Can have more than 2 discrete class labels
    - Continuous random variables: differential entropy
  - Optimal purity for y: either Pr(y = 0) = 1, Pr(y = 1) = 0, or Pr(y = 1) = 1, Pr(y = 0) = 0
  - What is the least pure probability distribution?
    - Pr(y = 0) = 0.5, Pr(y = 1) = 0.5
    - Corresponds to maximum impurity/uncertainty/irregularity/surprise
  - Property of entropy: concave function ("concave downward")
[Figure: plot of H(p) = Entropy(p) against p+ = Pr(y = +), peaking at 1.0 when p+ = 0.5]

Entropy: Information-Theoretic Definition
- Components
  - D: a set of examples {<x1, c(x1)>, <x2, c(x2)>, …, <xm, c(xm)>}
  - p+ = Pr(c(x) = +), p- = Pr(c(x) = -)
- Definition
  - H is defined over a probability density function p
  - D contains examples whose frequency of + and - labels indicates p+ and p- for the observed data
  - The entropy of D relative to c is: H(D) ≡ -p+ · log_b(p+) - p- · log_b(p-)
- What units is H measured in?
  - Depends on the base b of the log (bits for b = 2, nats for b = e, etc.)
  - A single bit is required to encode each example in the worst case (p+ = 0.5)
  - If there is less uncertainty (e.g., p+ = 0.8), we can use less than 1 bit each
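A minimal Python sketch of this two-class entropy (my own helper, not course code), using base-2 logarithms so the result is in bits:

```python
import math

def entropy(num_pos, num_neg):
    """Two-class entropy H(D) = -p+ log2 p+ - p- log2 p-, in bits.
    By convention 0 * log2(0) = 0, so pure sets have zero entropy."""
    total = num_pos + num_neg
    h = 0.0
    for count in (num_pos, num_neg):
        if count > 0:
            p = count / total
            h -= p * math.log2(p)
    return h

print(round(entropy(9, 5), 2))   # 0.94 (the PlayTennis prior, [9+, 5-])
print(entropy(7, 7))             # 1.0  (worst case, p+ = 0.5)
print(entropy(14, 0))            # 0.0  (pure set)
```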

Information Gain: Information-Theoretic Definition
- Partitioning on attribute values
  - Recall: a partition of D is a collection of disjoint subsets whose union is D
  - Goal: measure the uncertainty removed by splitting on the value of attribute A
- Definition
  - The information gain of D relative to attribute A is the expected reduction in entropy due to splitting ("sorting") on A:
    Gain(D, A) ≡ H(D) - Σ_{v ∈ values(A)} (|Dv| / |D|) · H(Dv)
    where Dv is {x ∈ D: x.A = v}, the set of examples in D where attribute A has value v
  - Idea: partition on A; scale the entropy of each subset Dv by its relative size
- Which attribute is best?
[Figure: the same two candidate splits of [29+, 35-]: A1 into [21+, 5-] and [8+, 30-], versus A2 into [18+, 33-] and [11+, 2-]]
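Continuing the sketch, here is an information-gain helper (again my own illustration) that operates on lists of (attributes, label) pairs, together with the ID3-style selection rule that could be plugged into the Build-DT sketch above as `choose_best_attribute`:

```python
import math
from collections import Counter

def entropy_of(examples):
    """Entropy (in bits) of the label distribution of (attributes, label) pairs."""
    counts = Counter(label for _, label in examples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute):
    """Gain(D, A) = H(D) - sum over values v of (|Dv| / |D|) * H(Dv)."""
    total = len(examples)
    remainder = 0.0
    for v in {x[attribute] for x, _ in examples}:
        subset = [(x, label) for x, label in examples if x[attribute] == v]
        remainder += (len(subset) / total) * entropy_of(subset)
    return entropy_of(examples) - remainder

def choose_best_attribute(examples, attributes):
    """ID3's selection rule: pick the attribute with maximum information gain."""
    return max(attributes, key=lambda a: information_gain(examples, a))
```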

Constructing a Decision Tree for PlayTennis Using ID3 [1]
- Selecting the root attribute
  - Prior (unconditioned) distribution: [9+, 5-]
  - H(D) = -(9/14) lg (9/14) - (5/14) lg (5/14) = 0.94 bits
  - H(D, Humidity = High) = -(3/7) lg (3/7) - (4/7) lg (4/7) ≈ 0.985 bits
  - H(D, Humidity = Normal) = -(6/7) lg (6/7) - (1/7) lg (1/7) ≈ 0.592 bits
  - Gain(D, Humidity) = 0.94 - (7/14) * 0.985 - (7/14) * 0.592 ≈ 0.151 bits
  - Similarly, Gain(D, Wind) = 0.94 - (8/14) * 0.811 - (6/14) * 1.0 ≈ 0.048 bits
[Figure: candidate splits of [9+, 5-]: Humidity into High [3+, 4-] and Normal [6+, 1-]; Wind into Light [6+, 2-] and Strong [3+, 3-]]
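These figures can be reproduced from the counts in the figure with a few lines of Python (illustrative only; the entropy helper is restated here so the snippet stands alone):

```python
import math

def H(pos, neg):
    """Two-class entropy in bits."""
    total = pos + neg
    return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c)

prior = H(9, 5)                                              # 0.94 bits
gain_humidity = prior - (7/14) * H(3, 4) - (7/14) * H(6, 1)  # ~0.151 bits
gain_wind     = prior - (8/14) * H(6, 2) - (6/14) * H(3, 3)  # ~0.048 bits
print(round(gain_humidity, 2), round(gain_wind, 2))          # 0.15 0.05
```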

Constructing a Decision Tree for PlayTennis Using ID3 [2]
- Outlook?  (examples 1-14, [9+, 5-])
  - Sunny  (1, 2, 8, 9, 11  [2+, 3-]): Humidity?
    - High  (1, 2, 8  [0+, 3-]): No
    - Normal  (9, 11  [2+, 0-]): Yes
  - Overcast  (3, 7, 12, 13  [4+, 0-]): Yes
  - Rain  (4, 5, 6, 10, 14  [3+, 2-]): Wind?
    - Strong  (6, 14  [0+, 2-]): No
    - Light  (4, 5, 10  [3+, 0-]): Yes

Decision Tree Overview
- Heuristic search and inductive bias
- Decision trees (DTs)
  - Can be boolean (c(x) ∈ {+, -}) or range over multiple classes
  - When to use DT-based models
- Generic algorithm Build-DT: top-down induction
  - Calculating the best attribute upon which to split
  - Recursive partitioning
- Entropy and information gain
  - Goal: measure the uncertainty removed by splitting on a candidate attribute A
  - Calculating information gain (change in entropy)
  - Using information gain in construction of the tree
  - ID3 ≡ Build-DT using Gain()
- ID3 as hypothesis space search (in the state space of decision trees)
- Next: artificial neural networks (multilayer perceptrons and backprop)
- Tools to try: WEKA, MLC++

Perceptron Convergence
- Perceptron Convergence Theorem
  - Claim: if there exists a set of weights consistent with the data (i.e., the data is linearly separable), the perceptron learning algorithm will converge
  - Proof: well-founded ordering on the search region ("wedge width" is strictly decreasing); see Minsky and Papert
  - Caveat 1: how long will this take?
  - Caveat 2: what happens if the data is not LS?
- Perceptron Cycling Theorem
  - Claim: if the training data is not LS, the perceptron learning algorithm will eventually repeat the same set of weights and thereby enter an infinite loop
  - Proof: bound on the number of weight changes until repetition; induction on n, the dimension of the training example vector (Minsky and Papert)
- How to provide more robustness and expressivity?
  - Objective 1: develop an algorithm that finds the closest approximation (today)
  - Objective 2: develop an architecture to overcome the representational limitation (next lecture)
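For reference, here is a minimal sketch of the perceptron training rule whose convergence the theorem describes (my own illustration; the rule itself comes from the preceding lecture). The outer loop stops when an epoch produces no mistakes, which only happens when the data are linearly separable; the `max_epochs` cap is an assumption added so the sketch terminates on non-LS data.

```python
def train_perceptron(examples, learning_rate=0.1, max_epochs=1000):
    """Perceptron training rule on (x, t) pairs, where x is a tuple of inputs
    and t is the target label in {0, 1}."""
    n = len(examples[0][0])
    w = [0.0] * (n + 1)                      # w[0] is the bias weight
    for _ in range(max_epochs):
        mistakes = 0
        for x, t in examples:
            xb = (1.0,) + tuple(x)           # prepend constant input for the bias
            o = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            if o != t:                       # update weights only on mistakes
                mistakes += 1
                for i in range(n + 1):
                    w[i] += learning_rate * (t - o) * xb[i]
        if mistakes == 0:                    # converged: consistent with all data
            break
    return w

# Example: learning the boolean OR function (linearly separable)
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(or_data))
```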

Gradient Descent: Principle
- Understanding gradient descent for linear units
  - Consider a simpler, unthresholded linear unit: o(x) = w · x = Σ_i w_i x_i
  - Objective: find the "best fit" to D
- Approximation algorithm
  - Quantitative objective: minimize error over the training data set D
  - Error function: sum squared error (SSE), E(w) ≡ (1/2) Σ_{x ∈ D} (t(x) - o(x))²
- How to minimize?
  - Simple optimization
  - Move in the direction of the steepest gradient in weight-error space
  - Computed by finding the tangent, i.e., the partial derivatives of E with respect to the weights w_i

Gradient Descent: Derivation of Delta/LMS (Widrow-Hoff) Rule
- Definition: gradient
  - ∇E(w) ≡ [∂E/∂w_0, ∂E/∂w_1, …, ∂E/∂w_n]
- Gradient descent training rule
  - Δw = -r ∇E(w), i.e., Δw_i = -r ∂E/∂w_i
  - ∂E/∂w_i = ∂/∂w_i [(1/2) Σ_{x ∈ D} (t(x) - o(x))²] = Σ_{x ∈ D} (t(x) - o(x)) (-x_i)
  - Therefore Δw_i = r Σ_{x ∈ D} (t(x) - o(x)) x_i  (the delta/LMS rule)
- Modified (incremental) gradient descent training rule
  - Update after each example instead of after the whole batch: Δw_i = r (t - o) x_i

Gradient Descent: Algorithm Using the Delta/LMS Rule
- Each training example is a pair <x, t(x)>, where x is the vector of input values and t(x) is the target output value; r is the learning rate (e.g., 0.05)

Algorithm Gradient-Descent (D, r)
  Initialize all weights w_i to (small) random values
  UNTIL the termination condition is met, DO
    Initialize each Δw_i to zero
    FOR each <x, t(x)> in D, DO
      Input the instance x to the unit and compute the output o
      FOR each linear unit weight w_i, DO
        Δw_i ← Δw_i + r (t - o) x_i
    FOR each linear unit weight w_i, DO
      w_i ← w_i + Δw_i
  RETURN final w

Mechanics of the Delta Rule
- The gradient is based on a derivative
- Significance: later, we will use nonlinear activation functions (aka transfer functions, squashing functions)
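A runnable Python rendering of this batch delta/LMS procedure (a sketch under the same assumptions; the fixed epoch count stands in for the unspecified termination condition, and the example data set is my own):

```python
import random

def gradient_descent(examples, r=0.05, epochs=1000):
    """Batch delta/LMS rule for an unthresholded linear unit o = w . x.
    examples: list of (x, t) pairs, where x is a tuple of input values
    (a leading 1.0 can serve as the bias input) and t is the target."""
    n = len(examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n)]   # small random weights
    for _ in range(epochs):                               # termination condition
        delta_w = [0.0] * n
        for x, t in examples:
            o = sum(wi * xi for wi, xi in zip(w, x))      # linear unit output
            for i in range(n):
                delta_w[i] += r * (t - o) * x[i]          # accumulate Delta w_i
        for i in range(n):
            w[i] += delta_w[i]                            # batch weight update
    return w

# Fit t = 2*x1 - 1 from a few points (first component is the bias input 1.0)
data = [((1.0, 0.0), -1.0), ((1.0, 1.0), 1.0), ((1.0, 2.0), 3.0)]
print([round(wi, 2) for wi in gradient_descent(data)])    # approx [-1.0, 2.0]
```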

Gradient Descent: Perceptron Rule versus Delta/LMS Rule
- LS concepts: can achieve perfect classification
  - Example A: perceptron training rule converges
- Non-LS concepts: can only approximate
  - Example B: not LS; delta rule converges, but can't do better than 3 correct
  - Example C: not LS; better results from delta rule
- Weight vector w = sum of misclassified x ∈ D
  - Perceptron: minimize w
  - Delta rule: minimize error ≡ distance from separator
[Figure: three 2-D data sets over (x1, x2): Example A (linearly separable), Example B (not LS), Example C (not LS)]

Winnow Algorithm
Algorithm Train-Winnow (D)
  Initialize: θ = n, w_i = 1
  UNTIL the termination condition is met, DO
    FOR each <x, t(x)> in D, DO
      1. CASE 1: no mistake: do nothing
      2. CASE 2: t(x) = 1 but w · x < θ: set w_i ← 2 w_i for each i with x_i = 1 (promotion/strengthening)
      3. CASE 3: t(x) = 0 but w · x ≥ θ: set w_i ← w_i / 2 for each i with x_i = 1 (demotion/weakening)
  RETURN final w

- The Winnow algorithm learns linear threshold (LT) functions
- Converting to disjunction learning
  - Replace demotion with elimination, i.e., change weight values to 0 instead of halving
  - Why does this work?
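A compact Python sketch of Train-Winnow as described above (illustrative only; a fixed number of passes replaces the unspecified termination condition), followed by a small usage example on a disjunction I made up for the demonstration:

```python
def train_winnow(examples, passes=10):
    """Winnow on boolean examples (x, t): x is a tuple of 0/1 inputs, t in {0, 1}.
    Threshold theta = n; all weights start at 1; updates happen only on mistakes."""
    n = len(examples[0][0])
    theta = n
    w = [1.0] * n
    for _ in range(passes):
        for x, t in examples:
            prediction = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if prediction == t:
                continue                                     # Case 1: no mistake
            if t == 1:                                       # Case 2: promotion
                w = [2 * wi if xi == 1 else wi for wi, xi in zip(w, x)]
            else:                                            # Case 3: demotion
                w = [wi / 2 if xi == 1 else wi for wi, xi in zip(w, x)]
    return w, theta

# Target concept: x1 OR x4 over 6 boolean variables (hypothetical data)
examples = [((1, 0, 0, 0, 1, 0), 1), ((0, 1, 1, 0, 0, 1), 0),
            ((0, 0, 0, 1, 0, 0), 1), ((0, 1, 0, 0, 1, 1), 0)]
w, theta = train_winnow(examples)
print(w, theta)   # weights for the relevant variables x1 and x4 end up largest
```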

Winnow: An Example
- Target: t(x) ≡ c(x) = x1 ∨ x2 ∨ x1023 ∨ x1024
- Initialize: θ = n = 1024, w = (1, 1, 1, …, 1)
- Trace
  - w · x ≥ θ, w = (1, 1, 1, …, 1)  OK
  - w · x < θ, w = (1, 1, 1, …, 1)  OK
  - w · x < θ, w = (2, 1, 1, …, 1)  mistake
  - w · x < θ, w = (4, 1, 2, 2, …, 1)  mistake
  - w · x < θ, w = (8, 1, 4, 2, …, 2)  mistake
  - …  w = (512, 1, 256, 256, …, 256)  (promotions for each good variable)
  - w · x ≥ θ, w = (512, 1, 256, 256, …, 256)  OK
  - w · x ≥ θ, w = (512, 1, 0, 256, 0, 0, 0, …, 256)  mistake (last example: elimination rule, bit mask)
- Final hypothesis: w = (1024, 1024, 0, 0, 0, 1, 32, …, 1024, 1024)

NeuroSolutions and SNNS
- NeuroSolutions 5.0 specifications
  - Commercial ANN simulation environment for Windows NT
  - Supports multiple ANN architectures and training algorithms (temporal, modular)
  - Produces embedded systems
  - Extensive data handling and visualization capabilities
  - Fully modular (object-oriented) design
  - Code generation and dynamic link library (DLL) facilities
  - Benefits
    - Portability, parallelism: code tuning; fast offline learning
    - Dynamic linking: extensibility for research and development
- Stuttgart Neural Network Simulator (SNNS) specifications
  - Open-source ANN simulation environment for Linux
  - Supports multiple ANN architectures and training algorithms
  - Very extensive visualization facilities
  - Similar portability and parallelization benefits