Machine Learning, Decision Trees, Overfitting. Machine Learning 10-601, Tom M. Mitchell, Machine Learning Department, Carnegie Mellon University. January 14, 2008. Reading: Mitchell, Chapter 3

Machine Learning. Instructors: William Cohen, Tom Mitchell. TAs: Andrew Arnold, Mary McGlohon. Course assistant: Sharon Cavlovich. See the course webpage for office hours, grading policy, final exam date, late homework policy, syllabus details...

Machine Learning: the study of algorithms that improve their performance P at some task T with experience E. A well-defined learning task is specified by ⟨P, T, E⟩.

Learning to Predict Emergency C-Sections 9714 patient records, each with 215 features [Sims et al., 2000]

Learning to detect objects in images Example training images for each orientation (Prof. H. Schneiderman)

Learning to classify text documents Company home page vs Personal home page vs University home page vs …

Reading a noun (vs verb) [Rustandi et al., 2005]

Machine Learning - Practice Object recognition Mining Databases Speech Recognition Control learning Supervised learning Bayesian networks Hidden Markov models Unsupervised clustering Reinforcement learning.... Text analysis

Machine Learning - Theory. PAC Learning Theory (supervised concept learning) relates: # examples (m), representational complexity (H), error rate (ε), failure probability (δ). Other theories for: reinforcement skill learning, semi-supervised learning, active student querying, … … also relating: # of mistakes during learning, learner's query strategy, convergence rate, asymptotic performance, bias, variance.
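For reference, one standard PAC result that ties these four quantities together (not spelled out on the slide, and stated here under the usual assumptions of a finite hypothesis space H and a learner that outputs a hypothesis consistent with the training data): with probability at least 1 - δ, every consistent hypothesis has true error at most ε, provided the number of i.i.d. training examples satisfies

m ≥ (1/ε) (ln |H| + ln (1/δ)).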

Growth of Machine Learning. Machine learning is already the preferred approach to –Speech recognition, natural language processing –Computer vision –Medical outcomes analysis –Robot control –… This ML niche is growing –Improved machine learning algorithms –Increased data capture, networking –Software too complex to write by hand –New sensors / IO devices –Demand for self-customization to user, environment (figure: ML applications shown as a growing subset of all software applications)

Function Approximation and Decision tree learning

Function approximation Setting: Set of possible instances X Unknown target function f : X → Y Set of function hypotheses H = { h | h : X → Y } Given: Training examples {⟨x(i), y(i)⟩} of unknown target function f Determine: Hypothesis h ∈ H that best approximates f
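A minimal sketch (my own, not from the slides) of this setting as "learning = search over H": enumerate a tiny hypothesis space and keep the hypothesis with the lowest training error. The candidate rules and the toy data below are invented for illustration.

# Training examples: pairs <x, y> where x is (Outlook, Humidity) and y is boolean.
train = [
    (("sunny", "high"), False),
    (("sunny", "normal"), True),
    (("rain", "high"), False),
    (("overcast", "normal"), True),
]

# A tiny hypothesis space H: each h maps an instance x to a boolean prediction.
H = {
    "always_yes": lambda x: True,
    "normal_humidity": lambda x: x[1] == "normal",
    "not_rainy": lambda x: x[0] != "rain",
}

def training_error(h):
    # Fraction of training examples the hypothesis gets wrong (0-1 loss).
    return sum(h(x) != y for x, y in train) / len(train)

# Learning = pick the h in H that best fits the training data.
best_name = min(H, key=lambda name: training_error(H[name]))
print(best_name, training_error(H[best_name]))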

Decision tree representation: Each internal node: test one attribute Xi. Each branch from a node: selects one value for Xi. Each leaf node: predict Y (or P(Y | x ∈ leaf)). How would you represent A ∧ B ∨ C ∧ D(¬E)?
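As an illustration (not part of the slides), here is one way such a tree could be represented in code. The Node class, the helper functions, and the particular tree below, which encodes (A ∧ B) ∨ (C ∧ D ∧ ¬E) over boolean attributes, are all my own assumptions.

from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class Node:
    # An internal node tests one attribute and has one child per attribute value;
    # a leaf node carries a prediction instead.
    attribute: Optional[str] = None                               # attribute tested here
    children: Dict[Any, "Node"] = field(default_factory=dict)     # value -> subtree
    prediction: Optional[Any] = None                              # set only at leaves

def predict(node: Node, x: Dict[str, Any]) -> Any:
    # Walk from the root to a leaf, following the branch selected by x at each node.
    while node.prediction is None:
        node = node.children[x[node.attribute]]
    return node.prediction

def leaf(value):
    return Node(prediction=value)

def cde_subtree() -> Node:
    # Subtree encoding C AND D AND (NOT E).
    e_node = Node(attribute="E", children={True: leaf(False), False: leaf(True)})
    d_node = Node(attribute="D", children={True: e_node, False: leaf(False)})
    return Node(attribute="C", children={True: d_node, False: leaf(False)})

# Tree for (A AND B) OR (C AND D AND NOT E): the C/D/E subtree appears twice,
# once for A=False and once for A=True, B=False.
b_node = Node(attribute="B", children={True: leaf(True), False: cde_subtree()})
root = Node(attribute="A", children={True: b_node, False: cde_subtree()})

print(predict(root, {"A": True, "B": True, "C": False, "D": False, "E": True}))   # True
print(predict(root, {"A": False, "B": False, "C": True, "D": True, "E": False}))  # True
print(predict(root, {"A": False, "B": True, "C": True, "D": True, "E": True}))    # False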

Top-down induction of decision trees [ID3, C4.5, …]: start with node = Root and greedily grow the tree, choosing at each node the best attribute to test (see the sketch below).
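Only the "node = Root" fragment of this slide survives in the transcript, so the following Python sketch is my own reconstruction of the greedy main loop under stated assumptions: examples are (attribute-dict, label) pairs, leaves predict the majority class, and the attribute-scoring function is passed in as a parameter. ID3 uses the information gain defined on the entropy slides that follow; the purity_score below is only a placeholder so the sketch runs on its own.

from collections import Counter
from typing import Any, Callable, Dict, List, Tuple

Example = Tuple[Dict[str, Any], Any]   # (attribute values, class label)

def majority_label(examples: List[Example]) -> Any:
    return Counter(label for _, label in examples).most_common(1)[0][0]

def split(examples: List[Example], attr: str) -> Dict[Any, List[Example]]:
    # Partition the examples by their value of attr.
    parts: Dict[Any, List[Example]] = {}
    for x, y in examples:
        parts.setdefault(x[attr], []).append((x, y))
    return parts

def grow_tree(examples: List[Example], attributes: List[str],
              score: Callable[[List[Example], str], float]):
    # Greedy top-down induction, starting from node = Root:
    # return a class label for a leaf, or (attribute, {value: subtree}) for an internal node.
    labels = {y for _, y in examples}
    if len(labels) == 1:                 # training examples perfectly classified: stop
        return labels.pop()
    if not attributes:                   # no attributes left to test: predict majority class
        return majority_label(examples)
    best = max(attributes, key=lambda a: score(examples, a))   # "best" attribute for this node
    remaining = [a for a in attributes if a != best]
    return (best, {v: grow_tree(part, remaining, score)
                   for v, part in split(examples, best).items()})

def purity_score(examples, attr):
    # Placeholder criterion: fraction of child subsets that are pure.
    parts = split(examples, attr).values()
    return sum(len({y for _, y in p}) == 1 for p in parts) / len(parts)

data = [({"Outlook": "sunny", "Wind": "weak"}, "No"),
        ({"Outlook": "sunny", "Wind": "strong"}, "No"),
        ({"Outlook": "overcast", "Wind": "weak"}, "Yes"),
        ({"Outlook": "rain", "Wind": "weak"}, "Yes"),
        ({"Outlook": "rain", "Wind": "strong"}, "No")]
print(grow_tree(data, ["Outlook", "Wind"], purity_score))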

Entropy. Entropy H(X) of a random variable X: H(X) = -∑_{i=1}^{n} P(X = i) log₂ P(X = i), where n is the # of possible values for X. H(X) is the expected number of bits needed to encode a randomly drawn value of X (under the most efficient code). Why? Information theory: the most efficient code assigns -log₂ P(X = i) bits to encode the message X = i, so the expected number of bits to code one random X is ∑_i P(X = i) · (-log₂ P(X = i)) = H(X).
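A small sketch of this definition in code (my own illustration, not from the slides), estimating H(X) from the empirical frequencies in a sample:

from collections import Counter
from math import log2

def entropy(values):
    # H(X) estimated from a sample: -sum_i p_i * log2(p_i) over the observed values.
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(entropy("HHTT"))   # 1.0 bit for a 50/50 sample
print(entropy("HTTT"))   # about 0.811 bits for a 1:3 split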

Entropy. Entropy H(X) of a random variable X: H(X) = -∑_i P(X = i) log₂ P(X = i). Specific conditional entropy H(X|Y = v) of X given Y = v: H(X|Y = v) = -∑_i P(X = i | Y = v) log₂ P(X = i | Y = v). Conditional entropy H(X|Y) of X given Y: H(X|Y) = ∑_v P(Y = v) H(X|Y = v). Mutual information (aka information gain) of X and Y: I(X, Y) = H(X) - H(X|Y) = H(Y) - H(Y|X).

Sample Entropy
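The worked example on this slide did not survive the transcript; as an illustration with hypothetical counts, for a sample S containing 9 positive and 5 negative examples: Entropy(S) = -(9/14) log₂(9/14) - (5/14) log₂(5/14) ≈ 0.940 bits.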

Information gain over a sample S: Gain(S, A) = Entropy(S) - ∑_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v), where S_v is the subset of S for which A = v. Gain(S, A) = mutual information between A and the target class variable over sample S.
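A sketch of Gain(S, A) in code (my own illustration; the dataset-as-list-of-pairs format and the attribute names are assumptions, and entropy is the same helper sketched on the entropy slide above):

from collections import Counter
from math import log2

def entropy(labels):
    # -sum_i p_i * log2(p_i) over the observed class labels.
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def gain(examples, attr):
    # Gain(S, A) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v).
    labels = [y for _, y in examples]
    total = len(examples)
    remainder = 0.0
    for v in {x[attr] for x, _ in examples}:
        subset = [y for x, y in examples if x[attr] == v]   # labels of S_v
        remainder += (len(subset) / total) * entropy(subset)
    return entropy(labels) - remainder

# Toy sample: Outlook perfectly separates the classes here, Wind tells us nothing.
S = [({"Outlook": "sunny", "Wind": "weak"}, "No"),
     ({"Outlook": "sunny", "Wind": "strong"}, "No"),
     ({"Outlook": "rain", "Wind": "weak"}, "Yes"),
     ({"Outlook": "rain", "Wind": "strong"}, "Yes")]
print(gain(S, "Outlook"), gain(S, "Wind"))

Under the same assumptions, this gain function is exactly the kind of attribute-scoring function that the grow_tree sketch shown earlier expects for its score parameter.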

Decision Tree Learning Applet: http://…/learning/DecisionTrees/Applet/DecisionTreeApplet.html

Which Tree Should We Output? ID3 performs a heuristic search through the space of decision trees. It stops at the smallest acceptable tree. Why? Occam's razor: prefer the simplest hypothesis that fits the data.

Why Prefer Short Hypotheses? (Occam's Razor) Argument in favor: there are fewer short hypotheses than long ones, so a short hypothesis that fits the data is less likely to be a statistical coincidence, whereas it is highly probable that some sufficiently complex hypothesis will fit the data by chance. Argument opposed: there are also fewer hypotheses with a prime number of nodes and attributes beginning with "Z"; what's so special about "short" hypotheses?

Avoiding overfitting (reduced-error pruning): split the data into a training set and a validation set, create a tree that classifies the training set correctly, then prune nodes whose removal improves accuracy on the validation set (a rough sketch follows below).
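The pruning details did not survive the transcript; as a rough illustration of the underlying idea of using a held-out validation set to control tree complexity, the sketch below selects a maximum tree depth rather than post-pruning individual nodes, and it uses scikit-learn and its breast-cancer dataset, neither of which the slides mention.

# Requires scikit-learn; illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Grow trees of increasing complexity on the training set and keep the one
# that does best on the held-out validation set.
best_depth, best_acc = None, 0.0
for depth in range(1, 11):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    acc = tree.score(X_val, y_val)
    if acc > best_acc:
        best_depth, best_acc = depth, acc

print(f"chosen max_depth={best_depth}, validation accuracy={best_acc:.3f}")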

What you should know: Well-posed function approximation problems: –Instance space, X –Sample of labeled training data {⟨x(i), y(i)⟩} –Hypothesis space, H = { f : X → Y } Learning is a search/optimization problem over H –Various objective functions: minimize training error (0-1 loss); or, among hypotheses that minimize training error, select the shortest Decision tree learning –Greedy top-down learning of decision trees (ID3, C4.5,...) –Overfitting and tree/rule post-pruning –Extensions…

Questions to think about (1) Why use Information Gain to select attributes in decision trees? What other criteria seem reasonable, and what are the tradeoffs in making this choice?

Questions to think about (2) ID3 and C4.5 are heuristic algorithms that search through the space of decision trees. Why not just do an exhaustive search?

Questions to think about (3) Consider a target function f : ⟨x1, x2⟩ → y, where x1 and x2 are real-valued and y is boolean. What is the set of decision surfaces describable with decision trees that use each attribute at most once?