Naïve Bayes: discussion


Naïve Bayes: discussion
- Naïve Bayes works surprisingly well, even when the independence assumption is clearly violated.
- Why? Because classification doesn't require accurate probability estimates, only that the maximum probability is assigned to the correct class.
- However: adding too many redundant attributes will cause problems (extreme case: identical attributes).
- Note also: many numeric attributes are not normally distributed, so the usual Gaussian estimate can be a poor fit.

Decision trees
- Another statistical, supervised learning technique.
- Builds a model from training data; the model is a decision tree.
- Internal nodes query the attributes.
- Leaf nodes announce the class of the instance.

Constructing decision trees
- Strategy: top down, in recursive divide-and-conquer fashion.
- First: select an attribute for the root node and create a branch for each possible attribute value.
- Then: split the instances into subsets, one for each branch extending from the node.
- Finally: repeat recursively for each branch, using only the instances that reach that branch (a sketch follows this list).
- Stop if all instances have the same class.
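To make the recursion concrete, here is a minimal Python sketch of this divide-and-conquer strategy. The names (`build_tree`, `select_attribute`) are illustrative, not from any library; a gain-based attribute selector is sketched later in this section.

```python
from collections import Counter

def build_tree(instances, attributes, select_attribute):
    """instances: list of (attribute_dict, class_label) pairs."""
    classes = [label for _, label in instances]
    if len(set(classes)) == 1:          # all instances have the same class
        return classes[0]
    if not attributes:                  # nothing left to split on: majority class
        return Counter(classes).most_common(1)[0][0]
    attr = select_attribute(instances, attributes)   # e.g. highest info gain
    branches = {}
    for value in {x[attr] for x, _ in instances}:    # one branch per value
        subset = [(x, y) for x, y in instances if x[attr] == value]
        branches[value] = build_tree(subset,
                                     [a for a in attributes if a != attr],
                                     select_attribute)
    return (attr, branches)
```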

Weather Data - revisited
The slide shows the 14-instance weather data (attributes Outlook, Temp, Humidity, Windy; class Play) together with the 1R rules for each attribute and their errors:

Attribute   Rules            Errors   Total errors
Outlook     Sunny → No       2/5      4/14
            Overcast → Yes   0/4
            Rainy → Yes      2/5
Temp        Hot → No*        2/4      5/14
            Mild → Yes       2/6
            Cool → Yes       1/4
Humidity    High → No        3/7      4/14
            Normal → Yes     1/7
Windy       False → Yes      2/8      5/14
            True → No*       3/6

(* indicates a random choice between equally likely outcomes.)

Which attribute to select?

Criterion for attribute selection
- Which is the best attribute? We want to get the smallest tree.
- Heuristic: choose the attribute that produces the "purest" nodes.
- A popular impurity criterion is information gain: it increases with the average purity of the subsets.
- Strategy: choose the attribute that gives the greatest information gain.

Computing information
- Information is measured in bits.
- Given a probability distribution, the information required to predict an event is the distribution's entropy.
- Entropy gives the required information in bits (this can involve fractions of bits!).
- Formula for computing the entropy: $\text{entropy}(p_1, \ldots, p_n) = -p_1 \log_2 p_1 - p_2 \log_2 p_2 - \cdots - p_n \log_2 p_n$
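As a quick illustration, a counts-based entropy helper in Python (a minimal sketch; the function name is ours, not from any library):

```python
import math

def entropy(counts):
    """Entropy in bits of a class distribution given as counts, e.g. [9, 5]."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

print(entropy([9, 5]))   # info([9,5]) for the weather data: ~0.940 bits
print(entropy([2, 3]))   # ~0.971 bits
print(entropy([4, 0]))   # 0.0 bits: a pure node
```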

Entropy measure: some examples
- As the data become purer, the entropy value becomes smaller; this is exactly what makes it useful to us as an impurity measure.

Information gain
- Given a set of examples $D$, we first compute its entropy: $\text{entropy}(D) = -\sum_{j} \Pr(c_j) \log_2 \Pr(c_j)$, where $\Pr(c_j)$ is the fraction of examples in $D$ belonging to class $c_j$.
- If we make attribute $A_i$ (with $v$ values) the root of the current tree, this partitions $D$ into $v$ subsets $D_1, D_2, \ldots, D_v$. The expected entropy if $A_i$ is used as the current root is $\text{entropy}_{A_i}(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|}\, \text{entropy}(D_j)$.

Information gain (continued)
- The information gained by selecting attribute $A_i$ to branch on (i.e. to partition the data) is $\text{gain}(D, A_i) = \text{entropy}(D) - \text{entropy}_{A_i}(D)$.
- We choose the attribute with the highest gain to branch/split the current tree.
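A minimal sketch of the gain computation, restating the entropy helper so the block stands alone (names are illustrative):

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def information_gain(class_counts, partitions):
    """gain(D, A) = entropy(D) - sum_j |D_j|/|D| * entropy(D_j).
    Each partition is the per-class count vector of one subset D_j."""
    total = sum(class_counts)
    expected = sum(sum(p) / total * entropy(p) for p in partitions)
    return entropy(class_counts) - expected
```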

Example: attribute Outlook
- Outlook = Sunny: info([2,3]) = 0.971 bits
- Outlook = Overcast: info([4,0]) = 0 bits
- Outlook = Rainy: info([3,2]) = 0.971 bits
- Expected information for the attribute: (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 = 0.693 bits

Computing information gain
- Information gain = information before splitting − information after splitting.
- gain(Outlook) = info([9,5]) − info([2,3],[4,0],[3,2]) = 0.940 − 0.693 = 0.247 bits
- Information gain for the attributes from the weather data:
  gain(Outlook) = 0.247 bits
  gain(Temperature) = 0.029 bits
  gain(Humidity) = 0.152 bits
  gain(Windy) = 0.048 bits
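Using the `information_gain` sketch above, these figures can be reproduced directly (class counts are given as [yes, no]):

```python
# Outlook: sunny [2,3], overcast [4,0], rainy [3,2]
print(information_gain([9, 5], [[2, 3], [4, 0], [3, 2]]))  # ~0.247
# Temperature: hot [2,2], mild [4,2], cool [3,1]
print(information_gain([9, 5], [[2, 2], [4, 2], [3, 1]]))  # ~0.029
# Humidity: high [3,4], normal [6,1]
print(information_gain([9, 5], [[3, 4], [6, 1]]))          # ~0.152
# Windy: false [6,2], true [3,3]
print(information_gain([9, 5], [[6, 2], [3, 3]]))          # ~0.048
```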

Continuing to split
Within the Outlook = Sunny branch:
- gain(Temperature) = 0.571 bits
- gain(Humidity) = 0.971 bits
- gain(Windy) = 0.020 bits

Final decision tree
- Note: not all leaves need to be pure; sometimes identical instances have different classes.
- Splitting stops when the data can't be split any further.

Another example
Own_house is the best choice for the root.

The final tree using this algorithm.

Another example - information theory
There are 7 coins, one of which is heavier or lighter than the others; the rest are of identical weight. All the coins look alike, and we want to identify the odd coin by weighing one group of coins against another on a balance scale. Consider the following options for the first weighing: (a) weighing coins 1, 2, 3 vs. 4, 5, 6; (b) weighing coins 1, 2 vs. 3, 4. What are the reductions in entropy associated with the two options? (A worked sketch follows.)
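One way to work this out (a sketch, assuming all 14 states — each of the 7 coins heavy or light — are equally likely, and measuring the reduction as the entropy of the weighing's outcome):

```latex
% (a) 1,2,3 vs 4,5,6: outcomes left-heavy / right-heavy / balance occur
% with probabilities 6/14, 6/14, 2/14 (balance means coin 7 is the odd one):
\[
H_a = -2\cdot\tfrac{6}{14}\log_2\tfrac{6}{14} - \tfrac{2}{14}\log_2\tfrac{2}{14}
    \approx 1.45 \text{ bits}
\]
% (b) 1,2 vs 3,4: outcome probabilities 4/14, 4/14, 6/14
% (balance means the odd coin is among 5, 6, 7):
\[
H_b = -2\cdot\tfrac{4}{14}\log_2\tfrac{4}{14} - \tfrac{6}{14}\log_2\tfrac{6}{14}
    \approx 1.56 \text{ bits}
\]
% So option (b) yields the larger expected entropy reduction.
```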

A lower bound using an information-theoretic argument
For the problem above, what is the minimum number of weighings needed to determine the odd coin? How many bits of information are needed to solve the problem? What does each weighing do, and by how much can it reduce the entropy? (One way to bound this is sketched below.)
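A sketch of one possible bound, under the same 14-state model as the previous slide (if only the identity of the coin is required, replace 14 by 7, giving a bound of 2):

```latex
\[
\text{bits needed} = \log_2 14 \approx 3.81, \qquad
\text{bits per weighing} \le \log_2 3 \approx 1.58
\]
\[
\Rightarrow\ \text{at least}\ \lceil 3.81 / 1.58 \rceil = 3\ \text{weighings}.
\]
```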

Wish list for a purity measure
Properties we require from a purity measure:
- When a node is pure, the measure should be zero.
- When impurity is maximal (i.e. all classes equally likely), the measure should be maximal.
- The measure should obey the multistage property (i.e. decisions can be made in several stages).
Entropy is the only function that satisfies all three properties!

Properties of the entropy
- The multistage property: $\text{entropy}(p, q, r) = \text{entropy}(p, q+r) + (q+r) \cdot \text{entropy}\!\left(\frac{q}{q+r}, \frac{r}{q+r}\right)$
- Simplification of the computation, e.g.: $\text{info}([2,3,4]) = -\frac{2}{9}\log_2\frac{2}{9} - \frac{3}{9}\log_2\frac{3}{9} - \frac{4}{9}\log_2\frac{4}{9} = \frac{-2\log_2 2 - 3\log_2 3 - 4\log_2 4 + 9\log_2 9}{9}$
- Note: instead of maximizing information gain we could just minimize information. (A numerical check of the multistage property follows.)
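A quick numerical check of the multistage property, using the same counts-based entropy as in the earlier sketch:

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

# entropy([2,3,4]) should equal a first two-way decision info([2,7])
# plus the weighted second-stage decision (7/9) * info([3,4]).
lhs = entropy([2, 3, 4])
rhs = entropy([2, 7]) + (7 / 9) * entropy([3, 4])
print(abs(lhs - rhs) < 1e-12)  # True
```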

Highly-branching attributes
- Problematic: attributes with a large number of values (extreme case: an ID code).
- Subsets are more likely to be pure if there is a large number of values.
- Hence information gain is biased towards choosing attributes with a large number of values.
- This may result in overfitting (selection of an attribute that is non-optimal for prediction).
- Another problem: fragmentation.

Weather data with ID code

ID code  Outlook   Temp.  Humidity  Windy  Play
A        Sunny     Hot    High      False  No
B        Sunny     Hot    High      True   No
C        Overcast  Hot    High      False  Yes
D        Rainy     Mild   High      False  Yes
E        Rainy     Cool   Normal    False  Yes
F        Rainy     Cool   Normal    True   No
G        Overcast  Cool   Normal    True   Yes
H        Sunny     Mild   High      False  No
I        Sunny     Cool   Normal    False  Yes
J        Rainy     Mild   Normal    False  Yes
K        Sunny     Mild   Normal    True   Yes
L        Overcast  Mild   High      True   Yes
M        Overcast  Hot    Normal    False  Yes
N        Rainy     Mild   High      True   No

Tree stump for the ID code attribute
- Entropy of the split: each leaf contains a single instance, so info([0,1]) = info([1,0]) = 0 bits, and the weighted sum is 0.
- Information gain is therefore maximal for the ID code (namely 0.940 bits).

Gain ratio
- The gain ratio is a modification of the information gain that reduces its bias.
- It takes the number and size of branches into account when choosing an attribute: it corrects the information gain by taking the intrinsic information of a split into account.
- Intrinsic information: the entropy of the distribution of instances into branches (i.e. how much information we need to tell which branch an instance belongs to).

Computing the gain ratio
- Example: the intrinsic information for the ID code is info([1,1,...,1]) = 14 × (−(1/14) × log2(1/14)) = 3.807 bits.
- The value of an attribute decreases as the intrinsic information gets larger.
- Definition: gain_ratio(attribute) = gain(attribute) / intrinsic_info(attribute).
- Example: gain_ratio(ID code) = 0.940 / 3.807 = 0.247.
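A sketch of the gain ratio on top of the earlier helpers (assumes `entropy` and `information_gain` as defined in the sketches above):

```python
def gain_ratio(class_counts, partitions):
    intrinsic = entropy([sum(p) for p in partitions])  # split information
    return information_gain(class_counts, partitions) / intrinsic

# ID code on the weather data: 14 singleton branches (9 yes, 5 no instances).
id_partitions = [[1, 0]] * 9 + [[0, 1]] * 5
print(entropy([1] * 14))                  # intrinsic info: log2(14) ~ 3.807 bits
print(gain_ratio([9, 5], id_partitions))  # ~0.940 / 3.807 ~ 0.247
```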

Gain ratios for weather data

             Outlook                    Temperature
Info:        0.693                      0.911
Gain:        0.940 - 0.693 = 0.247      0.940 - 0.911 = 0.029
Split info:  info([5,4,5]) = 1.577      info([4,6,4]) = 1.557
Gain ratio:  0.247/1.577 = 0.156        0.029/1.557 = 0.019

             Humidity                   Windy
Info:        0.788                      0.892
Gain:        0.940 - 0.788 = 0.152      0.940 - 0.892 = 0.048
Split info:  info([7,7]) = 1.000        info([8,6]) = 0.985
Gain ratio:  0.152/1.000 = 0.152        0.048/0.985 = 0.049

More on the gain ratio
- "Outlook" still comes out top; however, "ID code" has an even greater gain ratio.
- Standard fix: an ad hoc test to prevent splitting on that type of attribute.
- Problem with the gain ratio: it may overcompensate, choosing an attribute just because its intrinsic information is very low.
- Standard fix: only consider attributes with greater-than-average information gain.

Discussion
- Top-down induction of decision trees: ID3, an algorithm developed by Ross Quinlan.
- The gain ratio is just one modification of this basic algorithm.
- C4.5, its successor, deals with numeric attributes, missing values, and noisy data.
- CART takes a similar approach.
- There are many other attribute selection criteria (but they make little difference in the accuracy of the result).

Covering algorithms
- One option is to convert a decision tree into a rule set: straightforward, but the resulting rule set is overly complex, and more effective conversions are not trivial.
- Instead, we can generate a rule set directly: for each class in turn, find the rule set that covers all instances of it (excluding instances not in the class).
- This is called a covering approach: at each stage a rule is identified that "covers" some of the instances.

Example: generating a rule
- Start with the most general rule and refine it by adding tests:
  If true then class = a
  If x > 1.2 then class = a
  If x > 1.2 and y > 2.6 then class = a
- Possible rule set for class "b":
  If x ≤ 1.2 then class = b
  If x > 1.2 and y ≤ 2.6 then class = b
- We could add more rules to get a "perfect" rule set.

Rules vs. trees
- The corresponding decision tree produces exactly the same predictions.
- But: rule sets can be more perspicuous where decision trees suffer from replicated subtrees.
- Also: in multiclass situations, a covering algorithm concentrates on one class at a time, whereas a decision tree learner takes all classes into account.

Simple covering algorithm
- Generates a rule by adding tests that maximize the rule's accuracy.
- Similar to the situation in decision trees: the problem of selecting an attribute to split on.
- But: a decision tree inducer maximizes overall purity, whereas here each new test reduces the rule's coverage.

Selecting a test
- Goal: maximize accuracy.
  t: total number of instances covered by the rule
  p: positive examples of the class covered by the rule
  t − p: number of errors made by the rule
- Select the test that maximizes the ratio p/t.
- We are finished when p/t = 1 or the set of instances can't be split any further. (A sketch of this selection step follows.)
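A minimal sketch of this selection step (the names are ours; instances are (attribute_dict, class_label) pairs):

```python
def best_test(instances, target_class, used_attributes):
    """Return the (attribute, value) test maximizing p/t, ties broken by coverage."""
    candidates = []
    for attr in instances[0][0]:
        if attr in used_attributes:
            continue
        for value in {x[attr] for x, _ in instances}:
            covered = [(x, y) for x, y in instances if x[attr] == value]
            t = len(covered)
            p = sum(1 for _, y in covered if y == target_class)
            candidates.append((p / t, p, (attr, value)))  # accuracy, then coverage
    return max(candidates)[2]
```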

Example: contact lens data
Rule we seek: If ? then recommendation = hard
Possible tests:

Age = Young                            2/8
Age = Pre-presbyopic                   1/8
Age = Presbyopic                       1/8
Spectacle prescription = Myope         3/12
Spectacle prescription = Hypermetrope  1/12
Astigmatism = no                       0/12
Astigmatism = yes                      4/12
Tear production rate = Reduced         0/12
Tear production rate = Normal          4/12

Modified rule and resulting data
Rule with the best test added: If astigmatism = yes then recommendation = hard
Instances covered by the modified rule:

Age             Spectacle prescription  Astigmatism  Tear production rate  Recommended lenses
Young           Myope                   Yes          Reduced               None
Young           Myope                   Yes          Normal                Hard
Young           Hypermetrope            Yes          Reduced               None
Young           Hypermetrope            Yes          Normal                Hard
Pre-presbyopic  Myope                   Yes          Reduced               None
Pre-presbyopic  Myope                   Yes          Normal                Hard
Pre-presbyopic  Hypermetrope            Yes          Reduced               None
Pre-presbyopic  Hypermetrope            Yes          Normal                None
Presbyopic      Myope                   Yes          Reduced               None
Presbyopic      Myope                   Yes          Normal                Hard
Presbyopic      Hypermetrope            Yes          Reduced               None
Presbyopic      Hypermetrope            Yes          Normal                None

Further refinement
Current state: If astigmatism = yes and ? then recommendation = hard
Possible tests:

Age = Young                            2/4
Age = Pre-presbyopic                   1/4
Age = Presbyopic                       1/4
Spectacle prescription = Myope         3/6
Spectacle prescription = Hypermetrope  1/6
Tear production rate = Reduced         0/6
Tear production rate = Normal          4/6

Modified rule and resulting data
Rule with the best test added: If astigmatism = yes and tear production rate = normal then recommendation = hard
Instances covered by the modified rule:

Age             Spectacle prescription  Astigmatism  Tear production rate  Recommended lenses
Young           Myope                   Yes          Normal                Hard
Young           Hypermetrope            Yes          Normal                Hard
Pre-presbyopic  Myope                   Yes          Normal                Hard
Pre-presbyopic  Hypermetrope            Yes          Normal                None
Presbyopic      Myope                   Yes          Normal                Hard
Presbyopic      Hypermetrope            Yes          Normal                None

Further refinement
Current state: If astigmatism = yes and tear production rate = normal and ? then recommendation = hard
Possible tests:

Age = Young                            2/2
Age = Pre-presbyopic                   1/2
Age = Presbyopic                       1/2
Spectacle prescription = Myope         3/3
Spectacle prescription = Hypermetrope  1/3

There is a tie between the first and the fourth test (both have p/t = 1); we choose the one with greater coverage, Spectacle prescription = Myope.

The result
Final rule: If astigmatism = yes and tear production rate = normal and spectacle prescription = myope then recommendation = hard
Second rule for recommending "hard lenses" (built from the instances not covered by the first rule): If age = young and astigmatism = yes and tear production rate = normal then recommendation = hard
These two rules cover all the "hard lenses" cases; the process is then repeated with the other two classes. (A sketch of the full covering loop follows.)
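Finally, a hedged sketch of the overall covering loop in the spirit of this procedure: grow one rule until it is perfect (p/t = 1), remove the instances it covers, and repeat until no instances of the class remain. It reuses `best_test` from the sketch above; the names are illustrative, not the original algorithm's code.

```python
def matches(x, tests):
    return all(x[attr] == value for attr, value in tests)

def rules_for_class(instances, target_class):
    """Grow perfect rules one at a time, removing covered instances."""
    rules, remaining = [], list(instances)
    while any(y == target_class for _, y in remaining):
        tests, covered = [], remaining
        # refine while the rule still covers other classes and tests remain
        while any(y != target_class for _, y in covered) and \
                len(tests) < len(covered[0][0]):
            test = best_test(covered, target_class, {a for a, _ in tests})
            tests.append(test)
            covered = [(x, y) for x, y in covered if x[test[0]] == test[1]]
        rules.append(tests)
        remaining = [(x, y) for x, y in remaining if not matches(x, tests)]
    return rules
```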