1
Data Mining: Methods and Algorithms
Romi Satria Wahono

2
Romi Satria Wahono
SD Sompok Semarang (1987)
SMPN 8 Semarang (1990)
SMA Taruna Nusantara, Magelang (1993)
S1, S2 and S3 (on leave), Department of Computer Sciences, Saitama University, Japan ( )
Research interests: Software Engineering and Intelligent Systems
Founder of IlmuKomputer.Com
Researcher at LIPI ( )
Founder and CEO of PT Brainmatics Cipta Informatika

3
Course Outline
1. Introduction to Data Mining
2. The Data Mining Process
3. Evaluation and Validation in Data Mining
4. Data Mining Methods and Algorithms
5. Data Mining Research

4
Methods and Algorithms

5
1. Inferring rudimentary rules
2. Statistical modeling
3. Constructing decision trees
4. Constructing rules
5. Association rule learning
6. Linear models
7. Instance-based learning
8. Clustering

6
Simplicity first
Simple algorithms often work very well!
There are many kinds of simple structure, e.g.:
  One attribute does all the work
  All attributes contribute equally and independently
  A weighted linear combination might do
  Instance-based: use a few prototypes
  Use simple logical rules
Success of a method depends on the domain

7
Inferring rudimentary rules
1R: learns a 1-level decision tree, i.e., rules that all test one particular attribute
Basic version:
  One branch for each value
  Each branch assigns the most frequent class
  Error rate: proportion of instances that don’t belong to the majority class of their corresponding branch
  Choose the attribute with the lowest error rate
(assumes nominal attributes)

8
Pseudo-code for 1R
For each attribute,
  For each value of the attribute, make a rule as follows:
    count how often each class appears
    find the most frequent class
    make the rule assign that class to this attribute-value
  Calculate the error rate of the rules
Choose the rules with the smallest error rate
Note: “missing” is treated as a separate attribute value
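The pseudo-code maps almost line for line onto a short program. Below is a minimal Java sketch (class and variable names are mine, not from the slides), run on the nominal weather data shown on the next slide:

import java.util.*;

public class OneR {
    public static void main(String[] args) {
        // The 14-instance weather data: outlook, temp, humidity, windy, play.
        String[][] data = {
            {"sunny","hot","high","false","no"},      {"sunny","hot","high","true","no"},
            {"overcast","hot","high","false","yes"},  {"rainy","mild","high","false","yes"},
            {"rainy","cool","normal","false","yes"},  {"rainy","cool","normal","true","no"},
            {"overcast","cool","normal","true","yes"},{"sunny","mild","high","false","no"},
            {"sunny","cool","normal","false","yes"},  {"rainy","mild","normal","false","yes"},
            {"sunny","mild","normal","true","yes"},   {"overcast","mild","high","true","yes"},
            {"overcast","hot","normal","false","yes"},{"rainy","mild","high","true","no"}
        };
        String[] names = {"outlook","temp","humidity","windy"};
        int classCol = 4, bestAttr = -1, bestErrors = Integer.MAX_VALUE;
        for (int a = 0; a < names.length; a++) {
            // For each value of the attribute, count how often each class appears.
            Map<String, Map<String, Integer>> counts = new HashMap<>();
            for (String[] row : data)
                counts.computeIfAbsent(row[a], v -> new HashMap<>())
                      .merge(row[classCol], 1, Integer::sum);
            // Each branch predicts its most frequent class;
            // errors = instances outside the majority class of their branch.
            int errors = 0;
            for (Map<String, Integer> byClass : counts.values()) {
                int total = 0, max = 0;
                for (int c : byClass.values()) { total += c; max = Math.max(max, c); }
                errors += total - max;
            }
            System.out.println(names[a] + ": " + errors + "/" + data.length + " errors");
            if (errors < bestErrors) { bestErrors = errors; bestAttr = a; }
        }
        System.out.println("1R chooses: " + names[bestAttr]);
    }
}

On this data the sketch reports 4/14 errors for outlook and humidity and 5/14 for temp and windy, so it picks outlook, matching the table on the next slide.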

9
Evaluating the weather attributes

Attribute  Rules            Errors  Total errors
Outlook    Sunny -> No       2/5    4/14
           Overcast -> Yes   0/4
           Rainy -> Yes      2/5
Temp       Hot -> No*        2/4    5/14
           Mild -> Yes       2/6
           Cool -> Yes       1/4
Humidity   High -> No        3/7    4/14
           Normal -> Yes     1/7
Windy      False -> Yes      2/8    5/14
           True -> No*       3/6
(* denotes an arbitrary tie-break between equally frequent classes)

Weather data:
Outlook   Temp  Humidity  Windy  Play
Sunny     Hot   High      False  No
Sunny     Hot   High      True   No
Overcast  Hot   High      False  Yes
Rainy     Mild  High      False  Yes
Rainy     Cool  Normal    False  Yes
Rainy     Cool  Normal    True   No
Overcast  Cool  Normal    True   Yes
Sunny     Mild  High      False  No
Sunny     Cool  Normal    False  Yes
Rainy     Mild  Normal    False  Yes
Sunny     Mild  Normal    True   Yes
Overcast  Mild  High      True   Yes
Overcast  Hot   Normal    False  Yes
Rainy     Mild  High      True   No

10
Dealing with numeric attributes
Discretize numeric attributes:
  Divide each attribute’s range into intervals
  Sort instances according to the attribute’s values
  Place breakpoints where the class changes (majority class)
  This minimizes the total error
Example: temperature from the weather data, values sorted in increasing order:
Yes | No | Yes Yes Yes | No No Yes | Yes Yes | No | Yes Yes | No

Outlook   Temperature  Humidity  Windy  Play
Sunny     85           85        False  No
Sunny     80           90        True   No
Overcast  83           86        False  Yes
Rainy     75           80        False  Yes
…

11
The problem of overfitting
This procedure is very sensitive to noise:
  One instance with an incorrect class label will probably produce a separate interval
  Also: a time stamp attribute will have zero errors
Simple solution: enforce a minimum number of instances in the majority class per interval
Example (with min = 3):
Yes | No | Yes Yes Yes | No No Yes | Yes Yes | No | Yes Yes | No
becomes
Yes No Yes Yes Yes | No No Yes Yes Yes | No Yes Yes No
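One way to realize this fix is to grow each interval until some class occurs min times, then extend it while the following instances still have that majority class. The Java sketch below (hypothetical names, my reading of the procedure rather than the book’s exact code) reproduces the partition above:

import java.util.*;

public class Discretize {
    // Partition a sorted sequence of class labels, requiring at least `min`
    // instances of the majority class in each interval.
    static List<List<String>> partition(String[] classes, int min) {
        List<List<String>> intervals = new ArrayList<>();
        int i = 0;
        while (i < classes.length) {
            List<String> interval = new ArrayList<>();
            Map<String, Integer> counts = new HashMap<>();
            String majority = null;
            // Grow the interval until one class has occurred `min` times.
            while (i < classes.length && (majority == null || counts.get(majority) < min)) {
                interval.add(classes[i]);
                counts.merge(classes[i], 1, Integer::sum);
                majority = Collections.max(counts.entrySet(),
                        Map.Entry.comparingByValue()).getKey();
                i++;
            }
            // Then keep extending while the next instance has the majority class.
            while (i < classes.length && classes[i].equals(majority)) {
                interval.add(classes[i++]);
            }
            intervals.add(interval);
        }
        return intervals;
    }

    public static void main(String[] args) {
        String[] classes = {"Yes","No","Yes","Yes","Yes","No","No",
                            "Yes","Yes","Yes","No","Yes","Yes","No"};
        // Prints [[Yes, No, Yes, Yes, Yes], [No, No, Yes, Yes, Yes], [No, Yes, Yes, No]]
        System.out.println(partition(classes, 3));
    }
}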

12
With overfitting avoidance
Resulting rule set:

Attribute    Rules                     Errors  Total errors
Outlook      Sunny -> No                2/5    4/14
             Overcast -> Yes            0/4
             Rainy -> Yes               2/5
Temperature  <= 77.5 -> Yes             3/10   5/14
             > 77.5 -> No*              2/4
Humidity     <= 82.5 -> Yes             1/7    3/14
             > 82.5 and <= 95.5 -> No   2/6
             > 95.5 -> Yes              0/1
Windy        False -> Yes               2/8    5/14
             True -> No*                3/6

13
Discussion of 1R
1R was described in a paper by Holte (1993):
  Contains an experimental evaluation on 16 datasets (using cross-validation so that results were representative of performance on future data)
  Minimum number of instances was set to 6 after some experimentation
  1R’s simple rules performed not much worse than much more complex decision trees
Simplicity first pays off!
Robert C. Holte, “Very Simple Classification Rules Perform Well on Most Commonly Used Datasets”, Computer Science Department, University of Ottawa

14
Discussion of 1R: Hyperpipes
Another simple technique: build one rule for each class
  Each rule is a conjunction of tests, one for each attribute
  For numeric attributes: the test checks whether the instance’s value is inside an interval
    Interval given by the minimum and maximum observed in the training data
  For nominal attributes: the test checks whether the value is one of a subset of attribute values
    Subset given by all possible values observed in the training data
  Class with most matching tests is predicted
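A minimal Java sketch of this idea, restricted to nominal attributes (names are mine; a numeric attribute would keep an observed [min, max] interval and test containment instead of set membership):

import java.util.*;

public class Hyperpipes {
    // One "pipe" per class: for each attribute, the set of values observed
    // in that class's training instances.
    static Map<String, List<Set<String>>> pipes = new HashMap<>();

    static void train(String[][] data, int numAttrs, int classCol) {
        for (String[] row : data) {
            List<Set<String>> pipe = pipes.computeIfAbsent(row[classCol], c -> {
                List<Set<String>> p = new ArrayList<>();
                for (int a = 0; a < numAttrs; a++) p.add(new HashSet<>());
                return p;
            });
            for (int a = 0; a < numAttrs; a++) pipe.get(a).add(row[a]);
        }
    }

    static String classify(String[] instance) {
        String best = null;
        int bestMatches = -1;
        for (Map.Entry<String, List<Set<String>>> e : pipes.entrySet()) {
            int matches = 0;  // how many attribute tests the instance passes
            for (int a = 0; a < instance.length; a++)
                if (e.getValue().get(a).contains(instance[a])) matches++;
            if (matches > bestMatches) { bestMatches = matches; best = e.getKey(); }
        }
        return best;  // class with most matching tests
    }

    public static void main(String[] args) {
        String[][] data = {
            {"sunny","hot","high","no"},    {"overcast","hot","high","yes"},
            {"rainy","mild","high","yes"},  {"sunny","cool","normal","yes"}
        };
        train(data, 3, 3);
        System.out.println(classify(new String[]{"rainy","cool","high"}));  // yes
    }
}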

15
Statistical modeling
“Opposite” of 1R: use all the attributes
Two assumptions: attributes are
  equally important
  statistically independent (given the class value), i.e., knowing the value of one attribute says nothing about the value of another (if the class is known)
The independence assumption is never correct!
But … this scheme works well in practice

16
Probabilities for weather data

Outlook   Yes      No         Temperature  Yes      No
Sunny     2 (2/9)  3 (3/5)    Hot          2 (2/9)  2 (2/5)
Overcast  4 (4/9)  0 (0/5)    Mild         4 (4/9)  2 (2/5)
Rainy     3 (3/9)  2 (2/5)    Cool         3 (3/9)  1 (1/5)

Humidity  Yes      No         Windy        Yes      No
High      3 (3/9)  4 (4/5)    False        6 (6/9)  2 (2/5)
Normal    6 (6/9)  1 (1/5)    True         3 (3/9)  3 (3/5)

Play      Yes       No
          9 (9/14)  5 (5/14)

(Counts, with relative frequencies in parentheses; weather data as on slide 9.)

17
Probabilities for weather data
A new day:
Outlook  Temp.  Humidity  Windy  Play
Sunny    Cool   High      True   ?

Likelihood of the two classes:
For “yes” = 2/9 × 3/9 × 3/9 × 3/9 × 9/14 = 0.0053
For “no” = 3/5 × 1/5 × 4/5 × 3/5 × 5/14 = 0.0206
Conversion into a probability by normalization:
P(“yes”) = 0.0053 / (0.0053 + 0.0206) = 0.205
P(“no”) = 0.0206 / (0.0053 + 0.0206) = 0.795
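The same arithmetic as a small Java check (class name is mine; the fractions are read off the count table on the previous slide):

public class NaiveBayesWeather {
    public static void main(String[] args) {
        // Likelihoods for Outlook=Sunny, Temp=Cool, Humidity=High, Windy=True.
        double likeYes = 2.0/9 * 3.0/9 * 3.0/9 * 3.0/9 * 9.0/14;  // 0.0053
        double likeNo  = 3.0/5 * 1.0/5 * 4.0/5 * 3.0/5 * 5.0/14;  // 0.0206
        // Normalize so the two class probabilities sum to 1.
        double pYes = likeYes / (likeYes + likeNo);  // 0.205
        double pNo  = likeNo  / (likeYes + likeNo);  // 0.795
        System.out.printf("P(yes)=%.3f  P(no)=%.3f%n", pYes, pNo);
    }
}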

18
Bayes’s rule
Probability of event H given evidence E:
  P(H | E) = P(E | H) P(H) / P(E)
A priori probability of H: P(H)
  Probability of event before evidence is seen
A posteriori probability of H: P(H | E)
  Probability of event after evidence is seen
Thomas Bayes
Born: 1702 in London, England
Died: 1761 in Tunbridge Wells, Kent, England

19
Naïve Bayes for classification
Classification learning: what’s the probability of the class given an instance?
  Evidence E = instance
  Event H = class value for instance
Naïve assumption: evidence splits into parts (i.e. attributes) that are independent given the class, so
  P(H | E) = P(E1 | H) P(E2 | H) … P(En | H) P(H) / P(E)

20
Weather data example
Outlook  Temp.  Humidity  Windy  Play
Sunny    Cool   High      True   ?
Evidence E: the new day’s attribute values
Probability of class “yes”:
P(yes | E) = P(Sunny | yes) × P(Cool | yes) × P(High | yes) × P(True | yes) × P(yes) / P(E)
           = (2/9 × 3/9 × 3/9 × 3/9 × 9/14) / P(E)

21
Cognitive Assignment II
Understand and master one data mining method from the literature
Summarize it in detail in slide form: the definition, the development of the method, the steps of the algorithm, and its application to a case study; write or find Java code that runs the example case
Present it in class at the next session, in plain language

22
Choice of Algorithms or Methods
1. Neural Network
2. Support Vector Machine
3. Naive Bayes
4. K-Nearest Neighbor
5. CART
6. Linear Discriminant Analysis
7. Agglomerative Clustering
8. Support Vector Regression
9. Expectation Maximization
10. C4.5
11. K-Means
12. Self-Organizing Map
13. FP-Growth
14. Apriori
15. Logistic Regression
16. Random Forest
17. K-Medoids
18. Radial Basis Function
19. Fuzzy C-Means
20. K*
21. Support Vector Clustering
22. OneR

23
References
1. Ian H. Witten, Frank Eibe, Mark A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, 3rd Edition, Elsevier, 2011
2. Daniel T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining, John Wiley & Sons, 2005
3. Florin Gorunescu, Data Mining: Concepts, Models and Techniques, Springer, 2011
4. Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, Second Edition, Elsevier, 2006
5. Oded Maimon and Lior Rokach, Data Mining and Knowledge Discovery Handbook, Second Edition, Springer, 2010
6. Warren Liao and Evangelos Triantaphyllou (eds.), Recent Advances in Data Mining of Enterprise Data: Algorithms and Applications, World Scientific, 2007
