Lecture 2: Data Mining

Roadmap
- What is data mining?
- Data mining tasks: classification/decision trees, clustering, association mining
- Data mining algorithms: decision tree construction, frequent 2-itemsets, frequent itemsets (Apriori), clustering/collaborative filtering

What is Data Mining? Discovery of useful, possibly unexpected, patterns in data. Subsidiary issues:
- Data cleansing: detection of bogus data, e.g., age = 150.
- Visualization: something better than megabyte files of output.
- Warehousing of data (for retrieval).

Typical Kinds of Patterns
- Decision trees: succinct ways to classify by testing properties.
- Clusters: another succinct classification, by similarity of properties.
- Statistical models (Bayes, hidden Markov, etc.) and frequent itemsets: expose important associations within the data.

Example: Clusters (figure: a scatter plot of points that visibly group into a few clusters)

Applications (Among Many)
- Intelligence gathering: Total Information Awareness.
- Web analysis: PageRank.
- Marketing: run a sale on diapers; raise the price of beer.
- Detective work?

Cultures
- Databases: concentrate on large-scale (non-main-memory) data.
- AI (machine learning): concentrate on complex methods, small data.
- Statistics: concentrate on inferring models.

Models vs. Analytic Processing
- To a database person, data mining is a powerful form of analytic processing: queries that examine large amounts of data. The result is the data that answers the query.
- To a statistician, data mining is the inference of models. The result is the parameters of the model.

Meaningfulness of Answers A big risk in data mining is that you will “discover” patterns that are meaningless. Statisticians call it Bonferroni’s principle: (roughly) if you look in more places for interesting patterns than your amount of data will support, you are bound to find crap.

Examples A big objection to TIA was that it was looking for so many vague connections that it was sure to find things that were bogus and thus violate innocents’ privacy. The Rhine Paradox: a great example of how not to conduct scientific research.

Rhine Paradox --- (1) Joseph Rhine was a parapsychologist in the 1950s who hypothesized that some people had extra-sensory perception (ESP). He devised an experiment where subjects were asked to guess 10 hidden cards, each red or blue. He discovered that almost 1 in 1000 had ESP: they were able to get all 10 right! (No surprise: a random guesser gets all 10 right with probability (1/2)^10 = 1/1024, about 1 in 1000.)

Rhine Paradox --- (2) He told these people they had ESP and called them in for another test of the same type. Alas, he discovered that almost all of them had lost their ESP. What did he conclude? Answer on next slide.

Rhine Paradox --- (3) He concluded that you shouldn’t tell people they have ESP; it causes them to lose it.

Data Mining Tasks Data mining is the process of semi-automatically analyzing large databases to find useful patterns.
- Prediction based on past history:
  - Predict if a credit card applicant poses a good credit risk, based on some attributes (income, job type, age, ...) and past history.
  - Predict if a pattern of phone calling card usage is likely to be fraudulent.
- Some examples of prediction mechanisms:
  - Classification: given a new item whose class is unknown, predict to which class it belongs.
  - Regression formulae: given a set of mappings for an unknown function, predict the function result for a new parameter value.

Data Mining (Cont.) Descriptive patterns:
- Associations: find books that are often bought by “similar” customers. If a new such customer buys one such book, suggest the others too. Associations may be used as a first step in detecting causation, e.g., an association between exposure to chemical X and cancer.
- Clusters: e.g., typhoid cases were clustered in an area surrounding a contaminated well. Detection of clusters remains important in detecting epidemics.

Decision Trees Example: a survey was conducted to see which customers were interested in a new model car; we want to select customers for an advertising campaign. The survey responses form the training set.

One Possibility (figure: a decision tree that first tests age < 30, with further tests on city = sf and car = van, and leaves labeled likely/unlikely)

Another Possibility (figure: a different decision tree over the same data, testing car = taurus, city = sf, and age < 45, with leaves labeled likely/unlikely)

Issues
- A decision tree cannot be “too deep”: there would not be statistically significant amounts of data for the lower decisions.
- Need to select the tree that most reliably predicts outcomes.

Clustering (figure: points grouped into clusters in a space whose axes are income, education, and age)

Another Example: Text Each document is a vector, e.g., <100110...>, meaning it contains words 1, 4, 5, .... Clusters contain “similar” documents. Useful for understanding and searching documents. (Figure: document clusters labeled sports, international news, and business.)

Issues
- Is the desired number of clusters given in advance?
- How do we find the “best” clusters?
- Are the clusters semantically meaningful?

Association Rule Mining Sales records (market-basket data): each record has a transaction id, a customer id, and the products bought. Trend: products p5 and p8 are often bought together. Trend: customer 12 likes product p9.

Association Rule Rule: {p1, p3, p8}. Support: the number of baskets where all these products appear. High-support set: a set whose support ≥ threshold s. Problem: find all high-support sets.

Finding High-Support Pairs Baskets(basket, item)

SELECT I.item, J.item, COUNT(I.basket)
FROM Baskets I, Baskets J
WHERE I.basket = J.basket AND I.item < J.item
GROUP BY I.item, J.item
HAVING COUNT(I.basket) >= s;

Example (figure: baskets and the pair counts computed by the query; check if count ≥ s)

Issues Performance for size-2 rules is big; performance for size-k rules is even bigger!

Roadmap
- What is data mining?
- Data mining tasks: classification/decision trees, clustering, association mining
- Data mining algorithms: decision tree construction, frequent 2-itemsets, frequent itemsets (Apriori), clustering/collaborative filtering

Classification Rules Classification rules help assign new objects to classes. E.g., given a new automobile insurance applicant, should he or she be classified as low risk, medium risk, or high risk? Classification rules for the above example could use a variety of data, such as educational level, salary, age, etc.
∀ person P, P.degree = masters and P.income > 75,000 ⇒ P.credit = excellent
∀ person P, P.degree = bachelors and (P.income ≥ 25,000 and P.income ≤ 75,000) ⇒ P.credit = good
Rules are not necessarily exact: there may be some misclassifications. Classification rules can be shown compactly as a decision tree.
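
To make the rule format concrete, here is a minimal hypothetical Python sketch (not from the lecture) that encodes the two rules above as a function:

def credit_class(degree, income):
    # Encodes the two rules above; cases the rules do not cover fall through.
    if degree == "masters" and income > 75000:
        return "excellent"
    if degree == "bachelors" and 25000 <= income <= 75000:
        return "good"
    return "unclassified"

print(credit_class("masters", 90000))    # -> excellent
print(credit_class("bachelors", 50000))  # -> good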

Decision Tree (figure: the classification rules above drawn compactly as a decision tree)

Decision Tree Construction (figure: an example tree)
Root: Employed?
- Yes → leaf: Class = Not Default
- No → node: Balance?
  - < 50K → leaf: Class = Default
  - >= 50K → node: Age?
    - < 45 → leaf: Class = Not Default
    - >= 45 → leaf: Class = Default

Construction of Decision Trees
- Training set: a data sample in which the classification is already known.
- Greedy, top-down generation of decision trees.
- Each internal node of the tree partitions the data into groups, based on a partitioning attribute and a partitioning condition for the node.
- Leaf node: all (or most) of the items at the node belong to the same class, or all attributes have been considered and no further partitioning is possible.

Finding the Best Split Point for Numerical Attributes (figure: class distribution over a numerical attribute, from an IBM Quest synthetic dataset for function 0, with the best split point marked) In-core algorithms, such as C4.5, simply sort the numerical attribute values and scan them for the best split point.

Best Splits Pick the best attributes and conditions on which to partition. The purity of a set S of training instances can be measured quantitatively in several ways. Notation: number of classes = k, number of instances = |S|, fraction of instances in class i = p_i. The Gini measure of purity is defined as
Gini(S) = 1 - Σ_{i=1}^{k} p_i^2
When all instances are in a single class, the Gini value is 0. It reaches its maximum (of 1 - 1/k) if each class has the same number of instances.

Best Splits (Cont.) Another measure of purity is the entropy measure, defined as
entropy(S) = - Σ_{i=1}^{k} p_i log2 p_i
When a set S is split into multiple sets S_i, i = 1, 2, ..., r, we can measure the purity of the resultant set of sets as
purity(S1, S2, ..., Sr) = Σ_{i=1}^{r} (|S_i| / |S|) purity(S_i)
The information gain due to a particular split of S into S_i, i = 1, 2, ..., r, is
Information-gain(S, {S1, S2, ..., Sr}) = purity(S) - purity(S1, S2, ..., Sr)
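
For concreteness, a small Python sketch of these measures (illustrative only, assuming class labels are given as plain lists; note both Gini and entropy are impurity scores, where 0 means pure):

import math
from collections import Counter

def gini(labels):
    # Gini(S) = 1 - sum_i p_i^2   (0 = pure; max 1 - 1/k for equal classes)
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    # entropy(S) = - sum_i p_i log2 p_i
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_purity(subsets, measure=gini):
    # purity(S1,...,Sr) = sum_i (|Si|/|S|) purity(Si)
    n = sum(len(s) for s in subsets)
    return sum((len(s) / n) * measure(s) for s in subsets)

def information_gain(labels, subsets, measure=gini):
    return measure(labels) - split_purity(subsets, measure)

S = ["yes", "yes", "no", "no"]
print(information_gain(S, [["yes", "yes"], ["no", "no"]]))  # 0.5: a perfect split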

Finding Best Splits
- Categorical attributes (with no meaningful order):
  - Multi-way split: one child for each value.
  - Binary split: try all possible breakups of the values into two sets, and pick the best.
- Continuous-valued attributes (can be sorted in a meaningful order):
  - Binary split: sort the values and try each as a split point. E.g., if the values are 1, 10, 15, 25, split at ≤ 1, ≤ 10, ≤ 15. Pick the value that gives the best split, as in the sketch below.
  - Multi-way split: a series of binary splits on the same attribute has roughly equivalent effect.
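
A sketch of the binary-split search for a continuous attribute (hypothetical Python; it sorts the values and tries each distinct value as a cut point, scoring candidate splits with the weighted Gini of the two halves):

from collections import Counter

def gini(labels):
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_numeric_split(values, labels):
    # Sort once, then try a cut between each pair of distinct adjacent values,
    # keeping the cut with the lowest weighted Gini of the two halves.
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_cut, best_score = None, float("inf")
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # cannot cut between equal values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        score = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
        if score < best_score:
            best_cut, best_score = pairs[i - 1][0], score  # condition: value <= cut
    return best_cut, best_score

# The slide's example values: the cut points tried are <=1, <=10, <=15.
print(best_numeric_split([1, 10, 15, 25], ["no", "no", "yes", "yes"]))  # (10, 0.0)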

Decision-Tree Construction Algorithm

procedure GrowTree(S)
    Partition(S);

procedure Partition(S)
    if (purity(S) > p or |S| < s) then return;
    for each attribute A
        evaluate splits on attribute A;
    use the best split found (across all attributes) to partition S into S1, S2, ..., Sr;
    for i = 1, 2, ..., r
        Partition(Si);
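
A runnable rendering of this procedure (a hypothetical Python sketch, categorical attributes only; the thresholds max_impurity and min_size play the roles of p and s in the pseudocode):

from collections import Counter

def gini(labels):
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def grow_tree(rows, labels, attrs, max_impurity=0.0, min_size=2):
    # Stop (leaf): node pure enough, too small, or nothing left to split on.
    if gini(labels) <= max_impurity or len(rows) < min_size or not attrs:
        return Counter(labels).most_common(1)[0][0]   # majority-class leaf
    def split(a):  # multi-way partition of the rows on attribute a
        groups = {}
        for row, lab in zip(rows, labels):
            groups.setdefault(row[a], []).append((row, lab))
        return groups
    def score(groups):  # weighted impurity of a candidate split
        return sum((len(g) / len(rows)) * gini([lab for _, lab in g])
                   for g in groups.values())
    best = min(attrs, key=lambda a: score(split(a)))  # evaluate all attributes
    children = {}
    for value, group in split(best).items():
        rs = [row for row, _ in group]
        ls = [lab for _, lab in group]
        children[value] = grow_tree(rs, ls, [a for a in attrs if a != best],
                                    max_impurity, min_size)
    return {"attr": best, "children": children}

rows = [{"employed": "yes", "age": "<45"}, {"employed": "no", "age": "<45"},
        {"employed": "yes", "age": ">=45"}, {"employed": "no", "age": ">=45"}]
labels = ["not default", "default", "not default", "default"]
print(grow_tree(rows, labels, ["employed", "age"]))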

Finding Association Rules
- We are generally only interested in association rules with reasonably high support (e.g., support of 2% or greater).
- Naïve algorithm: consider all possible sets of relevant items; for each set, find its support (i.e., count how many transactions purchase all items in the set).
- Large itemsets: sets with sufficiently high support.

Example: Association Rules How do we perform rule mining efficiently? Observation: if a set X has support t, then each subset of X must have support at least t. For 2-sets: if we need support s for {i, j}, then each of i and j must appear in at least s baskets.

Algorithm for 2-Sets (1) Find “OK” products: those appearing in s or more baskets. (2) Find high-support pairs using only OK products.

Algorithm for 2-Sets

INSERT INTO okBaskets(basket, item)
SELECT basket, item
FROM Baskets
WHERE item IN (SELECT item
               FROM Baskets
               GROUP BY item
               HAVING COUNT(basket) >= s);

Then perform the mining on okBaskets:

SELECT I.item, J.item, COUNT(I.basket)
FROM okBaskets I, okBaskets J
WHERE I.basket = J.basket AND I.item < J.item
GROUP BY I.item, J.item
HAVING COUNT(I.basket) >= s;

Counting Efficiently One way (figure, threshold = 3): sort the pair occurrences, then count and remove the pairs below the threshold.

Counting Efficiently Another way (figure, threshold = 3): scan and count, keeping a counter array in memory, then remove the pairs below the threshold.

Yet Another Way (figure, threshold = 3): (1) scan, hashing each pair into an in-memory hash table of counters; (2) scan and remove pairs whose bucket count is below the threshold (this can leave false positives); (3) scan and count the remaining candidate pairs exactly with in-memory counters; (4) remove the pairs below the threshold.
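
A sketch of this hashing scheme in Python (hypothetical; bucket counts can only overestimate a pair's true count, so the exact counts of the second scan eliminate the false positives):

from collections import Counter
from itertools import combinations

def hash_filtered_pairs(baskets, threshold, n_buckets=1009):
    # Scan 1: hash each pair to a bucket; count buckets, not pairs.
    buckets = [0] * n_buckets
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            buckets[hash(pair) % n_buckets] += 1
    # Scan 2: count exactly only the pairs whose bucket met the threshold.
    # A bucket may be heavy because several light pairs share it (false
    # positives); the exact counts of this scan remove them.
    counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            if buckets[hash(pair) % n_buckets] >= threshold:
                counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= threshold}

baskets = [{"p1", "p3", "p8"}, {"p1", "p8"}, {"p3", "p8"}, {"p1", "p8"}]
print(hash_filtered_pairs(baskets, threshold=3))  # {('p1', 'p8'): 3}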

Discussion
- Sorting scheme: requires a sort!
- Hashing scheme: 2 (or 3) scans of the data.
- Hashing works well if there are few high-support pairs and many low-support ones.
- Such queries are called iceberg queries (figure: item-pairs ranked by frequency, with the frequency threshold cutting off only the small “tip” of the iceberg).

Frequent Itemsets Mining

TID    Transaction
100    {A, B, E}
200    {B, D}
300
400    {A, C}
500    {B, C}
600
700    {A, B}
800    {A, B, C, E}
900    {A, B, C}
1000   {A, C, E}

Desired frequency (support level): 50%. Frequent itemsets: {A}, {B}, {C}, {A,B}, {A,C}. Down-closure (apriori) property: if an itemset is frequent, all of its subsets must also be frequent.

Lattice for Enumerating Frequent Itemsets (figure: the itemset lattice; once an itemset is found to be infrequent, all of its supersets are pruned)

Apriori

L0 = ∅
C1 = { 1-item subsets of all the transactions }
for ( k = 1; Ck ≠ ∅; k++ ) {
    /* support counting */
    for all transactions t ∈ D
        for all k-subsets s of t
            if s ∈ Ck then s.count++
    /* candidate generation */
    Lk = { c ∈ Ck | c.count >= min_sup }
    Ck+1 = apriori_gen( Lk )
}
Answer = ∪k Lk
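
A self-contained Python sketch of the same loop (illustrative; apriori_gen is realized here by joining frequent k-itemsets and pruning any candidate with an infrequent k-subset, i.e., the down-closure property):

from itertools import combinations

def apriori(transactions, min_sup):
    # Returns {itemset: count} for every itemset with support >= min_sup.
    candidates = {frozenset([i]) for t in transactions for i in t}  # C1
    answer = {}
    k = 1
    while candidates:
        # Support counting: one pass over the transactions per level.
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        frequent = {c: n for c, n in counts.items() if n >= min_sup}  # Lk
        answer.update(frequent)
        # apriori_gen: join Lk with itself, then prune any (k+1)-candidate
        # that has an infrequent k-subset (down-closure).
        level = list(frequent)
        joined = {a | b for a in level for b in level if len(a | b) == k + 1}
        candidates = {c for c in joined
                      if all(frozenset(s) in frequent for s in combinations(c, k))}
        k += 1
    return answer

ts = [frozenset(t) for t in ["ABE", "BD", "AC", "BC", "AB", "ABCE", "ABC", "AC"]]
print(apriori(ts, min_sup=4))

Run on the eight fully listed transactions from the earlier example with min_sup = 4 (50% of 8), this reports exactly {A}, {B}, {C}, {A,B}, {A,C}, matching the answer given there.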

Clustering
- Clustering: intuitively, finding clusters of points in the given data such that similar points lie in the same cluster.
- Can be formalized using distance metrics in several ways:
  - Group points into k sets (for a given k) such that the average distance of points from the centroid of their assigned group is minimized. (Centroid: the point defined by taking the average of the coordinates in each dimension.)
  - Another metric: minimize the average distance between every pair of points in a cluster.
- Has been studied extensively in statistics, but on small data sets. Data mining systems aim at clustering techniques that can handle very large data sets, e.g., the BIRCH clustering algorithm!

K-Means Clustering (figure: example of k-means cluster assignments and centroids)

K-means Clustering
- Partitional clustering approach.
- Each cluster is associated with a centroid (center point).
- Each point is assigned to the cluster with the closest centroid.
- The number of clusters, K, must be specified.
- The basic algorithm is very simple; see the sketch below.
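
The basic algorithm as a hypothetical Python sketch (assumes 2-D points as tuples; production implementations add smarter initialization and handle empty clusters more carefully):

import math
import random

def kmeans(points, k, iters=100, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)  # naive initialization: k random points
    for _ in range(iters):
        # Assignment step: each point goes to its closest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        new = [tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # no centroid moved: converged
            break
        centroids = new
    return centroids, clusters

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, clusters = kmeans(points, k=2)
print(sorted(centroids))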

Hierarchical Clustering Example from biological classification (the word “classification” here does not mean a prediction mechanism):

chordata
- mammalia: leopards, humans
- reptilia: snakes, crocodiles

Other examples: Internet directory systems (e.g., Yahoo!).
- Agglomerative clustering algorithms: build small clusters, then cluster the small clusters into bigger clusters, and so on.
- Divisive clustering algorithms: start with all items in a single cluster, then repeatedly refine (break) clusters into smaller ones.

Collaborative Filtering Goal: predict what movies/books/... a person may be interested in, on the basis of:
- past preferences of the person;
- other people with similar past preferences;
- the preferences of such people for a new movie/book/...
One approach is based on repeated clustering:
- Cluster people on the basis of their preferences for movies.
- Then cluster movies on the basis of being liked by the same clusters of people.
- Again cluster people based on their preferences for the (newly created clusters of) movies.
- Repeat until an equilibrium is reached.
The above problem is an instance of collaborative filtering, where users collaborate in the task of filtering information to find information of interest.

Other Types of Mining
- Text mining: application of data mining to textual documents.
  - Cluster Web pages to find related pages.
  - Cluster the pages a user has visited to organize their visit history.
  - Classify Web pages automatically into a Web directory.
- Data visualization systems help users examine large volumes of data and detect patterns visually.
  - They can visually encode large amounts of information on a single screen.
  - Humans are very good at detecting visual patterns.

Data Streams
- What are data streams? Continuous streams of data: huge, fast, and changing.
- Why data streams? The arrival speed of the streams and the huge amount of data are beyond our capability to store them, and “real-time” processing is required.
- Window models:
  - Landmark window (the entire data stream)
  - Sliding window
  - Damped window

A Simple Problem: Finding Frequent Items Given a sequence (x1, ..., xN) where each xi ∈ [1, m], and a real number θ between zero and one, find the values xi that occur more than θN times. Naïve algorithm: keep m counters, one per possible item. Note that the number P of frequent items satisfies P × (θN) ≤ N, so P ≤ 1/θ. Problem: N >> m >> 1/θ, so even m counters are too many to keep.

KRP Algorithm (Karp et al., TODS ’03) Keep at most ⌈1/θ⌉ counters. For each arriving item: if it has a counter, increment it; otherwise give it a counter set to 1; if this exceeds ⌈1/θ⌉ counters, decrement every counter and drop those that reach zero, discarding more than 1/θ distinct items at once. Any single item can therefore lose at most N / ⌈1/θ⌉ ≤ θN occurrences, so every truly frequent item survives the pass. (Figure: a worked example with N = 30, θ = 0.35, ⌈1/θ⌉ = 3.)
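
A hypothetical Python sketch of this one-pass scheme (at most ⌈1/θ⌉ counters; the result is a superset of the truly frequent items, so a verifying second pass is still needed):

import math

def krp_candidates(stream, theta):
    # One pass with at most ceil(1/theta) counters; returns a superset of the
    # items occurring more than theta * N times (false positives possible).
    k = math.ceil(1 / theta)
    counters = {}
    for x in stream:
        counters[x] = counters.get(x, 0) + 1
        if len(counters) > k:
            # Decrement everything and drop zeros: each such round discards
            # one occurrence of more than 1/theta distinct items, so it can
            # erase at most theta * N occurrences of any single item.
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return set(counters)

stream = list("ABABCABDAABECA")  # N = 14; 'A' occurs 6 times, and 6/14 > 0.35
print(krp_candidates(stream, theta=0.35))  # candidate set; always contains 'A'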

Enhance the Accuracy The counters undercount: an item may lose up to θN occurrences, so even a frequent item can finish with a small count. Fix: choose a reduction factor ε (e.g., ε = 0.5) and run with ⌈1/(θ × ε)⌉ counters instead of ⌈1/θ⌉. Then any item loses at most (ε × θ)N occurrences, so every item with frequency above θN finishes with a count of at least (θ - ε × θ)N = θ(1 - ε)N. (Figure: worked example with m = 12, N = 30, θ = 0.35, ε = 0.5, giving ⌈1/(θ × ε)⌉ = 6 counters and a reporting threshold of θ(1 - ε) = 0.175.)

Frequent Items for Transactions with Fixed Length When each transaction has a fixed length (here, 2 items), scale the counter budget by the transaction length: with θ = 0.60, use ⌈1/θ⌉ × 2 = 4 counters.