
1 CSE 5331/7331 Fall 2011, DATA MINING: Introductory and Related Topics. Margaret H. Dunham, Department of Computer Science and Engineering, Southern Methodist University. Slides extracted from Data Mining: Introductory and Advanced Topics, Prentice Hall, 2002.

2 Data Mining Outline. PART I – Introduction, Techniques. PART II – Core Topics. PART III – Related Topics.

3 Introduction Outline. Define data mining. Data mining vs. databases. Basic data mining tasks. Data mining development. Data mining issues. Goal: provide an overview of data mining.

4 Introduction. Data is growing at a phenomenal rate. Users expect more sophisticated information. How? Uncover hidden information: DATA MINING.

5 Data Mining Definition. Finding hidden information in a database. Fit data to a model. Similar terms: exploratory data analysis, data driven discovery, deductive learning.

6 Data Mining Algorithm. Objective: fit data to a model – descriptive or predictive. Preference – technique to choose the best model. Search – technique to search the data ("query").

7 Database Processing vs. Data Mining Processing. Database query: well defined, SQL; data: operational data; output: precise, a subset of the database. Data mining query: poorly defined, no precise query language; data: not operational data; output: fuzzy, not a subset of the database.

8 Query Examples. Database: find all customers who have purchased milk; find all credit applicants with last name of Smith; identify customers who have purchased more than $10,000 in the last month. Data mining: find all items which are frequently purchased with milk (association rules); find all credit applicants who are poor credit risks (classification); identify customers with similar buying habits (clustering).

9 Data Mining Models and Tasks

10 Basic Data Mining Tasks. Classification maps data into predefined groups or classes (supervised learning, pattern recognition, prediction). Regression is used to map a data item to a real-valued prediction variable. Clustering groups similar data together into clusters (unsupervised learning, segmentation, partitioning).

11 Basic Data Mining Tasks (cont'd). Summarization maps data into subsets with associated simple descriptions (characterization, generalization). Link analysis uncovers relationships among data (affinity analysis, association rules, sequential analysis, which determines sequential patterns).

12 Ex: Time Series Analysis. Example: stock market. Predict future values. Determine similar patterns over time. Classify behavior.

13 Data Mining vs. KDD. Knowledge Discovery in Databases (KDD): the process of finding useful information and patterns in data. Data mining: the use of algorithms to extract the information and patterns derived by the KDD process.

14 KDD Process. Selection: obtain data from various sources. Preprocessing: cleanse data. Transformation: convert to a common format; transform to a new format. Data mining: obtain desired results. Interpretation/evaluation: present results to the user in a meaningful manner. Modified from [FPSS96C].

15 KDD Process Ex: Web Log. Selection: select log data (dates and locations) to use. Preprocessing: remove identifying URLs; remove error logs. Transformation: sessionize (sort and group). Data mining: identify and count patterns; construct the data structure. Interpretation/evaluation: identify and display frequently accessed sequences. Potential user applications: cache prediction, personalization.

16 Data Mining Development: similarity measures, hierarchical clustering, IR systems, imprecise queries, textual data, web search engines, Bayes theorem, regression analysis, EM algorithm, K-means clustering, time series analysis, neural networks, decision tree algorithms, algorithm design techniques, algorithm analysis, data structures, relational data model, SQL, association rule algorithms, data warehousing, scalability techniques.

17 KDD Issues. Human interaction. Overfitting. Outliers. Interpretation. Visualization. Large datasets. High dimensionality.

18 KDD Issues (cont'd). Multimedia data. Missing data. Irrelevant data. Noisy data. Changing data. Integration. Application.

19 Social Implications of DM. Privacy. Profiling. Unauthorized use.

20 Data Mining Metrics. Usefulness. Return on investment (ROI). Accuracy. Space/time.

21 Visualization Techniques. Graphical. Geometric. Icon-based. Pixel-based. Hierarchical. Hybrid.

22 Models Based on Summarization. Visualization: frequency distribution, mean, variance, median, mode, etc. Box plot.

23 Scatter Diagram

24 Data Mining Techniques Outline. Statistical: point estimation, models based on summarization, Bayes theorem, hypothesis testing, regression and correlation. Similarity measures. Decision trees. Neural networks (activation functions). Genetic algorithms. Goal: provide an overview of basic data mining techniques.

25 Point Estimation. Point estimate: estimate a population parameter. May be made by calculating the parameter for a sample. May be used to predict a value for missing data. Ex: R contains 100 employees; 99 have salary information; the mean salary of these is $50,000; use $50,000 as the value of the remaining employee's salary. Is this a good idea?

26 Estimation Error. Bias: difference between the expected value and the actual value. Mean squared error (MSE): expected value of the squared difference between the estimate and the actual value. Why square? Root mean square error (RMSE).
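
The MSE formula on the original slide is an image and is not reproduced here. As a minimal sketch (plain Python, assuming the estimates and actual values are given as two equal-length lists), the three error measures can be computed as:

```python
def bias(estimates, actuals):
    # Bias: average difference between estimated and actual values.
    return sum(e - a for e, a in zip(estimates, actuals)) / len(estimates)

def mse(estimates, actuals):
    # MSE: mean of the squared differences.
    return sum((e - a) ** 2 for e, a in zip(estimates, actuals)) / len(estimates)

def rmse(estimates, actuals):
    # RMSE: square root of the MSE, in the same units as the data.
    return mse(estimates, actuals) ** 0.5
```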

27 Jackknife Estimate. Jackknife estimate: an estimate of a parameter is obtained by omitting one value from the set of observed values. Ex: estimate of the mean for X = {x1, …, xn}.
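
A small illustrative sketch of the leave-one-out idea for the mean (plain Python; the function names are ours, not from the slides):

```python
def jackknife_means(x):
    # Leave-one-out estimates: the mean of x with each x[i] omitted in turn.
    n = len(x)
    total = sum(x)
    return [(total - xi) / (n - 1) for xi in x]

def jackknife_estimate(x):
    # The jackknife estimate of the mean is the average of the leave-one-out means.
    loo = jackknife_means(x)
    return sum(loo) / len(loo)
```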

28 Maximum Likelihood Estimate (MLE). Obtain parameter estimates that maximize the probability that the sample data occurs for the specific model. Likelihood function: the joint probability of observing the sample data, obtained by multiplying the individual probabilities. Maximize L.

29 MLE Example. A coin is tossed five times: {H,H,H,H,T}. Assuming a perfect coin with H and T equally likely, the likelihood of this sequence is (1/2)^5 = 0.03125. However, if the probability of an H is 0.8, then the likelihood is (0.8)^4(0.2) = 0.08192.

30 MLE Example (cont'd). General likelihood formula for h heads and t tails: L(p) = p^h (1-p)^t. The estimate for p is then 4/5 = 0.8.
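
A quick check of the coin example (plain Python; the grid search is only an illustration of "maximize L", not how an MLE is normally derived):

```python
def likelihood(p, heads, tails):
    # L(p) = p^heads * (1 - p)^tails for independent coin tosses.
    return p ** heads * (1 - p) ** tails

# Five tosses: H,H,H,H,T  ->  4 heads, 1 tail.
print(likelihood(0.5, 4, 1))   # ~0.03125
print(likelihood(0.8, 4, 1))   # ~0.08192

# A coarse grid search confirms the maximum is at p = 4/5.
best_p = max((i / 100 for i in range(1, 100)), key=lambda p: likelihood(p, 4, 1))
print(best_p)                  # 0.8
```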

31 Expectation-Maximization (EM). Solves estimation with incomplete data. Obtain initial estimates for parameters. Iteratively use estimates for the missing data and continue until convergence.

32 EM Example

33 EM Algorithm

34 Bayes Theorem. Posterior probability: P(h1|xi). Prior probability: P(h1). Bayes theorem: P(hj|xi) = P(xi|hj) P(hj) / P(xi). Assigns probabilities of hypotheses given a data value.

35 Bayes Theorem Example. Credit authorizations (hypotheses): h1 = authorize purchase, h2 = authorize after further identification, h3 = do not authorize, h4 = do not authorize but contact police. Assign twelve data values for all combinations of credit and income. From training data: P(h1) = 60%; P(h2) = 20%; P(h3) = 10%; P(h4) = 10%.

36 Bayes Example (cont'd): Training data.

37 Bayes Example (cont'd). Calculate P(xi|hj) and P(xi). Ex: P(x7|h1)=2/6; P(x4|h1)=1/6; P(x2|h1)=2/6; P(x8|h1)=1/6; P(xi|h1)=0 for all other xi. Predict the class for x4: calculate P(hj|x4) for all hj and place x4 in the class with the largest value. Ex: P(h1|x4) = (P(x4|h1) P(h1)) / P(x4) = (1/6)(0.6)/0.1 = 1, so x4 is in class h1.
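
A sketch of the posterior calculation for x4 (plain Python). The priors, P(x4|h1), and P(x4) are the values quoted on the slide; the conditional probabilities for h2–h4 are illustrative placeholders, since the training table itself is not reproduced in this transcript:

```python
# Values quoted on the slide for the credit-authorization example.
prior = {"h1": 0.6, "h2": 0.2, "h3": 0.1, "h4": 0.1}
# Conditional probabilities P(x4 | h_j); the non-h1 entries are placeholders,
# not taken from the (missing) training table.
cond_x4 = {"h1": 1 / 6, "h2": 0.0, "h3": 0.0, "h4": 0.0}
p_x4 = 0.1                      # marginal P(x4) as stated on the slide

posterior = {h: cond_x4[h] * prior[h] / p_x4 for h in prior}
best = max(posterior, key=posterior.get)
print(posterior["h1"])          # (1/6)(0.6)/0.1 = 1.0
print(best)                     # 'h1'
```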

38 Regression. Predict future values based on past values. Linear regression assumes a linear relationship exists: y = c0 + c1x1 + … + cnxn. Find the coefficient values that best fit the data.

39 Linear Regression

40 Correlation. Examine the degree to which the values for two variables behave similarly. Correlation coefficient r: 1 = perfect correlation; -1 = perfect but opposite correlation; 0 = no correlation.
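
A minimal sketch of the correlation coefficient r (plain Python, standard Pearson formula; the sample sequences are made up):

```python
def correlation(x, y):
    # Pearson correlation coefficient r for two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))    # 1.0  (perfect correlation)
print(correlation([1, 2, 3, 4], [8, 6, 4, 2]))    # -1.0 (perfect but opposite)
```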

41 Similarity Measures. Determine the similarity between two objects. Similarity characteristics. Alternatively, a distance measure measures how unlike or dissimilar objects are.

42 Similarity Measures

43 Distance Measures. Measure dissimilarity between objects.

44 Twenty Questions Game

45 Decision Trees. Decision tree (DT): a tree where the root and each internal node is labeled with a question; the arcs represent each possible answer to the associated question; each leaf node represents a prediction of a solution to the problem. Popular technique for classification: the leaf node indicates the class to which the corresponding tuple belongs.

46 Decision Tree Example

47 Decision Trees. A decision tree model is a computational model consisting of three parts: a decision tree; an algorithm to create the tree; an algorithm that applies the tree to data. Creation of the tree is the most difficult part. Processing is basically a search similar to that in a binary search tree (although a DT may not be binary).

48 Decision Tree Algorithm

49 DT Advantages/Disadvantages. Advantages: easy to understand; easy to generate rules. Disadvantages: may suffer from overfitting; classifies by rectangular partitioning; does not easily handle nonnumeric data; can be quite large, so pruning is necessary.

50 Neural Networks. Based on the observed functioning of the human brain (Artificial Neural Networks, ANN). Our view of neural networks is very simplistic: we view a neural network (NN) from a graphical viewpoint. Alternatively, a NN may be viewed from the perspective of matrices. Used in pattern recognition, speech recognition, computer vision, and classification.

51 Neural Networks. A neural network (NN) is a directed graph F = <V, A> with vertices V = {1, 2, …, n} and arcs A = {<i, j> | 1 <= i, j <= n}, with the following restrictions: V is partitioned into a set of input nodes VI, hidden nodes VH, and output nodes VO; the vertices are also partitioned into layers; any arc <i, j> must have node i in layer h-1 and node j in layer h; arc <i, j> is labeled with a numeric value wij; node i is labeled with a function fi.

52 Neural Network Example

53 NN Node

54 NN Activation Functions. Functions associated with nodes in the graph. Output may be in the range [-1, 1] or [0, 1].

55 NN Activation Functions

56 NN Learning. Propagate input values through the graph. Compare the output to the desired output. Adjust the weights in the graph accordingly.

57 Neural Networks. A neural network model is a computational model consisting of three parts: the neural network graph; a learning algorithm that indicates how learning takes place; recall techniques that determine how information is obtained from the network. We will look at propagation as the recall technique.

58 NN Advantages. Learning. Can continue learning even after the training set has been applied. Easy parallelization. Solves many problems.

59 NN Disadvantages. Difficult to understand. May suffer from overfitting. The structure of the graph must be determined a priori. Input values must be numeric. Verification is difficult.

60 Genetic Algorithms. Optimization search type algorithms. Creates an initial feasible solution and iteratively creates new, "better" solutions. Based on human evolution and survival of the fittest. Must represent a solution as an individual. Individual: string I = I1, I2, …, In where Ij is in a given alphabet A. Each character Ij is called a gene. Population: set of individuals.

61 Genetic Algorithms. A genetic algorithm (GA) is a computational model consisting of five parts: a starting set of individuals, P; crossover, a technique to combine two parents to create offspring; mutation, which randomly changes an individual; fitness, which determines the best individuals; an algorithm which applies the crossover and mutation techniques to P iteratively, using the fitness function to determine the best individuals in P to keep.

62 Crossover Examples

63 Genetic Algorithm

64 GA Advantages/Disadvantages. Advantages: easily parallelized. Disadvantages: difficult to understand and explain to end users; abstraction of the problem and the method to represent individuals is quite difficult; determining the fitness function is difficult; determining how to perform crossover and mutation is difficult.

65 Data Mining Outline. PART I – Introduction. PART II – Core Topics: classification, clustering, association rules. PART III – Related Topics.

66 Classification Outline. Classification problem overview. Classification techniques: regression, distance, decision trees, rules, neural networks. Goal: provide an overview of the classification problem and introduce some of the basic algorithms.

67 Classification Problem. Given a database D = {t1, t2, …, tn} and a set of classes C = {C1, …, Cm}, the classification problem is to define a mapping f: D → C where each ti is assigned to one class. Actually divides D into equivalence classes. Prediction is similar, but may be viewed as having an infinite number of classes.

68 Classification Examples. Teachers classify students' grades as A, B, C, D, or F. Identify mushrooms as poisonous or edible. Predict when a river will flood. Identify individuals with credit risks. Speech recognition. Pattern recognition.

69 Classification Ex: Grading. If x >= 90 then grade = A. If 80 <= x < 90 then grade = B. If 70 <= x < 80 then grade = C. If 60 <= x < 70 then grade = D. If x < 60 then grade = F. (The slide's decision tree splits on x at 90, 80, 70, and 60, with leaves A, B, C, D, and F.)

70 Classification Ex: Letter Recognition. View letters as constructed from 5 components (the slide shows example letters A through F).

71 Classification Techniques. Approach: 1. Create a specific model by evaluating training data (or using domain experts' knowledge). 2. Apply the model developed to new data. Classes must be predefined. Most common techniques use DTs, NNs, or are based on distances or statistical methods.

72 Defining Classes: partitioning based and distance based.

73 Issues in Classification. Missing data: ignore, or replace with an assumed value. Measuring performance: classification accuracy on test data; confusion matrix; OC curve.

74 Height Example Data

75 Classification Performance: true positive, true negative, false positive, false negative.

76 Confusion Matrix Example. Using the height data example, with Output1 the correct assignment and Output2 the actual assignment.

77 Operating Characteristic Curve

78 Regression Topics. Linear regression. Nonlinear regression. Logistic regression. Metrics.

79 Remember High School? y = mx + b. You need two points to determine a straight line. You need two points to find values for m and b. THIS IS REGRESSION.

80 Regression. Assume the data fits a predefined function. Determine the best values for the regression coefficients c0, c1, …, cn. Assume an error: y = c0 + c1x1 + … + cnxn + ε. Estimate the error using the mean squared error for the training set.

81 Linear Regression. Assume the data fits a predefined function. Determine the best values for the regression coefficients c0, c1, …, cn. Assume an error: y = c0 + c1x1 + … + cnxn + ε. Estimate the error using the mean squared error for the training set.
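
A minimal least-squares sketch for the single-predictor case y = c0 + c1*x (plain Python; the sample data points are made up for illustration):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = c0 + c1*x with a single predictor.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    c1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    c0 = my - c1 * mx
    return c0, c1

def training_mse(xs, ys, c0, c1):
    # Mean squared error of the fitted line on the training set.
    return sum((y - (c0 + c1 * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
c0, c1 = fit_line(xs, ys)
print(c0, c1, training_mse(xs, ys, c0, c1))
```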

82 Classification Using Linear Regression. Division: use the regression function to divide the area into regions. Prediction: use the regression function to predict a class membership function; input includes the desired class.

83 Division

84 Prediction

85 Linear Regression Poor Fit. Why use the sum of least squares? http://curvefit.com/sum_of_squares.htm. Linear doesn't always work well.

86 Nonlinear Regression. Data does not nicely fit a straight line. Fit the data to a curve. Many possible functions. Not as easy and straightforward as linear regression. How nonlinear regression works: http://curvefit.com/how_nonlin_works.htm

87 P-value. The probability that a variable has a value greater than the observed value. http://en.wikipedia.org/wiki/P-value http://sportsci.org/resource/stats/pvalues.html

88 Covariance. Degree to which two variables vary in the same manner. Correlation is normalized and covariance is not. http://www.ds.unifi.it/VL/VL_EN/expect/expect3.html

89 Residual. Error: difference between desired output and predicted output. May actually use the sum of squares.

90 Classification Using Distance. Place items in the class to which they are "closest". Must determine the distance between an item and a class. Classes represented by: centroid (central value), medoid (representative point), or individual points. Algorithm: KNN.

91 K Nearest Neighbor (KNN). The training set includes classes. Examine the K items nearest to the item to be classified. The new item is placed in the class with the most number of close items. O(q) for each tuple to be classified (here q is the size of the training set).
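
A compact KNN sketch (plain Python; the tiny height-style training set below is made up and is not the course's height table):

```python
from collections import Counter

def knn_classify(training, new_point, k):
    # training: list of (feature_vector, class_label) pairs.
    # Euclidean distance to every training tuple -> O(q) per classification.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda t: dist(t[0], new_point))[:k]
    # Majority vote among the k closest neighbours.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((1.6,), 'Short'), ((1.9,), 'Medium'), ((2.1,), 'Tall'), ((1.7,), 'Short')]
print(knn_classify(train, (1.65,), k=3))   # 'Short'
```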

92 KNN

93 KNN Algorithm

94 Classification Using Decision Trees. Partitioning based: divide the search space into rectangular regions. A tuple is placed into a class based on the region within which it falls. DT approaches differ in how the tree is built: DT induction. Internal nodes are associated with an attribute and arcs with values for that attribute. Algorithms: ID3, C4.5, CART.

95 Decision Tree. Given: D = {t1, …, tn} where ti = <ti1, …, tih>; the database schema contains {A1, A2, …, Ah}; classes C = {C1, …, Cm}. A decision or classification tree is a tree associated with D such that: each internal node is labeled with an attribute Ai; each arc is labeled with a predicate which can be applied to the attribute at its parent; each leaf node is labeled with a class Cj.

96 DT Induction

97 DT Splits Area: splits on Gender (M, F) and on Height.

98 Comparing DTs: balanced vs. deep.

99 DT Issues. Choosing splitting attributes. Ordering of splitting attributes. Splits. Tree structure. Stopping criteria. Training data. Pruning.

100 Decision Tree Induction is often based on Information Theory. So…

101 Information

102 DT Induction. When all the marbles in the bowl are mixed up, little information is given. When the marbles in the bowl are all from one class and those in the other two classes are on either side, more information is given. Use this approach with DT induction!

103 Information/Entropy. Given probabilities p1, p2, …, ps whose sum is 1, entropy is defined as H(p1, …, ps) = Σ pi log(1/pi). Entropy measures the amount of randomness or surprise or uncertainty. Goal in classification: no surprise, entropy = 0.
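
A small sketch of the entropy formula (plain Python). Base-10 logarithms are assumed here because they reproduce the 0.4384 starting-state entropy used in the ID3 example that follows:

```python
import math

def entropy(probs, log=math.log10):
    # H(p1,...,ps) = sum_i p_i * log(1/p_i); zero-probability terms contribute 0.
    return sum(p * log(1.0 / p) for p in probs if p > 0)

# Starting-state entropy from the ID3 example: classes with 4, 8 and 3 of 15 tuples.
print(entropy([4 / 15, 8 / 15, 3 / 15]))   # ~0.4384 with base-10 logs
```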

104 Entropy: plots of log(1/p) and H(p, 1-p).

105 ID3. Creates the tree using information theory concepts and tries to reduce the expected number of comparisons. ID3 chooses the split attribute with the highest information gain: Gain(D, S) = H(D) − Σ P(Di) H(Di).

106 ID3 Example (Output1). Starting state entropy: 4/15 log(15/4) + 8/15 log(15/8) + 3/15 log(15/3) = 0.4384. Gain using gender: female: 3/9 log(9/3) + 6/9 log(9/6) = 0.2764; male: 1/6 log(6/1) + 2/6 log(6/2) + 3/6 log(6/3) = 0.4392; weighted sum: (9/15)(0.2764) + (6/15)(0.4392) = 0.34152; gain: 0.4384 − 0.34152 = 0.09688. Gain using height: 0.4384 − (2/15)(0.301) = 0.3983. Choose height as the first splitting attribute.

107 C4.5. ID3 favors attributes with a large number of divisions. Improved version of ID3: missing data, continuous data, pruning, rules, GainRatio.

108 CART. Creates a binary tree. Uses entropy. Formula to choose the split point s for node t: Φ(s|t) = 2 PL PR Σj |P(Cj|tL) − P(Cj|tR)|, where PL and PR are the probabilities that a tuple in the training set will be on the left or right side of the tree.

109 CART Example. At the start, there are six choices for the split point (right branch on equality): P(Gender) = 2(6/15)(9/15)(2/15 + 4/15 + 3/15) = 0.224; P(1.6) = 0; P(1.7) = 2(2/15)(13/15)(0 + 8/15 + 3/15) = 0.169; P(1.8) = 2(5/15)(10/15)(4/15 + 6/15 + 3/15) = 0.385; P(1.9) = 2(9/15)(6/15)(4/15 + 2/15 + 3/15) = 0.256; P(2.0) = 2(12/15)(3/15)(4/15 + 8/15 + 3/15) = 0.32. Split at 1.8.

110 Classification Using Neural Networks. Typical NN structure for classification: one output node per class; the output value is the class membership function value. Supervised learning. For each tuple in the training set, propagate it through the NN and adjust the weights on edges to improve future classification. Algorithms: propagation, backpropagation, gradient descent.

111 NN Issues. Number of source nodes. Number of hidden layers. Training data. Number of sinks. Interconnections. Weights. Activation functions. Learning technique. When to stop learning.

112 Decision Tree vs. Neural Network

113 Propagation: tuple input to output.

114 NN Propagation Algorithm

115 Example Propagation. © Prentice Hall

116 NN Learning. Adjust weights to perform better with the associated test data. Supervised: use feedback from knowledge of the correct classification. Unsupervised: no knowledge of the correct classification is needed.

117 NN Supervised Learning

118 Supervised Learning. Possible error values, assuming the output from node i is yi but should be di. Change weights on arcs based on the estimated error.

119 NN Backpropagation. Propagate changes to weights backward from the output layer to the input layer. Delta rule: Δwij = c xij (dj − yj). Gradient descent: a technique to modify the weights in the graph.
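
A sketch of a single delta-rule update for one output node (plain Python; the weight, input, and learning-rate values are made up):

```python
def delta_rule_update(weights, inputs, desired, actual, c=0.1):
    # Delta rule for output node j: w_ij <- w_ij + c * x_ij * (d_j - y_j).
    # 'weights' and 'inputs' are parallel lists of w_ij and x_ij; c is the learning rate.
    error = desired - actual
    return [w + c * x * error for w, x in zip(weights, inputs)]

w = [0.2, -0.4, 0.1]
print(delta_rule_update(w, inputs=[1.0, 0.5, -1.0], desired=1.0, actual=0.3))
```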

120 Backpropagation Error

121 Backpropagation Algorithm

122 Gradient Descent

123 Gradient Descent Algorithm

124 Output Layer Learning

125 Hidden Layer Learning

126 Types of NNs. Different NN structures are used for different problems. Perceptron. Self-organizing feature map. Radial basis function network.

127 Perceptron. The perceptron is one of the simplest NNs. No hidden layers.

128 Perceptron Example. Suppose: summation S = 3x1 + 2x2 − 6; activation: if S > 0 then 1 else 0.
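
The slide's summation and threshold activation, written out directly (plain Python):

```python
def perceptron(x1, x2):
    # Summation from the slide: S = 3*x1 + 2*x2 - 6; threshold activation.
    s = 3 * x1 + 2 * x2 - 6
    return 1 if s > 0 else 0

# The output is 1 only when 3*x1 + 2*x2 > 6.
print(perceptron(1, 1))   # S = -1 -> 0
print(perceptron(2, 1))   # S = 2  -> 1
```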

129 Self-Organizing Feature Map (SOFM). Competitive unsupervised learning. Observe how neurons work in the brain: firing impacts the firing of those nearby; neurons far apart inhibit each other; neurons have specific nonoverlapping tasks. Ex: Kohonen network.

130 Kohonen Network

131 Kohonen Network. Competitive layer, viewed as a 2D grid. Similarity between competitive nodes and input nodes: the input vector X is compared with the weight vector on the arcs into each competitive node; similarity is defined based on the dot product. The competitive node most similar to the input "wins". The winning node's weights (as well as surrounding node weights) are increased.

132 Radial Basis Function Network. An RBF function has a Gaussian shape. RBF networks: three layers; hidden layer with a Gaussian activation function; output layer with a linear activation function.

133 Radial Basis Function Network

134 Classification Using Rules. Perform classification using if-then rules. Classification rule: r = <a, c>, with antecedent a and consequent c. Rules may be generated from other techniques (DT, NN) or generated directly. Algorithms: Gen, RX, 1R, PRISM.

135 Generating Rules from DTs

136 Generating Rules Example

137 Generating Rules from NNs

138 1R Algorithm

139 1R Example

140 PRISM Algorithm

141 PRISM Example

142 Decision Tree vs. Rules. A tree has an implied order in which splitting is performed, and is created based on looking at all classes. Rules have no ordering of predicates, and only one class needs to be looked at to generate its rules.

143 Clustering Outline. Clustering problem overview. Clustering techniques: hierarchical algorithms, partitional algorithms, genetic algorithm, clustering large databases. Goal: provide an overview of the clustering problem and introduce some of the basic algorithms.

144 Clustering Examples. Segment a customer database based on similar buying patterns. Group houses in a town into neighborhoods based on similar features. Identify new plant species. Identify similar Web usage patterns.

145 Clustering Example

146 Clustering Houses: size based vs. geographic distance based.

147 Clustering vs. Classification. No prior knowledge: number of clusters, meaning of clusters. Unsupervised learning.

148 Clustering Issues. Outlier handling. Dynamic data. Interpreting results. Evaluating results. Number of clusters. Data to be used. Scalability.

149 Impact of Outliers on Clustering

150 Clustering Problem. Given a database D = {t1, t2, …, tn} of tuples and an integer value k, the clustering problem is to define a mapping f: D → {1, …, k} where each ti is assigned to one cluster Kj, 1 <= j <= k. A cluster Kj contains precisely those tuples mapped to it. Unlike the classification problem, clusters are not known a priori.

151 Types of Clustering. Hierarchical – a nested set of clusters is created. Partitional – one set of clusters is created. Incremental – each element is handled one at a time. Simultaneous – all elements are handled together. Overlapping/non-overlapping.

152 Cluster Parameters

153 Distance Between Clusters. Single link: smallest distance between points. Complete link: largest distance between points. Average link: average distance between points. Centroid: distance between centroids.

154 Hierarchical Clustering. Clusters are created in levels, actually creating sets of clusters at each level. Agglomerative: initially each item is in its own cluster; clusters are iteratively merged together; bottom up. Divisive: initially all items are in one cluster; large clusters are successively divided; top down.

155 Hierarchical Algorithms. Single link. MST single link. Complete link. Average link.

156 Dendrogram. Dendrogram: a tree data structure which illustrates hierarchical clustering techniques. Each level shows the clusters for that level: leaf – individual clusters; root – one cluster. A cluster at level i is the union of its children clusters at level i+1.

157 Levels of Clustering

158 Agglomerative Example. Distance matrix for items A-E (the original slide also shows the resulting dendrogram):
    A  B  C  D  E
A   0  1  2  2  3
B   1  0  2  4  3
C   2  2  0  1  5
D   2  4  1  0  3
E   3  3  5  3  0

159 MST Example. Same A-E distance matrix as above, with its minimum spanning tree drawn over the five items.

160 Agglomerative Algorithm

161 Single Link. View all items with links (distances) between them. Finds maximal connected components in this graph. Two clusters are merged if there is at least one edge which connects them. Uses threshold distances at each level. Could be agglomerative or divisive.
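
A small sketch of single-link agglomerative merging on the A-E distance matrix from the agglomerative example above (plain Python; the loop simply merges the two clusters whose closest members are nearest, printing the merge level each time):

```python
# Single-link agglomerative clustering on the A-E distance matrix above.
dist = {
    ('A', 'B'): 1, ('A', 'C'): 2, ('A', 'D'): 2, ('A', 'E'): 3,
    ('B', 'C'): 2, ('B', 'D'): 4, ('B', 'E'): 3,
    ('C', 'D'): 1, ('C', 'E'): 5, ('D', 'E'): 3,
}
def d(p, q):
    return dist.get((p, q)) or dist.get((q, p))

clusters = [{'A'}, {'B'}, {'C'}, {'D'}, {'E'}]
while len(clusters) > 1:
    # Single link: cluster distance = smallest point-to-point distance.
    i, j = min(((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
               key=lambda ij: min(d(p, q) for p in clusters[ij[0]] for q in clusters[ij[1]]))
    level = min(d(p, q) for p in clusters[i] for q in clusters[j])
    merged = clusters[i] | clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    print(level, sorted(merged))   # merges at levels 1, 1, 2, 3
```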

162 MST Single Link Algorithm

163 Single Link Clustering

164 Partitional Clustering. Nonhierarchical. Creates clusters in one step as opposed to several steps. Since only one set of clusters is output, the user normally has to input the desired number of clusters, k. Usually deals with static sets.

165 Partitional Algorithms. MST. Squared error. K-means. Nearest neighbor. PAM. BEA. GA.

166 MST Algorithm

167 Squared Error: minimized squared error.

168 Squared Error Algorithm

169 K-Means. An initial set of clusters is randomly chosen. Iteratively, items are moved among sets of clusters until the desired set is reached. A high degree of similarity among elements in a cluster is obtained. Given a cluster Ki = {ti1, ti2, …, tim}, the cluster mean is mi = (1/m)(ti1 + … + tim).

170 K-Means Example. Given: {2,4,10,12,3,20,30,11,25}, k=2. Randomly assign means: m1=3, m2=4. K1={2,3}, K2={4,10,12,20,30,11,25}, m1=2.5, m2=16. K1={2,3,4}, K2={10,12,20,30,11,25}, m1=3, m2=18. K1={2,3,4,10}, K2={12,20,30,11,25}, m1=4.75, m2=19.6. K1={2,3,4,10,11,12}, K2={20,30,25}, m1=7, m2=25. Stop, as the clusters with these means are the same.
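
A sketch that replays this example (plain Python, one-dimensional points, starting means 3 and 4 as on the slide):

```python
def kmeans_1d(points, means, iters=10):
    # Assign each point to the nearest mean, recompute the means, repeat until stable.
    for _ in range(iters):
        clusters = [[] for _ in means]
        for p in points:
            idx = min(range(len(means)), key=lambda i: abs(p - means[i]))
            clusters[idx].append(p)
        new_means = [sum(c) / len(c) for c in clusters]
        if new_means == means:
            break
        means = new_means
    return clusters, means

data = [2, 4, 10, 12, 3, 20, 30, 11, 25]
print(kmeans_1d(data, [3.0, 4.0]))
# ([[2, 4, 10, 12, 3, 11], [20, 30, 25]], [7.0, 25.0]) -- matching the slide's result
```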

171 K-Means Algorithm

172 Nearest Neighbor. Items are iteratively merged into the existing clusters that are closest. Incremental. A threshold, t, is used to determine if items are added to existing clusters or a new cluster is created.

173 Nearest Neighbor Algorithm

174 PAM. Partitioning Around Medoids (PAM), also called K-medoids. Handles outliers well. Ordering of input does not impact results. Does not scale well. Each cluster is represented by one item, called the medoid. An initial set of k medoids is randomly chosen.

175 PAM

176 PAM Cost Calculation. At each step in the algorithm, medoids are changed if the overall cost is improved. Cjih – the cost change for an item tj associated with swapping medoid ti with non-medoid th.

177 PAM Algorithm

178 BEA. Bond Energy Algorithm. Database design (physical and logical). Vertical fragmentation. Determine the affinity (bond) between attributes based on common usage. Algorithm outline: 1. Create the affinity matrix. 2. Convert it to the BOND matrix. 3. Create regions of close bonding.

179 BEA. Modified from [OV99].

180 Genetic Algorithm Example. {A,B,C,D,E,F,G,H}. Randomly choose an initial solution: {A,C,E} {B,F} {D,G,H}, or 10101000, 01000100, 00010011. Suppose crossover at point four and choose the 1st and 3rd individuals: 10100011, 01000100, 00011000. What should the termination criteria be?

181 GA Algorithm

182 Clustering Large Databases. Most clustering algorithms assume a large data structure which is memory resident. Clustering may be performed first on a sample of the database and then applied to the entire database. Algorithms: BIRCH, DBSCAN, CURE.

183 Desired Features for Large Databases. One scan (or less) of the DB. Online. Suspendable, stoppable, resumable. Incremental. Works with limited main memory. Different techniques to scan (e.g., sampling). Processes each tuple once.

184 BIRCH. Balanced Iterative Reducing and Clustering using Hierarchies. Incremental, hierarchical, one scan. Saves clustering information in a tree. Each entry in the tree contains information about one cluster. New nodes are inserted in the closest entry in the tree.

185 Clustering Feature. CF triple: (N, LS, SS). N: number of points in the cluster. LS: sum of the points in the cluster. SS: sum of squares of the points in the cluster. CF tree: a balanced search tree; a node has a CF triple for each child; a leaf node represents a cluster and has a CF value for each subcluster in it; a subcluster has a maximum diameter.
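
A sketch of the CF triple for one-dimensional points, showing that inserting a point and merging two entries only touch the three summaries (the class below is our illustration, not BIRCH's actual tree code):

```python
class CF:
    # Clustering Feature triple (N, LS, SS) for 1-D points.
    def __init__(self, n=0, ls=0.0, ss=0.0):
        self.n, self.ls, self.ss = n, ls, ss

    def add(self, x):
        # Inserting a point only updates the three summaries.
        self.n += 1
        self.ls += x
        self.ss += x * x

    def merge(self, other):
        # Two CF entries combine by componentwise addition.
        return CF(self.n + other.n, self.ls + other.ls, self.ss + other.ss)

    def centroid(self):
        return self.ls / self.n

cf = CF()
for x in (1.0, 2.0, 3.0):
    cf.add(x)
print(cf.n, cf.ls, cf.ss, cf.centroid())   # 3 6.0 14.0 2.0
```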

186 BIRCH Algorithm

187 Improve Clusters

188 DBSCAN. Density Based Spatial Clustering of Applications with Noise. Outliers will not affect the creation of a cluster. Input: MinPts – minimum number of points in a cluster; Eps – for each point in a cluster there must be another point in it less than this distance away.

189 DBSCAN Density Concepts. Eps-neighborhood: points within Eps distance of a point. Core point: a point whose Eps-neighborhood is dense enough (MinPts). Directly density-reachable: a point p is directly density-reachable from a point q if the distance is small (Eps) and q is a core point. Density-reachable: a point is density-reachable from another point if there is a path from one to the other consisting of only core points.

190 Density Concepts

191 DBSCAN Algorithm

192 CURE. Clustering Using Representatives. Uses many points to represent a cluster instead of only one. The points will be well scattered.

193 CURE Approach

194 CURE Algorithm

195 CURE for Large Databases

196 Comparison of Clustering Techniques

197 Association Rules Outline. Goal: provide an overview of basic association rule mining techniques. Association rules problem overview: large itemsets. Association rules algorithms: Apriori, sampling, partitioning, parallel algorithms. Comparing techniques. Incremental algorithms. Advanced AR techniques.

198 Example: Market Basket Data. Items frequently purchased together: Bread → PeanutButter. Uses: placement, advertising, sales, coupons. Objective: increase sales and reduce costs.

199 Association Rule Definitions. Set of items: I = {I1, I2, …, Im}. Transactions: D = {t1, t2, …, tn}, tj ⊆ I. Itemset: {Ii1, Ii2, …, Iik} ⊆ I. Support of an itemset: percentage of transactions which contain that itemset. Large (frequent) itemset: an itemset whose number of occurrences is above a threshold.

200 Association Rules Example. I = {Beer, Bread, Jelly, Milk, PeanutButter}. Support of {Bread, PeanutButter} is 60%.

201 Association Rule Definitions. Association rule (AR): an implication X → Y where X, Y ⊆ I and X ∩ Y = ∅. Support of AR (s) X → Y: percentage of transactions that contain X ∪ Y. Confidence of AR (α) X → Y: ratio of the number of transactions that contain X ∪ Y to the number that contain X.
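
A sketch of the support and confidence formulas (plain Python). The transaction table behind the example is not reproduced in this transcript, so the five baskets below are made up, chosen so that support({Bread, PeanutButter}) comes out to the 60% quoted above:

```python
# Illustrative baskets (not the book's table) over I = {Beer, Bread, Jelly, Milk, PeanutButter}.
transactions = [
    {"Bread", "Jelly", "PeanutButter"},
    {"Bread", "PeanutButter"},
    {"Bread", "Milk", "PeanutButter"},
    {"Beer", "Bread"},
    {"Beer", "Milk"},
]

def support(itemset):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y):
    # Confidence of X -> Y: support(X ∪ Y) / support(X).
    return support(x | y) / support(x)

print(support({"Bread", "PeanutButter"}))        # 0.6
print(confidence({"Bread"}, {"PeanutButter"}))   # 0.75
```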

202 Association Rules Ex (cont'd)

203 Association Rule Problem. Given a set of items I = {I1, I2, …, Im} and a database of transactions D = {t1, t2, …, tn} where ti = {Ii1, Ii2, …, Iik} and Iij ∈ I, the association rule problem is to identify all association rules X → Y with a minimum support and confidence. Link analysis. NOTE: the support of X → Y is the same as the support of X ∪ Y.

204 Association Rule Techniques. 1. Find large itemsets. 2. Generate rules from frequent itemsets.

205 Algorithm to Generate ARs

206 Apriori. Large itemset property: any subset of a large itemset is large. Contrapositive: if an itemset is not large, none of its supersets are large.

207 Large Itemset Property

208 Apriori Ex (cont'd): s = 30%, α = 50%.

209 Apriori Algorithm. 1. C1 = itemsets of size one in I; 2. Determine all large itemsets of size 1, L1; 3. i = 1; 4. Repeat: 5. i = i + 1; 6. Ci = Apriori-Gen(Li-1); 7. Count Ci to determine Li; 8. until no more large itemsets are found.
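
A compact level-wise sketch of this loop (plain Python, reusing the illustrative baskets from the support example above; a simplified illustration, not the book's implementation):

```python
from itertools import combinations

def apriori(transactions, min_support):
    # Level-wise Apriori: find L1, then generate and count candidates until none are large.
    n = len(transactions)
    items = {i for t in transactions for i in t}
    support = lambda s: sum(s <= t for t in transactions) / n
    large = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    all_large, k = list(large), 1
    while large:
        k += 1
        # Join step of Apriori-Gen: unions of large itemsets that give size-k candidates.
        candidates = {a | b for a in large for b in large if len(a | b) == k}
        # Prune step: every (k-1)-subset of a candidate must itself be large.
        candidates = {c for c in candidates
                      if all(frozenset(s) in set(large) for s in combinations(c, k - 1))}
        large = [c for c in candidates if support(c) >= min_support]
        all_large += large
    return all_large

baskets = [{"Bread", "PeanutButter", "Jelly"}, {"Bread", "PeanutButter"},
           {"Bread", "Milk", "PeanutButter"}, {"Beer", "Bread"}, {"Beer", "Milk"}]
print(apriori(baskets, min_support=0.6))   # {Bread}, {PeanutButter}, {Bread, PeanutButter}
```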

210 Apriori-Gen. Generate candidates of size i+1 from large itemsets of size i. Approach used: join large itemsets of size i if they agree on i-1 items. May also prune candidates that have subsets that are not large.

211 Apriori-Gen Example

212 Apriori-Gen Example (cont'd)

213 Apriori Adv/Disadv. Advantages: uses the large itemset property; easily parallelized; easy to implement. Disadvantages: assumes the transaction database is memory resident; requires up to m database scans.

214 Sampling. Large databases. Sample the database and apply Apriori to the sample. Potentially Large Itemsets (PL): large itemsets from the sample. Negative border (BD-): a generalization of Apriori-Gen applied to itemsets of varying sizes; the minimal set of itemsets which are not in PL, but whose subsets are all in PL.

215 Negative Border Example: PL and PL ∪ BD-(PL).

216 Sampling Algorithm. 1. Ds = sample of database D; 2. PL = large itemsets in Ds using smalls; 3. C = PL ∪ BD-(PL); 4. Count C in the database using s; 5. ML = large itemsets in BD-(PL); 6. If ML = ∅ then done; 7. else C = repeated application of BD-; 8. Count C in the database.

217 Sampling Example. Find ARs assuming s = 20%. Ds = {t1, t2}. Smalls = 10%. PL = {{Bread}, {Jelly}, {PeanutButter}, {Bread,Jelly}, {Bread,PeanutButter}, {Jelly,PeanutButter}, {Bread,Jelly,PeanutButter}}. BD-(PL) = {{Beer}, {Milk}}. ML = {{Beer}, {Milk}}. Repeated application of BD- generates all remaining itemsets.

218 218CSE 5331/7331 F'2011 Sampling Adv/Disadv Advantages: Advantages: –Reduces number of database scans to one in the best case and two in worst. –Scales better. Disadvantages: Disadvantages: –Potentially large number of candidates in second pass

219 219CSE 5331/7331 F'2011 Partitioning Divide database into partitions D 1,D 2,…,D p Divide database into partitions D 1,D 2,…,D p Apply Apriori to each partition Apply Apriori to each partition Any large itemset must be large in at least one partition. Any large itemset must be large in at least one partition.

220 220CSE 5331/7331 F'2011 Partitioning Algorithm 1. Divide D into partitions D1, D2, …, Dp; 2. For i = 1 to p do 3. Li = Apriori(Di); 4. C = L1 ∪ … ∪ Lp; 5. Count C on D to generate L;
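A rough Python sketch of the partitioning idea, using a brute-force local large-itemset routine for the per-partition step; the partition boundaries, function names, and thresholds below are illustrative assumptions, not the book's implementation.

from itertools import combinations

def large_itemsets(transactions, minsup):
    # Brute-force enumeration of large itemsets within one (small) partition
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    large = []
    for k in range(1, len(items) + 1):
        level = [frozenset(c) for c in combinations(items, k)
                 if sum(1 for t in transactions if set(c) <= t) / n >= minsup]
        if not level:
            break   # Apriori property: no larger itemset can be large either
        large.extend(level)
    return large

def partition_ar(transactions, p, minsup):
    n = len(transactions)
    size = (n + p - 1) // p
    parts = [transactions[i:i + size] for i in range(0, n, size)]
    # C = L1 u ... u Lp: a globally large itemset must be large in some partition
    C = set().union(*(large_itemsets(part, minsup) for part in parts))
    # Single second scan: count every candidate against the full database
    return [c for c in C
            if sum(1 for t in transactions if c <= t) / n >= minsup]

transactions = [{"Bread", "Jelly", "PeanutButter"}, {"Bread", "PeanutButter"},
                {"Bread", "Milk", "PeanutButter"}, {"Beer", "Bread"}, {"Beer", "Milk"}]
print(partition_ar(transactions, p=2, minsup=0.4))
# same large itemsets as a direct Apriori run on the full database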

221 221CSE 5331/7331 F'2011 Partitioning Example D1, D2, s = 10% L1 = {{Bread}, {Jelly}, {PeanutButter}, {Bread,Jelly}, {Bread,PeanutButter}, {Jelly,PeanutButter}, {Bread,Jelly,PeanutButter}} L2 = {{Bread}, {Milk}, {PeanutButter}, {Bread,Milk}, {Bread,PeanutButter}, {Milk,PeanutButter}, {Bread,Milk,PeanutButter}, {Beer}, {Beer,Bread}, {Beer,Milk}}

222 222CSE 5331/7331 F'2011 Partitioning Adv/Disadv Advantages: Advantages: –Adapts to available main memory –Easily parallelized –Maximum number of database scans is two. Disadvantages: Disadvantages: –May have many candidates during second scan.

223 223CSE 5331/7331 F'2011 Parallelizing AR Algorithms Based on Apriori Based on Apriori Techniques differ: Techniques differ: – What is counted at each site – How data (transactions) are distributed Data Parallelism Data Parallelism –Data partitioned –Count Distribution Algorithm Task Parallelism Task Parallelism –Data and candidates partitioned –Data Distribution Algorithm

224 224CSE 5331/7331 F'2011 Count Distribution Algorithm (CDA) 1. Place data partition at each site. 2. In Parallel at each site do 3. C1 = Itemsets of size one in I; 4. Count C1; 5. Broadcast counts to all sites; 6. Determine global large itemsets of size 1, L1; 7. i = 1; 8. Repeat 9. i = i + 1; 10. Ci = Apriori-Gen(Li-1); 11. Count Ci; 12. Broadcast counts to all sites; 13. Determine global large itemsets of size i, Li; 14. until no more large itemsets found;

225 225CSE 5331/7331 F'2011 CDA Example

226 226CSE 5331/7331 F'2011 Data Distribution Algorithm (DDA) 1. Place data partition at each site. 2. In Parallel at each site do 3. Determine local candidates of size 1 to count; 4. Broadcast local transactions to other sites; 5. Count local candidates of size 1 on all data; 6. Determine large itemsets of size 1 for local candidates; 7. Broadcast large itemsets to all sites; 8. Determine L1; 9. i = 1; 10. Repeat 11. i = i + 1; 12. Ci = Apriori-Gen(Li-1); 13. Determine local candidates of size i to count; 14. Count, broadcast, and find Li; 15. until no more large itemsets found;

227 227CSE 5331/7331 F'2011 DDA Example

228 228CSE 5331/7331 F'2011 Comparing AR Techniques Target Target Type Type Data Type Data Type Data Source Data Source Technique Technique Itemset Strategy and Data Structure Itemset Strategy and Data Structure Transaction Strategy and Data Structure Transaction Strategy and Data Structure Optimization Optimization Architecture Architecture Parallelism Strategy Parallelism Strategy

229 229CSE 5331/7331 F'2011 Comparison of AR Techniques

230 230CSE 5331/7331 F'2011 Hash Tree

231 231CSE 5331/7331 F'2011 Incremental Association Rules Generate ARs in a dynamic database. Problem: existing algorithms assume a static database. Objective: knowing the large itemsets for D, find the large itemsets for D ∪ ΔD. An itemset must be large in either D or ΔD. Save the Li and their counts.

232 232CSE 5331/7331 F'2011 Note on ARs Many applications outside market basket data analysis Many applications outside market basket data analysis –Prediction (telecom switch failure) –Web usage mining Many different types of association rules Many different types of association rules –Temporal –Spatial –Causal

233 233CSE 5331/7331 F'2011 Advanced AR Techniques Generalized Association Rules Generalized Association Rules Multiple-Level Association Rules Multiple-Level Association Rules Quantitative Association Rules Quantitative Association Rules Using multiple minimum supports Using multiple minimum supports Correlation Rules Correlation Rules

234 234CSE 5331/7331 F'2011 Measuring Quality of Rules Support Support Confidence Confidence Interest Interest Conviction Conviction Chi Squared Test Chi Squared Test

235 235CSE 5331/7331 F'2011 Data Mining Outline PART I - Introduction PART II – Core Topics – –Classification – –Clustering – –Association Rules PART III – Related Topics PART III – Related Topics

236 236CSE 5331/7331 F'2011 Related Topics Outline Database/OLTP Systems Database/OLTP Systems Fuzzy Sets and Logic Fuzzy Sets and Logic Information Retrieval(Web Search Engines) Information Retrieval(Web Search Engines) Dimensional Modeling Dimensional Modeling Data Warehousing Data Warehousing OLAP/DSS OLAP/DSS Statistics Statistics Machine Learning Machine Learning Pattern Matching Pattern Matching Goal: Examine some areas which are related to data mining.

237 237CSE 5331/7331 F'2011 DB & OLTP Systems Schema Schema –(ID,Name,Address,Salary,JobNo) Data Model Data Model –ER –Relational Transaction Transaction Query: Query: SELECT Name FROM T WHERE Salary > 100000 DM: Only imprecise queries

238 CSE 5331/7331 F'2011 238 Fuzzy Sets Outline Introduction/Overview Material for these slides obtained from: Data Mining Introductory and Advanced Topics by Margaret H. Dunham http://www.engr.smu.edu/~mhd/book Introduction to “Type-2 Fuzzy Logic” by Jenny Carter Introduction to “Type-2 Fuzzy Logic” by Jenny Carter http://www.cse.dmu.ac.uk/~jennyc/

239 239CSE 5331/7331 F'2011 Fuzzy Sets and Logic Fuzzy Set: Set membership function is a real valued function with output in the range [0,1]. Fuzzy Set: Set membership function is a real valued function with output in the range [0,1]. f(x): Probability x is in F. f(x): Probability x is in F. 1-f(x): Probability x is not in F. 1-f(x): Probability x is not in F. EX: EX: –T = {x | x is a person and x is tall} –Let f(x) be the probability that x is tall –Here f is the membership function DM: Prediction and classification are fuzzy.

240 240CSE 5331/7331 F'2011 Fuzzy Sets and Logic Fuzzy Set: Set membership function is a real valued function with output in the range [0,1]. Fuzzy Set: Set membership function is a real valued function with output in the range [0,1]. f(x): Probability x is in F. f(x): Probability x is in F. 1-f(x): Probability x is not in F. 1-f(x): Probability x is not in F. EX: EX: –T = {x | x is a person and x is tall} –Let f(x) be the probability that x is tall –Here f is the membership function
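To make the "tall" example concrete, a small sketch of a piecewise-linear membership function; the 150 cm and 190 cm breakpoints are assumptions for illustration only, not values from the slides.

def tall(height_cm):
    # Piecewise-linear grade of membership in [0, 1]
    if height_cm <= 150:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 150) / (190 - 150)

for h in (140, 160, 175, 195):
    print(h, round(tall(h), 2))   # 0.0, 0.25, 0.62, 1.0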

241 241CSE 5331/7331 F'2011 Fuzzy Sets

242 242CSE 5331/7331 F'2011 IR is Fuzzy (figure: Simple vs. Fuzzy acceptance, with Accept/Reject regions)

243 243CSE 5331/7331 F'2011 Fuzzy Set Theory A fuzzy subset A of U is characterized by a membership function μ(A,u): U → [0,1] which associates with each element u of U a number μ(u) in the interval [0,1]. Definition: Let A and B be two fuzzy subsets of U, and let ¬A be the complement of A. Then: » μ(¬A,u) = 1 − μ(A,u) » μ(A ∪ B,u) = max(μ(A,u), μ(B,u)) » μ(A ∩ B,u) = min(μ(A,u), μ(B,u))
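A small sketch of these three operations with membership functions represented as plain Python callables; the "tall" and "short" functions below are made-up examples, not sets defined in the slides.

def f_not(mu_a):
    # Complement: 1 - membership
    return lambda u: 1 - mu_a(u)

def f_union(mu_a, mu_b):
    # Union: max of the memberships
    return lambda u: max(mu_a(u), mu_b(u))

def f_intersect(mu_a, mu_b):
    # Intersection: min of the memberships
    return lambda u: min(mu_a(u), mu_b(u))

tall = lambda h: min(1.0, max(0.0, (h - 150) / 40))    # assumed example membership
short = lambda h: min(1.0, max(0.0, (170 - h) / 40))   # assumed example membership
print(f_union(tall, short)(165))              # max(0.375, 0.125) = 0.375
print(f_intersect(tall, f_not(short))(165))   # min(0.375, 0.875) = 0.375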

244 244CSE 5331/7331 F'2011 The world is imprecise. Mathematical and Statistical techniques often unsatisfactory. Mathematical and Statistical techniques often unsatisfactory. –Experts make decisions with imprecise data in an uncertain world. –They work with knowledge that is rarely defined mathematically or algorithmically but uses vague terminology with words. Fuzzy logic is able to use vagueness to achieve a precise answer. By considering shades of grey and all factors simultaneously, you get a better answer, one that is more suited to the situation. Fuzzy logic is able to use vagueness to achieve a precise answer. By considering shades of grey and all factors simultaneously, you get a better answer, one that is more suited to the situation. © Jenny Carter

245 245CSE 5331/7331 F'2011 Fuzzy Logic then... is particularly good at handling uncertainty, vagueness and imprecision. is particularly good at handling uncertainty, vagueness and imprecision. especially useful where a problem can be described linguistically (using words). especially useful where a problem can be described linguistically (using words). Applications include: Applications include: –robotics –washing machine control –nuclear reactors –focusing a camcorder –information retrieval –train scheduling © Jenny Carter

246 246CSE 5331/7331 F'2011 Crisp Sets Different heights have same ‘tallness’ Different heights have same ‘tallness’ © Jenny Carter

247 247CSE 5331/7331 F'2011 Fuzzy Sets The shape you see is known as the membership function The shape you see is known as the membership function © Jenny Carter

248 248CSE 5331/7331 F'2011 Fuzzy Sets Shows two membership functions: ‘tall’ and ‘short’ © Jenny Carter

249 249CSE 5331/7331 F'2011Notation For the member, x, of a discrete set with membership µ we use the notation µ/x. In other words, x is a member of the set to degree µ. Discrete sets are written as: A = µ1/x1 + µ2/x2 + … + µn/xn, or A = Σi µi/xi, where x1, x2, …, xn are members of the set A and µ1, µ2, …, µn are their degrees of membership. A continuous fuzzy set A is written as: A = ∫ µA(x)/x. © Jenny Carter

250 250CSE 5331/7331 F'2011 Fuzzy Sets The members of a fuzzy set are members to some degree, known as a membership grade or degree of membership. The members of a fuzzy set are members to some degree, known as a membership grade or degree of membership. The membership grade is the degree of belonging to the fuzzy set. The larger the number (in [0,1]) the more the degree of belonging. (N.B. This is not a probability) The membership grade is the degree of belonging to the fuzzy set. The larger the number (in [0,1]) the more the degree of belonging. (N.B. This is not a probability) The translation from x to µ A (x) is known as fuzzification. The translation from x to µ A (x) is known as fuzzification. A fuzzy set is either continuous or discrete. A fuzzy set is either continuous or discrete. Graphical representation of membership functions is very useful. Graphical representation of membership functions is very useful. © Jenny Carter

251 251CSE 5331/7331 F'2011 Fuzzy Sets - Example Again, notice the overlapping of the sets reflecting the real world more accurately than if we were using a traditional approach. © Jenny Carter

252 252CSE 5331/7331 F'2011 Rules Rules often of the form: Rules often of the form: IF x is A THEN y is B where A and B are fuzzy sets defined on the universes of discourse X and Y respectively. –if pressure is high then volume is small; –if a tomato is red then a tomato is ripe. where high, small, red and ripe are fuzzy sets. © Jenny Carter

253 CSE 5331/7331 F'2011 253 Information Retrieval Outline Introduction/Overview Material for these slides obtained from: Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto http://www.sims.berkeley.edu/~hearst/irbook/ Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto http://www.sims.berkeley.edu/~hearst/irbook/ http://www.sims.berkeley.edu/~hearst/irbook/ Data Mining Introductory and Advanced Topics by Margaret H. Dunham Data Mining Introductory and Advanced Topics by Margaret H. Dunham http://www.engr.smu.edu/~mhd/book

254 254CSE 5331/7331 F'2011 Information Retrieval Information Retrieval (IR): retrieving desired information from textual data. Information Retrieval (IR): retrieving desired information from textual data. Library Science Library Science Digital Libraries Digital Libraries Web Search Engines Web Search Engines Traditionally keyword based Traditionally keyword based Sample query: Sample query: Find all documents about “data mining”. DM: Similarity measures; Mine text/Web data. Mine text/Web data.

255 255CSE 5331/7331 F'2011 Information Retrieval Information Retrieval (IR): retrieving desired information from textual data. Information Retrieval (IR): retrieving desired information from textual data. Library Science Library Science Digital Libraries Digital Libraries Web Search Engines Web Search Engines Traditionally keyword based Traditionally keyword based Sample query: Sample query: Find all documents about “data mining”.

256 256CSE 5331/7331 F'2011 DB vs IR Records (tuples) vs. documents Well defined results vs. fuzzy results DB grew out of files and traditional business systems IR grew out of library science and the need to categorize/group/access books/articles

257 257CSE 5331/7331 F'2011 DB vs IR (cont'd) Data retrieval: which docs contain a set of keywords? Well defined semantics; a single erroneous object implies failure! Information retrieval: information about a subject or topic; semantics is frequently loose; small errors are tolerated. IR system: interpret contents of information items; generate a ranking which reflects relevance; the notion of relevance is most important.

258 258CSE 5331/7331 F'2011 Motivation IR in the last 20 years: classification and categorization; systems and languages; user interfaces and visualization. Still, the area was seen as of narrow interest. The advent of the Web changed this perception once and for all: universal repository of knowledge; free (low cost) universal access; no central editorial board; many problems though, with IR seen as key to finding the solutions!

259 259CSE 5331/7331 F'2011 Basic Concepts Logical view of the documents: document representation is viewed as a continuum, and the logical view of docs may shift from full text toward index terms as text operations are applied (accents/spacing, stopwords, noun groups, stemming, manual indexing). © Baeza-Yates and Ribeiro-Neto

260 260CSE 5331/7331 F'2011 The Retrieval Process (figure: user interface, text operations, query operations, indexing, searching, and ranking; the user need, query, and feedback produce retrieved and ranked docs via the logical view, the inverted file, the DB manager module, and the text database). © Baeza-Yates and Ribeiro-Neto

261 261CSE 5331/7331 F'2011 Information Retrieval Similarity: measure of how close a query is to a document. Documents which are “close enough” are retrieved. Metrics: Precision = |Relevant and Retrieved| / |Retrieved|; Recall = |Relevant and Retrieved| / |Relevant|
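A minimal sketch of the two metrics as set computations; the document-id sets below are invented for illustration.

def precision_recall(retrieved, relevant):
    # Precision: fraction of retrieved docs that are relevant
    # Recall: fraction of relevant docs that were retrieved
    hit = len(retrieved & relevant)
    return hit / len(retrieved), hit / len(relevant)

retrieved = {1, 2, 3, 4, 5}
relevant = {2, 4, 6, 8}
p, r = precision_recall(retrieved, relevant)
print(p, r)   # 0.4 0.5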

262 262CSE 5331/7331 F'2011 Indexing IR systems usually adopt index terms to process queries IR systems usually adopt index terms to process queries Index term: Index term: –a keyword or group of selected words –any word (more general) Stemming might be used: Stemming might be used: –connect: connecting, connection, connections An inverted file is built for the chosen index terms An inverted file is built for the chosen index terms Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

263 263CSE 5331/7331 F'2011 Indexing (figure: docs are represented by index terms, the information need by a query; retrieval matches and ranks them). © Baeza-Yates and Ribeiro-Neto

264 264CSE 5331/7331 F'2011 Inverted Files There are two main elements: vocabulary – the set of unique terms; occurrences – where those terms appear. The occurrences can be recorded as term positions or byte offsets. Using term positions is good to retrieve concepts such as proximity, whereas byte offsets allow direct access. © Baeza-Yates and Ribeiro-Neto

265 265CSE 5331/7331 F'2011 Inverted Files The number of indexed terms is often several orders of magnitude smaller when compared to the documents size (Mbs vs Gbs) The number of indexed terms is often several orders of magnitude smaller when compared to the documents size (Mbs vs Gbs) The space consumed by the occurrence list is not trivial. Each time the term appears it must be added to a list in the inverted file The space consumed by the occurrence list is not trivial. Each time the term appears it must be added to a list in the inverted file That may lead to a quite considerable index overhead That may lead to a quite considerable index overhead Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

266 266CSE 5331/7331 F'2011 Example Text: “That house has a garden. The garden has many flowers. The flowers are beautiful” (word offsets: 1 6 12 16 18 25 29 36 40 45 54 58 66 70) Inverted file (Vocabulary → Occurrences): beautiful → 70; flowers → 45, 58; garden → 18, 29; house → 6 © Baeza-Yates and Ribeiro-Neto
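A small sketch of building such an inverted file of offsets for the text above; offsets here are 0-based character positions (the figure counts from 1, so the numbers differ slightly), and the stopword list is an assumption made for the example.

import re
from collections import defaultdict

def inverted_file(text, stopwords=frozenset({"that", "has", "a", "the", "are", "many"})):
    index = defaultdict(list)
    for m in re.finditer(r"\w+", text):
        word = m.group().lower()
        if word not in stopwords:
            index[word].append(m.start())   # character offset of this occurrence
    return dict(index)

text = "That house has a garden. The garden has many flowers. The flowers are beautiful."
print(inverted_file(text))
# {'house': [5], 'garden': [17, 29], 'flowers': [45, 58], 'beautiful': [70]}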

267 267CSE 5331/7331 F'2011 Ranking A ranking is an ordering of the documents retrieved that (hopefully) reflects the relevance of the documents to the query. A ranking is based on fundamental premises regarding the notion of relevance, such as: common sets of index terms; sharing of weighted terms; likelihood of relevance. Each set of premises leads to a distinct IR model. © Baeza-Yates and Ribeiro-Neto

268 268CSE 5331/7331 F'2011 Classic IR Models - Basic Concepts Each document represented by a set of representative keywords or index terms Each document represented by a set of representative keywords or index terms An index term is a document word useful for remembering the document main themes An index term is a document word useful for remembering the document main themes Usually, index terms are nouns because nouns have meaning by themselves Usually, index terms are nouns because nouns have meaning by themselves However, search engines assume that all words are index terms (full text representation) However, search engines assume that all words are index terms (full text representation) Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

269 269CSE 5331/7331 F'2011 Classic IR Models - Basic Concepts The importance of the index terms is represented by weights associated to them The importance of the index terms is represented by weights associated to them k i - an index term k i - an index term d j - a document d j - a document w ij - a weight associated with (k i,d j ) w ij - a weight associated with (k i,d j ) The weight w ij quantifies the importance of the index term for describing the document contents The weight w ij quantifies the importance of the index term for describing the document contents Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

270 270CSE 5331/7331 F'2011 Classic IR Models - Basic Concepts –t is the total number of index terms –K = {k 1, k 2, …, k t } is the set of all index terms –w ij >= 0 is a weight associated with (k i,d j ) –w ij = 0 indicates that term does not belong to doc –d j = (w 1j, w 2j, …, w tj ) is a weighted vector associated with the document d j –g i (d j ) = w ij is a function which returns the weight associated with pair (k i,d j ) Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

271 271CSE 5331/7331 F'2011 The Boolean Model Simple model based on set theory Queries specified as boolean expressions – precise semantics and neat formalism Terms are either present or absent. Thus, wij ∈ {0,1} Consider – q = ka ∧ (kb ∨ ¬kc) – qdnf = (1,1,1) ∨ (1,1,0) ∨ (1,0,0) – qcc = (1,1,0) is a conjunctive component © Baeza-Yates and Ribeiro-Neto

272 272CSE 5331/7331 F'2011 The Vector Model Use of binary weights is too limiting Use of binary weights is too limiting Non-binary weights provide consideration for partial matches Non-binary weights provide consideration for partial matches These term weights are used to compute a degree of similarity between a query and each document These term weights are used to compute a degree of similarity between a query and each document Ranked set of documents provides for better matching Ranked set of documents provides for better matching Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

273 273CSE 5331/7331 F'2011 The Vector Model w ij > 0 whenever k i appears in d j w ij > 0 whenever k i appears in d j w iq >= 0 associated with the pair (k i,q) w iq >= 0 associated with the pair (k i,q) d j = (w 1j, w 2j,..., w tj ) d j = (w 1j, w 2j,..., w tj ) q = (w 1q, w 2q,..., w tq ) q = (w 1q, w 2q,..., w tq ) To each term k i is associated a unitary vector i To each term k i is associated a unitary vector i The unitary vectors i and j are assumed to be orthonormal (i.e., index terms are assumed to occur independently within the documents) The unitary vectors i and j are assumed to be orthonormal (i.e., index terms are assumed to occur independently within the documents) The t unitary vectors i form an orthonormal basis for a t-dimensional space where queries and documents are represented as weighted vectors The t unitary vectors i form an orthonormal basis for a t-dimensional space where queries and documents are represented as weighted vectors Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto
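A minimal sketch of vector-model ranking by cosine similarity between weighted term vectors; the documents, query, and raw term-frequency weights below are invented for illustration.

import math

def cosine(d, q):
    # Cosine of the angle between two sparse term-weight vectors (dicts)
    dot = sum(d[k] * q.get(k, 0.0) for k in d)
    norm_d = math.sqrt(sum(w * w for w in d.values()))
    norm_q = math.sqrt(sum(w * w for w in q.values()))
    return dot / (norm_d * norm_q) if norm_d and norm_q else 0.0

docs = {"d1": {"data": 3, "mining": 2, "rules": 1},
        "d2": {"fuzzy": 2, "sets": 3},
        "d3": {"data": 1, "warehouse": 2}}
query = {"data": 1, "mining": 1}
ranking = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
print(ranking)   # ['d1', 'd3', 'd2']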

274 274CSE 5331/7331 F'2011 Query Languages Keyword Based Keyword Based Boolean Boolean Weighted Boolean Weighted Boolean Context Based (Phrasal & Proximity) Context Based (Phrasal & Proximity) Pattern Matching Pattern Matching Structural Queries Structural Queries Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

275 275CSE 5331/7331 F'2011 Keyword Based Queries Basic Queries Basic Queries –Single word –Multiple words Context Queries Context Queries –Phrase –Proximity Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

276 276CSE 5331/7331 F'2011 Boolean Queries Keywords combined with Boolean operators: Keywords combined with Boolean operators: –OR: (e 1 OR e 2 ) –AND: (e 1 AND e 2 ) –BUT: (e 1 BUT e 2 ) Satisfy e 1 but not e 2 Negation only allowed using BUT to allow efficient use of inverted index by filtering another efficiently retrievable set. Negation only allowed using BUT to allow efficient use of inverted index by filtering another efficiently retrievable set. Naïve users have trouble with Boolean logic. Naïve users have trouble with Boolean logic. Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

277 277CSE 5331/7331 F'2011 Boolean Retrieval with Inverted Indices Primitive keyword: Retrieve containing documents using the inverted index. Primitive keyword: Retrieve containing documents using the inverted index. OR: Recursively retrieve e 1 and e 2 and take union of results. OR: Recursively retrieve e 1 and e 2 and take union of results. AND: Recursively retrieve e 1 and e 2 and take intersection of results. AND: Recursively retrieve e 1 and e 2 and take intersection of results. BUT: Recursively retrieve e 1 and e 2 and take set difference of results. BUT: Recursively retrieve e 1 and e 2 and take set difference of results. Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto
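A sketch of these operations as set algebra over posting lists; the tiny index below is an assumption for illustration, not data from the slides.

# Boolean retrieval over posting sets: OR = union, AND = intersection, BUT = difference
index = {"data":   {1, 2, 4},
         "mining": {1, 4, 5},
         "fuzzy":  {2, 3}}

def postings(term):
    return index.get(term, set())

print(postings("data") | postings("fuzzy"))    # OR  -> {1, 2, 3, 4}
print(postings("data") & postings("mining"))   # AND -> {1, 4}
print(postings("data") - postings("mining"))   # BUT -> {2}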

278 278CSE 5331/7331 F'2011 Phrasal Queries Retrieve documents with a specific phrase (ordered list of contiguous words) Retrieve documents with a specific phrase (ordered list of contiguous words) –“information theory” May allow intervening stop words and/or stemming. May allow intervening stop words and/or stemming. –“buy camera” matches: “buy a camera” “buying the cameras” etc. Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

279 279CSE 5331/7331 F'2011 Phrasal Retrieval with Inverted Indices Must have an inverted index that also stores positions of each keyword in a document. Must have an inverted index that also stores positions of each keyword in a document. Retrieve documents and positions for each individual word, intersect documents, and then finally check for ordered contiguity of keyword positions. Retrieve documents and positions for each individual word, intersect documents, and then finally check for ordered contiguity of keyword positions. Best to start contiguity check with the least common word in the phrase. Best to start contiguity check with the least common word in the phrase. Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto
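A sketch of phrase matching over a positional index, assuming positions are word numbers within each document; the index contents and function name are illustrative assumptions.

# positions[word][doc_id] = list of word positions of that word in that document
positions = {"information": {1: [3, 17], 2: [5]},
             "theory":      {1: [18],    3: [9]}}

def phrase_docs(w1, w2):
    # Intersect the documents, then keep those where w2 appears right after w1
    docs = positions.get(w1, {}).keys() & positions.get(w2, {}).keys()
    return [d for d in docs
            if any(p + 1 in positions[w2][d] for p in positions[w1][d])]

print(phrase_docs("information", "theory"))   # [1]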

280 280CSE 5331/7331 F'2011 Proximity Queries List of words with specific maximal distance constraints between terms. List of words with specific maximal distance constraints between terms. Example: “dogs” and “race” within 4 words match “…dogs will begin the race…” Example: “dogs” and “race” within 4 words match “…dogs will begin the race…” May also perform stemming and/or not count stop words. May also perform stemming and/or not count stop words. Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

281 281CSE 5331/7331 F'2011 Pattern Matching Allow queries that match strings rather than word tokens. Allow queries that match strings rather than word tokens. Requires more sophisticated data structures and algorithms than inverted indices to retrieve efficiently. Requires more sophisticated data structures and algorithms than inverted indices to retrieve efficiently. Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

282 282CSE 5331/7331 F'2011 Simple Patterns Prefixes: Pattern that matches start of word. Prefixes: Pattern that matches start of word. –“anti” matches “antiquity”, “antibody”, etc. Suffixes: Pattern that matches end of word: Suffixes: Pattern that matches end of word: –“ix” matches “fix”, “matrix”, etc. Substrings: Pattern that matches arbitrary subsequence of characters. Substrings: Pattern that matches arbitrary subsequence of characters. – “rapt” matches “enrapture”, “velociraptor” etc. Ranges: Pair of strings that matches any word lexicographically (alphabetically) between them. Ranges: Pair of strings that matches any word lexicographically (alphabetically) between them. –“tin” to “tix” matches “tip”, “tire”, “title”, etc. Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto

283 283CSE 5331/7331 F'2011 IR Query Result Measures and Classification (figure comparing IR and Classification result measures)

284 284CSE 5331/7331 F'2011 Dimensional Modeling View data in a hierarchical manner more as business executives might View data in a hierarchical manner more as business executives might Useful in decision support systems and mining Useful in decision support systems and mining Dimension: collection of logically related attributes; axis for modeling data. Dimension: collection of logically related attributes; axis for modeling data. Facts: data stored Facts: data stored Ex: Dimensions – products, locations, date Ex: Dimensions – products, locations, date Facts – quantity, unit price Facts – quantity, unit price DM: May view data as dimensional.

285 285CSE 5331/7331 F'2011 Dimensional Modeling View data in a hierarchical manner more as business executives might View data in a hierarchical manner more as business executives might Useful in decision support systems and mining Useful in decision support systems and mining Dimension: collection of logically related attributes; axis for modeling data. Dimension: collection of logically related attributes; axis for modeling data. Facts: data stored Facts: data stored Ex: Dimensions – products, locations, date Ex: Dimensions – products, locations, date Facts – quantity, unit price Facts – quantity, unit price

286 286CSE 5331/7331 F'2011 Aggregation Hierarchies

287 287CSE 5331/7331 F'2011 Multidimensional Schemas Star Schema shows facts and dimensions – Center of the star has facts shown in fact tables – Outside of the facts, each dimension is shown separately in dimension tables – Access to the fact table from a dimension table via join: SELECT Quantity, Price FROM Facts, Location WHERE (Facts.LocationID = Location.LocationID) AND (Location.City = ‘Dallas’) – Viewed as relations; the problems are the volume of data and indexing

288 288CSE 5331/7331 F'2011 Star Schema

289 289CSE 5331/7331 F'2011 Flattened Star

290 290CSE 5331/7331 F'2011 Normalized Star

291 291CSE 5331/7331 F'2011 Snowflake Schema

292 292CSE 5331/7331 F'2011 OLAP Online Analytic Processing (OLAP): provides more complex queries than OLTP. Online Analytic Processing (OLAP): provides more complex queries than OLTP. OnLine Transaction Processing (OLTP): traditional database/transaction processing. OnLine Transaction Processing (OLTP): traditional database/transaction processing. Dimensional data; cube view Dimensional data; cube view Visualization of operations: Visualization of operations: –Slice: examine sub-cube. –Dice: rotate cube to look at another dimension. –Roll Up/Drill Down DM: May use OLAP queries.

293 293CSE 5331/7331 F'2011 OLAP Introduction OLAP by Example http://perso.orange.fr/bernard.lupin/english/index.htm What is OLAP? http://www.olapreport.com/fasmi.htm

294 294CSE 5331/7331 F'2011 OLAP Online Analytic Processing (OLAP): provides more complex queries than OLTP. OnLine Transaction Processing (OLTP): traditional database/transaction processing. Dimensional data; cube view Support ad hoc querying Require analysis of data Can be thought of as an extension of some of the basic aggregation functions available in SQL OLAP tools may be used in DSS systems Multidimensional view is fundamental

295 295CSE 5331/7331 F'2011 OLAP Implementations MOLAP (Multidimensional OLAP) – Multidimensional Database (MDD) – Specialized DBMS and software system capable of supporting the multidimensional data directly – Data stored as an n-dimensional array (cube) – Indexes used to speed up processing ROLAP (Relational OLAP) – Data stored in a relational database – ROLAP server (middleware) creates the multidimensional view for the user – Less complex; less efficient HOLAP (Hybrid OLAP) – Data not updated frequently: MDD – Data updated frequently: RDB

296 296CSE 5331/7331 F'2011 OLAP Operations (figure: single cell, multiple cells, slice, dice, roll up, drill down)

297 297CSE 5331/7331 F'2011 OLAP Operations Simple query – single cell in the cube Simple query – single cell in the cube Slice – Look at a subcube to get more specific information Slice – Look at a subcube to get more specific information Dice – Rotate cube to look at another dimension Dice – Rotate cube to look at another dimension Roll Up – Dimension Reduction; Aggregation Roll Up – Dimension Reduction; Aggregation Drill Down Drill Down Visualization: These operations allow the OLAP users to actually “see” results of an operation. Visualization: These operations allow the OLAP users to actually “see” results of an operation.

298 298CSE 5331/7331 F'2011 Relationship Between Topics

299 299CSE 5331/7331 F'2011 Decision Support Systems Tools and computer systems that assist management in decision making Tools and computer systems that assist management in decision making What if types of questions What if types of questions High level decisions High level decisions Data warehouse – data which supports DSS Data warehouse – data which supports DSS

300 300CSE 5331/7331 F'2011 Unified Dimensional Model Microsoft Cube View SQL Server 2005 http://msdn2.microsoft.com/en-us/library/ms345143.aspx http://cwebbbi.spaces.live.com/Blog/cns!1pi7ETChsJ1un_2s41jm9Iyg!325.entry MDX AS2005 http://msdn2.microsoft.com/en-us/library/aa216767(SQL.80).aspx

301 301CSE 5331/7331 F'2011 Data Warehousing “ Subject-oriented, integrated, time-variant, nonvolatile” William Inmon “ Subject-oriented, integrated, time-variant, nonvolatile” William Inmon Operational Data: Data used in day to day needs of company. Operational Data: Data used in day to day needs of company. Informational Data: Supports other functions such as planning and forecasting. Informational Data: Supports other functions such as planning and forecasting. Data mining tools often access data warehouses rather than operational data. Data mining tools often access data warehouses rather than operational data. DM: May access data in warehouse.

302 302CSE 5331/7331 F'2011 Operational vs. Informational
                Operational Data       Data Warehouse
Application     OLTP                   OLAP
Use             Precise Queries        Ad Hoc
Temporal        Snapshot               Historical
Modification    Dynamic                Static
Orientation     Application            Business
Data            Operational Values     Integrated
Size            Gigabits               Terabits
Level           Detailed               Summarized
Access          Often                  Less Often
Response        Few Seconds            Minutes
Data Schema     Relational             Star/Snowflake

303 303CSE 5331/7331 F'2011Statistics Simple descriptive models Simple descriptive models Statistical inference: generalizing a model created from a sample of the data to the entire dataset. Statistical inference: generalizing a model created from a sample of the data to the entire dataset. Exploratory Data Analysis: Exploratory Data Analysis: –Data can actually drive the creation of the model –Opposite of traditional statistical view. Data mining targeted to business user Data mining targeted to business user DM: Many data mining methods come from statistical techniques.

304 304CSE 5331/7331 F'2011 Pattern Matching (Recognition) Pattern Matching: finds occurrences of a predefined pattern in the data. Pattern Matching: finds occurrences of a predefined pattern in the data. Applications include speech recognition, information retrieval, time series analysis. Applications include speech recognition, information retrieval, time series analysis. DM: Type of classification.

305 305CSE 5331/7331 F'2011 Image Mining Outline Image Mining – What is it? Image Mining – What is it? Feature Extraction Feature Extraction Shape Detection Shape Detection Color Techniques Color Techniques Video Mining Video Mining Facial Recognition Facial Recognition Bioinformatics Bioinformatics

306 306CSE 5331/7331 F'2011 The 2000 ozone hole over the antarctic seen by EPTOMS http://jwocky.gsfc.nasa.gov/multi/multi.html#hole

307 307CSE 5331/7331 F'2011 Image Mining – What is it? Image Retrieval Image Retrieval Image Classification Image Classification Image Clustering Image Clustering Video Mining Video Mining Applications Applications –Bioinformatics –Geology/Earth Science –Security –…

308 308CSE 5331/7331 F'2011 Feature Extraction Identify major components of image Color Texture Shape Spatial relationships Feature Extraction & Image Processing http://users.ecs.soton.ac.uk/msn/book/ Feature Extraction Tutorial http://facweb.cs.depaul.edu/research/vc/VC_Workshop/presentations/pdf/daniela_tutorial2.pdf

309 309CSE 5331/7331 F'2011 Shape Detection Boundary/Edge Detection Time Series – Eamonn Keogh http://www.engr.smu.edu/~mhd/8337sp07/shapes.ppt

310 310CSE 5331/7331 F'2011 Color Techniques Color Representations RGB: http://en.wikipedia.org/wiki/Rgb HSV: http://en.wikipedia.org/wiki/HSV_color_space Color Histogram Color Anglogram http://www.cs.sunysb.edu/~rzhao/publications/VideoDB.pdf

311 311CSE 5331/7331 F'2011 What is Similarity ? (c) Eamonn Keogh, eamonn@cs.ucr.edu

312 312CSE 5331/7331 F'2011 Video Mining Boundaries between shots Boundaries between shots Movement between frames Movement between frames ANSES: ANSES: http://mmir.doc.ic.ac.uk/demos/anses.html

313 313CSE 5331/7331 F'2011 Facial Recognition Based upon features in face Convert face to a feature vector Less invasive than other biometric techniques http://www.face-rec.org http://computer.howstuffworks.com/facial-recognition.htm SIMS: http://www.casinoincidentreporting.com/Products.aspx

314 314CSE 5331/7331 F'2011 Microarray Data Analysis Each probe location associated with gene Each probe location associated with gene Measure the amount of mRNA Measure the amount of mRNA Color indicates degree of gene expression Color indicates degree of gene expression Compare different samples (normal/disease) Compare different samples (normal/disease) Track same sample over time Track same sample over time Questions Questions –Which genes are related to this disease? –Which genes behave in a similar manner? –What is the function of a gene? Clustering Clustering –Hierarchical –K-means

315 315CSE 5331/7331 F'2011 Affymetrix GeneChip ® Array http://www.affymetrix.com/corporate/outreach/lesson_plan/educator_resources.affx

316 316CSE 5331/7331 F'2011 Microarray Data - Clustering "Gene expression profiling identifies clinically relevant subtypes of prostate cancer" Proc. Natl. Acad. Sci. USA, Vol. 101, Issue 3, 811-816, January 20, 2004

317 317CSE 5331/7331 F'2011

