Let's review Data Analytics Technology, Supervised and Unsupervised

1 Let's review Data Analytics Technology, Supervised and Unsupervised
Let's review data analytics technology, supervised and unsupervised (from Data Mining, Han and Kamber, 2nd ed., pgs. 24-27). I think it is a good idea at this point to review the industry understanding of classification, clustering and anomaly detection.

Classifiers (supervised analytics) construct a model that describes and distinguishes data classes or concepts, for the purpose of using the model to [quickly] predict the class of objects whose class label is unknown. The derived model is based on the analysis of a set of training data (i.e., data objects whose class label is known, usually a table with a class-label column or attribute). Classification is often called "prediction" when the class labels are numbers. Classification may need to be preceded by relevance analysis to identify and eliminate attributes that do not seem to contribute much information as to class.

Clusterers (unsupervised analytics) analyze data objects without consulting a known class label (usually none are present). Clustering can be used to generate class labels for a training set that doesn't have them (i.e., to create a training set). Objects are clustered (grouped) by maximizing the intra-class similarity and minimizing the inter-class similarity. Clustering can facilitate taxonomy formation, i.e., organizing objects into a hierarchy of classes that groups similar events together (using a dendrogram?).

Anomaly detectors (outlier detection analytics) identify objects that don't comply with the behavior of the data model. Most data mining methods discard outliers as noise or exceptions. However, in some applications, such as fraud detection, the rare events can be the more interesting. Outliers may be detected using statistical tests that assume a distribution or probability model for the data, using distance measures, or using deviation-based methods, which identify outliers by examining differences in the objects' main characteristics.
"One person's noise may be another person's signal," so outlier mining is an analytic in its own right. Outlier mining can mean:
1. Given a set of n objects and k, find the top k objects in terms of dissimilarity from the rest of the objects.
2. Given a classification training set, identify objects within each class that are outliers within that class (they are correctly classified but noticeably dissimilar from the other objects in the class). We may find outliers for the entire set of objects (those objects that don't fit into any cluster or class) and then find objects within a cluster or class that are noticeably dissimilar to the other objects in that class, i.e., find set outliers first, then class outliers.
3. Given a set of objects, determine "fuzzy" clusters, i.e., assign a degree of membership for each object in each cluster. In a way, a dendrogram does that.
There are statistics-based, distance-based, density-based and deviation-based outlier detectors (the deviation-based ones, using a dissimilarity measure, reduce the overall dissimilarity of the object set by removing "deviation outliers").
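The first sense of outlier mining above (find the top k objects most dissimilar from the rest) can be sketched as follows. This is a minimal illustration, not the FAUST method: scoring each object by its nearest-neighbor distance, the function names, and the toy points are all assumptions made here for concreteness.

```python
# Sketch of outlier-mining sense 1: given n objects and k, return the k
# objects most dissimilar from the rest. Dissimilarity is taken here to be
# each object's distance to its nearest neighbor (an assumption; the slide
# leaves the dissimilarity measure open).

def top_k_outliers(objects, k, dist):
    """Rank objects by nearest-neighbor distance; return the k most isolated."""
    scores = []
    for i, x in enumerate(objects):
        nn = min(dist(x, y) for j, y in enumerate(objects) if j != i)
        scores.append((nn, i))
    scores.sort(reverse=True)            # most isolated first
    return [objects[i] for _, i in scores[:k]]

def euclid2(a, b):
    """Squared Euclidean distance between two tuples."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

pts = [(1, 1), (1, 2), (2, 1), (2, 2), (9, 9)]
out = top_k_outliers(pts, 1, euclid2)
```

With the toy points above, the isolated point (9, 9) is ranked first.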

2 Experimental Results of Addition (Mohammad, 2014-02-15)
Mohammad's results, 2014_02_15. Experimental result of addition. Data size: 1, 2, 3 and 4 billion rows; number of columns: 2; bit width of the values: 4, 8, 12 and 16 bits. Times are in milliseconds (in the original chart, the vertical axis is time in milliseconds and the horizontal axis is the number of bit positions).

Size (billions)  P_4bit  P_8bit  P_12bit  P_16bit  Horizontal
1                   465     930     1390     1790        2990
2                   950    1910     2860     3590        5970
3                  1420    2850     4270     5360        8950
4                  1800    3560     5370     7140       12420

3 UDR: Can we create distributionX1+2 etc.?
UDR: Can we create distributionX1+2 etc. using only the X1 and X2 basic pTrees (concurrently with the creation of distributionX1 and distributionX2)? An example: X1=(3,1,2) with bit-slices P1,1 and P1,0; X2 with bit-slices P2,1 and P2,0; X1+2 with bit-slices P1+2,2, P1+2,1, P1+2,0 (the example's column values are garbled in this transcript).

Let D = D1,2 = e1+e2. Then DoX = 2^1 P1,1 + 2^0 P1,0 + 2^1 P2,1 + 2^0 P2,0 = 2^1 (P1,1+P2,1) + 2^0 (P1,0+P2,0), so we can make the two SPTS additions (the ones in parentheses), shift the first left by 1 and add it to the second. But can we go directly to the UDR construction? Since there is no carry into the zero bit, P1+2,0 = P1,0 XOR P2,0, and the next slice is P1+2,1 = (P1,1 XOR P2,1) XOR (P1,0 AND P2,0), with the carry rippling upward from there. Is there a more efficient way to get the X1+X2 distribution by this route? Md? We don't need the SPTS for X1+X2, and it's expensive to create it just to get its distribution.

Md: This seems like it would give us a tremendous advantage over the "horizontal data mining boys and girls," because even though they could concurrently create all diagonal distributions X1, X2, X1+2, X1-2, ... in one pass down the table, we would be able to do it with concurrent programs which make one pass across the SPTSs for X1 and X2.
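The bit-slice addition discussed above can be sketched directly. This is a minimal illustration under an assumption: slices are modeled as Python ints used as bit vectors (one bit per row), whereas real pTrees are compressed trees; the ripple-carry over slices (sum = Pa XOR Pb XOR carry, carry' = majority) is the standard construction.

```python
# Build the bit-slices of X1+X2 directly from the basic bit-slices of X1
# and X2, slice by slice, as the slide asks. Slice 0 is the low-order bit.

def slices(values, width):
    """Column of ints -> list of bit-slice vectors (ints as bit vectors)."""
    out = []
    for j in range(width):
        v = 0
        for row, x in enumerate(values):
            if (x >> j) & 1:
                v |= 1 << row
        out.append(v)
    return out

def add_slices(a, b):
    """Bit-slice ripple-carry addition of two slice lists of equal width."""
    carry, out = 0, []
    for pa, pb in zip(a, b):
        out.append(pa ^ pb ^ carry)                 # sum slice
        carry = (pa & pb) | (carry & (pa ^ pb))     # carry slice (majority)
    out.append(carry)                               # final carry slice
    return out

X1, X2 = [3, 1, 2], [1, 3, 1]                       # toy 2-bit columns
S = add_slices(slices(X1, 2), slices(X2, 2))        # slices of X1+X2 = 4,4,3
```

One pass across the slices of X1 and X2 produces the slices of X1+X2 without ever materializing the horizontal sum column.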

4 FAUST Outlier Evaluator/Detector (FAUST Outed)
When the goal is to find outliers as quickly as possible, Outed recursively uses D = FurthestFromMedian-to-FurthestFromFurthestFromMedian (FFM-to-FFFFM). (Figure: the Spaeth example with Avg=(8.5,4.7) and Median=(9,3), its d²(med,x) and d²(y1,x) distance columns, and the projection histogram are garbled in this transcript.)

Outed won't work for big data. Finding outliers is inherently local, and for big data there are too many localities to exhaustively search. We may need to enclose individual outliers in gapped [barrel] hulls. Those gapped hulls will likely be filled in when projecting onto a randomly chosen line, i.e., barrel gapping suffers from a chicken-and-egg problem: to effectively barrel-gap an outlier, we first look for linear gaps on a projection line and then look for radial gaps going out from that line, but unless the projection line runs through the outlier, a radial gap isn't likely to appear (and since we don't know where the outliers are yet, we can't place the projection line). At this point, I think the only way to proceed is to do the full clustering task and look for outliers as we go. So FAUST Oblique Cluster may be the best way to find outliers.
The reason is that FAUST Oblique Cluster removes big clusters as it finds them: as it moves down the dendrogram, the dataset in question gets smaller and smaller by giant steps, and therefore outliers are more and more likely to reveal themselves as singletons gapped away from the rest of the cluster, especially extreme outliers (at the extremes of the projection sets). So a strategy that may work is to perform Oblique FAUST Cluster but, with each iteration, pay close attention to potential outliers; when a candidate is identified, construct the SPTS of distances of fellow cluster points from that candidate, and if the minimum of those distances exceeds a threshold, declare it an outlier (with respect to that dendrogram node?).
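The closing strategy above (check a flagged candidate by the minimum distance to its fellow cluster points) can be sketched as follows. This is a plain-Python stand-in: lists of tuples replace the SPTS of squared distances, and the threshold value is illustrative.

```python
# When clustering flags a candidate point, compute the distances from the
# candidate to its fellow cluster points and declare it an outlier iff the
# minimum distance exceeds a threshold (here in squared form).

def is_outlier(candidate, cluster, threshold2):
    """True iff every other cluster point is farther than sqrt(threshold2)."""
    d2 = [sum((c - x) ** 2 for c, x in zip(p, candidate))
          for p in cluster if p != candidate]
    return min(d2) > threshold2

cluster = [(1, 1), (2, 1), (1, 2), (9, 9)]
flag = is_outlier((9, 9), cluster, threshold2=9)   # a gap of 3 units, squared
```

Here (9, 9) is gapped away from the rest of its cluster by more than 3 units, so it is flagged; the interior points are not.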

5 Outlier (Anomaly) Detection
On this slide we focus specifically on outlier (anomaly) detection. Algorithm-1: Look for outliers using linear projections onto e1,...,en and then possibly also the diagonals e1+e2, e1-e2, .... We look for singleton (and doubleton? and tripleton? ...) sets that are gapped away from a small hull of piecewise-linear boundaries provided by these D vectors. We start out looking for coordinate hulls (rectangles) that provide a gap around 1 (or 2? or 3?) points only. We do this by intersecting "thinnings" in each DoX distribution. (Clearly this can be done while clustering under FAUST Cluster, which cuts at all precipitous count changes (PCCs).) Note: if all we are interested in is anomalies, then we can ignore all PCCs that are not involved in thinnings. This would save lots of time! Definition-1: A "thinning" is a precipitous count decrease (PCD) to below a threshold such that the very next PCC is a precipitous count increase (PCI) to above that threshold. The thinning threshold should be ≤ the PCC threshold.
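Definition-1 can be sketched as a scan over the bin counts of one DoX distribution. This is a simplified reading, assuming the bin counts are already available (e.g., from UDR) and using a single threshold; the function name and test data are illustrative.

```python
# Find "thinnings" in a DoX bin-count sequence: maximal runs of bins whose
# count falls below a threshold, with above-threshold counts on both sides
# (a drop below the threshold whose very next change rises back above it).

def thinnings(counts, thresh):
    """Return (start, end) bin-index pairs of below-threshold runs that
    have above-threshold neighbors on both sides."""
    runs, start = [], None
    for i, c in enumerate(counts):
        if c < thresh and start is None:
            start = i
        elif c >= thresh and start is not None:
            if start > 0:                  # had an above-threshold left side
                runs.append((start, i - 1))
            start = None
    return runs                            # a trailing run has no right side

counts = [6, 1, 0, 5, 7, 2, 8, 0]
t = thinnings(counts, thresh=3)            # two interior thinnings
```

Intersecting such thinning intervals across several D vectors isolates the rectangular gapped hulls the slide describes.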

6 FAUST Outlier Evaluator and Detector (FAUST OutED)
DensityCount/r2 labeled dendogram for FAUST on Spaeth with D=AvgMedian DET=.3 1 y1y y7 2 y3 y y8 3 y y y9 ya 5 6 7 yf yb a yc b yd ye a b c d e f So intersect thinnings [1,1]1, [5,7]1 and [13,14]1 with [4,10]2 A A

7 Classify x isa Ck iff lok,D ≤ Dox ≤ hik,D ∀D.
Oblique FAUST Classifiers: Fast, Analytic, Unsupervised and Supervised Technology. Let C = the class, X = the unclassified samples, and r = a chosen minimum gap threshold.

FAUST One-Class Spherical (OCS): Construct a sphere around x; then x is of class C iff the sphere hits C. That is, classify x as class C iff there exists c∈C such that (c-x)o(c-x) ≤ r².

FAUST One-Class Linear (OCL): Construct a tight enclosure (a hull, H) around C; x is class C iff x∈H. For a series of vectors D, let loD = mnCoD (or the 1st PCI) and hiD = mxCoD (or the last PCD). Classify x∈C iff loD ≤ Dox ≤ hiD ∀D. E.g., let the D-series be the diagonals e1, e2, ..., en, e1+e2, e1-e2, e1+e3, e1-e3, ..., e1-e2-...-en (add more Ds until diamH - diamC < ε?).

FAUST Multi-Class Linear (MCL): Construct a tight hull, Hk, enclosing Ck ∀k. Then x isa Ck iff x∈Hk, i.e., classify x isa Ck iff lok,D ≤ Dox ≤ hik,D ∀D. FAUST MCL allows for a "none of the classes" outcome when x∉Hk ∀k. In FAUST MCL the Hks can be constructed in parallel.

(Figure: a 2D sketch contrasting the convex hull of a class with our interval hull H along the D12 = e1-e2 line, showing classes a, b, c and their mn/mx projections, plus a 3D example of HULL1 using D = e1+e2+e3, e1+e3, e1-e3 for 1-class classification; the coordinate scatter is garbled in this transcript. Reference:)
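The OCL hull test (loD ≤ Dox ≤ hiD for every D in the series) can be sketched as below. The training points and the four-direction D-series are toy assumptions; the D-series follows the diagonal pattern e1, e2, e1+e2, e1-e2 named on the slide.

```python
# FAUST OCL sketch: for each direction D, precompute lo_D = min CoD and
# hi_D = max CoD over the class C; a sample x is in the class iff
# lo_D <= Dox <= hi_D for every D in the series.

def dot(d, x):
    return sum(di * xi for di, xi in zip(d, x))

def build_hull(C, Ds):
    """Per-direction (lo, hi) interval pairs defining the hull of C."""
    return [(min(dot(D, c) for c in C), max(dot(D, c) for c in C))
            for D in Ds]

def in_hull(x, Ds, hull):
    return all(lo <= dot(D, x) <= hi for D, (lo, hi) in zip(Ds, hull))

C = [(2, 2), (3, 2), (2, 3), (3, 3)]          # toy class
Ds = [(1, 0), (0, 1), (1, 1), (1, -1)]        # e1, e2, e1+e2, e1-e2
hull = build_hull(C, Ds)
inside = in_hull((2, 2), Ds, hull)            # a class point
outside = in_hull((9, 9), Ds, hull)           # a far-away point
```

Adding more diagonal directions tightens the hull toward the convex hull, which is the trade-off the slide's diamH - diamC < ε remark points at.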

8 FAUST OCS One Class Spherical classifier on the Spaeth dataset as a "lazy" classifier
Let the class be C = {yb,yc,yd,ye}. OCS-classify x = yf with r = 3. How expensive is the algorithm? For each x:
1. Compute the SPTS (C-x)o(C-x):
 1.a Represent C as a PTS and x = yf as constants.
 1.b Form constant SPTSs.
 1.c Construct (C-x)o(C-x) by SPTS arithmetic: (C-x)o(C-x) = (C1-#7)(C1-#7) + (C2-#8)(C2-#8).
 1.d Construct the mask pTree (C-x)o(C-x) < #9.
 1.e Count the 1-bits in that mask pTree (here the count is 0).
 1.f Conclude that yf∉C: yf is not in C since it is spherically gapped away from C by r = 3 units.
Shortcut for 1.d,e,f: some comparison of the high bit-slices of (C-x)o(C-x) with #9? (I think Yue Cui has a shortcut???)
Shortcut for 1.a-f: (C-x)o(C-x) = CoC - 2Cox + |x|² < r² iff |x|² - r² + CoC < 2Cox. Precompute (one time) the SPTS CoC and the PTS 2C (2C is just a re-labeling (shift left) of the pTrees of C). For each new unclassified sample x: add |x|² - r² to CoC (adding one constant to one SPTS); compute 2Cox (n multiplications of one SPTS, 2Ci, by one constant, xi, then add the n resulting SPTSs); compare |x|² - r² + CoC to 2Cox, giving a mask pTree; count the 1-bits in this mask pTree (shortcuts? shortcuts? shortcuts?). For x = yf, a = |x|² - r² = 104 and the count is 0, so x∉C; 1-class classifying x = (a,9) with r = 3 gives count 4, so x∈C. (The pTree tables on this slide are garbled in this transcript.)
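The OCS shortcut above can be sketched numerically. The class coordinates below are illustrative stand-ins (the Spaeth values are garbled in the transcript), and plain tuples stand in for the pTree columns; only the algebraic shortcut itself is taken from the slide.

```python
# OCS shortcut sketch: instead of forming (C-x)o(C-x) per sample,
# precompute CoC once; then a row c is within r of x iff
# |x|^2 - r^2 + coc < 2cox, which is algebraically the same as |c-x|^2 < r^2.

def ocs_hits(C, CoC, x, r2):
    """Count training rows c with squared distance to x below r2."""
    a = sum(xi * xi for xi in x) - r2               # |x|^2 - r^2, one constant
    hits = 0
    for c, cc in zip(C, CoC):
        two_cx = 2 * sum(ci * xi for ci, xi in zip(c, x))
        if a + cc < two_cx:                         # same test as |c-x|^2 < r2
            hits += 1
    return hits

C = [(11, 10), (12, 11), (9, 11), (11, 11)]         # illustrative class points
CoC = [sum(ci * ci for ci in c) for c in C]         # precomputed once
far = ocs_hits(C, CoC, (7, 8), r2=9)                # x = yf, r = 3: no hits
near = ocs_hits(C, CoC, (10, 9), r2=9)              # a nearby x: all 4 hit
```

A zero hit count reproduces the slide's "x not in C" outcome; a positive count classifies x into C.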

9 FAUST OCL One Class Linear classifier applied to IRIS, SEEDS, WINE, CONCRETE datasets
For the D-series = diagonals e1, e2, ..., en, e1+e2, e1-e2, e1+e3, e1-e3, ..., e1-e2-...-en:

For IRIS with C=Versicolor and outliers=Virginica, FAUST 1D cutpoints: SL (49,70); SW (22,32); PL (33,49); PW (10,16); 44 versicolor correct, 7 virginica errors. After trimming outliers, the 1D_2D, 1D_2D_3D and 1D_2D_3D_4D models each classify all 50 versicolor (no eliminated outliers) and admit 3 virginica into the 1-class. The same 3 virginica errors persist across models (their measurements are garbled in this transcript).

For SEEDS with C=class1 and outliers=class2: the 1D model admits 15 class2 errors; the 1D_2D, 1D_2D_3D and 1D_2D_3D_4D models each admit 8 (all 50 class1 correct throughout).
For SEEDS with C=class1 and outliers=class3: the 1D model admits 30 errors; the deeper models each admit 27.
For SEEDS with C=class2 and outliers=class3: all models admit 0 errors.

For WINE with C=class4 and outliers=class7 (class4 was enhanced with 3 class3's to fill out the 50): the 1D model admits 48 errors; 1D_2D, 43; 1D_2D_3D, 43; 1D_2D_3D_4D, 42.

For CONCRETE, concLH, with C=class(8-40) and outliers=class(43-67): the 1D model admits 43 errors; 1D_2D, 35; 1D_2D_3D, 30; 1D_2D_3D_4D, 27.
For CONCRETE, concM (the class is the middle range of strengths): the 1D model admits 47 errors; 1D_2D, 37; 1D_2D_3D_4D, 26.

10 FAUST MCL: Class1=C1={y1,y2,y3,y4}, Class2=C2={y7,y8,y9}, Class3=C3={yb,yc,yd,ye}
x∈Ck iff lok,D ≤ Dox ≤ hik,D ∀D. Class1=C1={y1,y2,y3,y4}, Class2=C2={y7,y8,y9}, Class3=C3={yb,yc,yd,ye}. (The per-class min/max tables for e1, e2, e1+e2, e1-e2 and the Spaeth point grid are garbled in this transcript.)

Shortcuts for MCL? Precompute all diagonal minimums and maximums for e1, e2, e1+e2, e1-e2. Then, in fact, there is no pTree processing left to do, just straightforward number comparisons. From the example: xf is "none of the above" on the basis of e1; the sample at (9,a) is in class3 (red) only; ya is "none of the above" on the basis of e1; another sample is in class2 (green) only.

On IRIS: the 1D MCL Hversicolor contains 7 virginica; 1D_2D, 3; 1D_2D_3D, 3; 1D_2D_3D_4D, 3 (virginica 24, 27, 28). The 1D_2D_3D_4D MCL Hvirginica has 20 versicolor errors!! Look at removing outliers (gapped ≥ 3) from Hullvirginica: successive trims bring Hvirginica down through 16, 15 and 12 to 3 versicolor errors (the e1..e4 and pairwise-diagonal count/gap tables are garbled in this transcript).

One possibility would be to keep track of those points that are outliers to their class but are not in any other class hull, and put a sphere around each. Then any unclassified sample that doesn't fall in any class hull would be checked to see if it falls in any of the class-outlier spheres???

11 Choosing a clustering from a DEL and DUL labeled Dendrogram
The algorithm for choosing the optimal clustering from a labeled dendrogram is as follows. Let DET=.4 and DUT=½. Since a full dendrogram is far bigger than the original table, we set threshold(s) and build a partial dendrogram, ending a branch when the threshold(s) are met. Then a slider for density would work as follows: The user sets the threshold(s); we give the clustering. The user increases the threshold(s); we prune the dendrogram and give the clustering. The user decreases the threshold(s); we build each branch down further until the new threshold(s) are exceeded and give the new clustering. We might also want to display the dendrogram to the user and let them select a "node = cluster" for further analysis, etc. (Figure: a dendrogram with nodes A through G labeled with values such as DEL=.1 DUL=1/6, DEL=.2 DUL=1/8, DEL=.5 DUL=½, DEL=.3 DUL=½, DEL=.4 DUL=1; the remaining labels are garbled in this transcript.)
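The partial-dendrogram idea above can be sketched as a top-down build that stops a branch as soon as its cluster meets the density threshold. The density measure (count/r² about the mean) follows the slides' labels; the split rule (halving the widest coordinate) is an illustrative stand-in for FAUST's gap-based cuts.

```python
# Build a partial dendrogram: recurse only while a cluster's density is
# below the threshold det, so raising det prunes and lowering det extends,
# which is exactly the slider behavior described above.

def density(cluster):
    """count / r^2, with r the max distance from the cluster mean."""
    n = len(cluster)
    mean = [sum(p[i] for p in cluster) / n for i in range(len(cluster[0]))]
    r2 = max(sum((pi - mi) ** 2 for pi, mi in zip(p, mean)) for p in cluster)
    return n / r2 if r2 else float("inf")

def dendrogram(cluster, det):
    """Return the leaf clusters whose density reaches the threshold det."""
    if len(cluster) == 1 or density(cluster) >= det:
        return [sorted(cluster)]
    # Illustrative split: halve the widest coordinate range.
    i = max(range(len(cluster[0])),
            key=lambda d: max(p[d] for p in cluster) - min(p[d] for p in cluster))
    mid = (max(p[i] for p in cluster) + min(p[i] for p in cluster)) / 2
    left = [p for p in cluster if p[i] <= mid]
    right = [p for p in cluster if p[i] > mid]
    return dendrogram(left, det) + dendrogram(right, det)

pts = [(1, 1), (1, 2), (2, 1), (9, 9), (9, 8)]
leaves = dendrogram(pts, det=0.4)          # two dense leaf clusters
```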

12 Applying FAUST Dendrogram to SPAETH
DensityCount/r2 labeled dendogram for FAUST Cluster on Spaeth with D=Avg-to-Furthest and DensityThreshold=.3 a bc def Y(.15) {y1,y2,y3,y4,y5}(.37) {y6,yf}(.08) {y7,y8,y9,ya,yb.yc.yd.ye}(.07) D=Avg-to-Furthest cut at 7 and 11 1 y1y y7 2 y3 y y8 3 y y y9 ya 5 6 7 yf yb a yc b yd ye c d e f a b c d e f {y1,y2,y3,y4}(.63) {y5}() {y6}() {yf}() {y7,y8,y9,ya}(.39) {yb,yc,yd,ye}(1.01) {y7,y8,y9}(1.27) {ya}() {y1,y2,y3}(2.54) {y4}() D=Avg-to-Furth DensityThresh=1 D=Avg-to-Furth DensityThresh=.5 D-line Labeled dendogram for FAUST Cluster on Spaeth with D=furthest-to-Avg, DensityThreshold=.3 1 y1y y7 2 y3 y y8 3 y y y9 ya 5 6 7 yf yb a yc b yd ye a b c d e f DCount/r2 labeled dendogram for FAUST Clusteron Spaeth with D=cylces thru diagonals nnxx,nxxn,nnxx,nxxn..., DensThresh=.3 Y(.15) {y1,y2,y3,y4,y5}(.37) {y6,y7,y8,y9,ya,yb.yc.yd.ye,yf}(.09) Y(.15) {y6,y7,y8,y9,ya}(.17) {yb,yc,yd,ye,yf}(.25) y1,2,3,4,5(.37 {y6,yf}(.08) {y7,y8,y9,ya,yb.yc.yd.ye}(.07) {y7,y8,y9,ya}(.39) {y6}() {yf}() {yb,yc,yd,ye}(1.01) {y6}() {yf}() {y7,8,9,a}(.39) {yb,yc,yd,ye}(1.01)

13 UDR: Univariate Distribution Revealer (on Spaeth)
UDR (Univariate Distribution Revealer), applied to S, a column of numbers in bit-slice format (an SPTS), will produce the distribution tree of S, DT(S). Let b ≡ BitWidth(S) = depthDT(S), h = depth of a node, and k = node offset. Node h,k has a pointer to pTree{x∈S | F(x)∈[k·2^(b-h+1), (k+1)·2^(b-h+1))} and its 1-count. (The slide's worked example, the yofM column 11, 27, 23, 34, 53, 80, 118, 114, 125, 110, 121, 109, 83 with its bit-slices p6..p0 and p6'..p0' and the level-by-level interval counts such as 5/64 on [0,64), is garbled in this transcript.)

Pre-compute and enter into the ToC all DT(Yk) plus those for selected linear functionals (e.g., d = main diagonals, ModeVector). Suggestion: in our pTree-base, every pTree (basic, mask, ...) should be referenced in ToC(pTree, pTreeLocationPointer, pTreeOneCount), and these OneCounts should be repeated everywhere (e.g., in every DT). The reason is that these OneCounts help us select the pertinent pTrees to access, and in fact are often all we need to know about a pTree to get the answers we are after.
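The distribution-tree construction above can be sketched recursively. For clarity this sketch counts raw values directly (in pTree form the counts would come from ANDing bit-slices); the function name and the 7-bit width are assumptions, chosen so the example values fit in [0, 128).

```python
# UDR sketch: each level h of the distribution tree halves the value range,
# so a node at (h, k) carries the count of values falling in the interval
# [k * 2**(b-h), (k+1) * 2**(b-h)), down to single-value buckets.

def udr_tree(values, b, h=0, k=0):
    """Nested dict of interval counts: the distribution tree DT of values."""
    width = 1 << (b - h)
    lo = k * width
    bucket = [v for v in values if lo <= v < lo + width]
    node = {"interval": (lo, lo + width), "count": len(bucket)}
    if width > 1 and bucket:               # recurse only into occupied halves
        node["children"] = [udr_tree(bucket, b, h + 1, 2 * k),
                            udr_tree(bucket, b, h + 1, 2 * k + 1)]
    return node

vals = [11, 27, 23, 34, 53, 80, 118, 114, 125, 110, 121, 109, 83]  # yofM column
root = udr_tree(vals, b=7)                 # 7-bit values: range [0, 128)
left, right = root["children"]             # counts on [0,64) and [64,128)
```

Storing the count at every node, as the ToC suggestion above proposes, means many distribution queries can be answered from the counts alone without touching the underlying pTrees.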

