
1
Data Mining Tutorial – D. A. Dickey

2
April: International Year of Statistics

3
Data Mining – What is it?
– Large datasets
– Fast methods
– Not significance testing
Topics:
– Trees (recursive splitting)
– Logistic Regression
– Neural Networks
– Association Analysis
– Nearest Neighbor
– Clustering
– Etc.

4
Trees
A “divisive” method (splits). Start with a “root node” – all observations in one group. Get splitting rules; the response is often binary. The result is a “tree.”
Examples: Loan Defaults, Framingham Heart Study, Automobile Fatalities.

5
Recursive Splitting
X1 = Debt-To-Income Ratio, X2 = Age. Splits on X1 and X2 carve the plane into regions, each labeled with its own default probability (values such as Pr{default} = 0.007, 0.012, 0.003, 0.006); shading distinguishes “No default” from “Default.”

6
Some Actual Data: Framingham Heart Study
First-Stage Coronary Heart Disease – Pr{CHD} = function of:
– Age (no drugs yet!)
– Cholesterol
– Systolic BP

7
Example of a “tree”: all 1615 patients at the root. Split #1 is on Age; a later split is on Systolic BP; branches end in “terminal nodes.”

8
How to make splits? Which variable to use? Where to split? – Cholesterol > ____ – Systolic BP > _____. Goal: pure “leaves” or “terminal nodes.” Ideal split: everyone with BP>x has problems, nobody with BP≤x does.

9
Where to Split? First review chi-square tests and contingency tables.

DEPENDENT (observed counts):
              Heart Disease
              No    Yes   Total
  Low BP      95      5     100
  High BP     55     45     100
  Total      150     50     200

INDEPENDENT (expected counts):
              No    Yes
  Low BP      75     25
  High BP     75     25

10
χ² Test Statistic
Expect 100(150/200) = 75 in the upper left if independent (etc., e.g. 100(50/200) = 25):

              Heart Disease
              No        Yes
  Low BP      95 (75)    5 (25)
  High BP     55 (75)   45 (25)

χ² = Σ (observed – expected)²/expected = 2(400/75) + 2(400/25) = 42.67
Compare to tables – significant! WHERE IS THE HIGH BP CUTOFF???

11
Measuring the “Worth” of a Split
The p-value is the probability of a chi-square as great as that observed if independence is true (Pr{χ² > 42.67} is 6.4E-11). P-values are all too small to compare directly, so use Logworth = –log10(p-value) = 10.19. Best split: maximum chi-square ⇔ maximum logworth.
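The chi-square and logworth arithmetic on the slides can be checked directly. A minimal sketch using the 2×2 table above (observed counts from the slides; nothing else assumed):

```python
import math

# Observed 2x2 table from the slides: rows Low/High BP, columns No/Yes CHD.
observed = [[95, 5], [55, 45]]

row = [sum(r) for r in observed]        # [100, 100]
col = [sum(c) for c in zip(*observed)]  # [150, 50]
n = sum(row)                            # 200

chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row[i] * col[j] / n  # e.g. 100*150/200 = 75
        chi2 += (observed[i][j] - expected) ** 2 / expected

print(round(chi2, 2))                   # 42.67

# Slide's p-value 6.4E-11 gives the logworth quoted above
p = 6.4e-11
logworth = -math.log10(p)
print(round(logworth, 2))               # 10.19
```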

12
Logworth for Age Splits: Age 47 maximizes the logworth.

13
How to make splits? Which variable to use? Where to split? – Cholesterol > ____ – Systolic BP > _____. Idea – we picked the BP cutoff to minimize the p-value for χ². What does “significance” mean now?

14
Multiple testing: 50 different BPs in the data give 49 ways to split. Sunday football highlights always look good! If he shoots enough times, even a 95% free-throw shooter will miss. Having tried 49 splits, each one has a 5% chance of declaring significance even if there’s no relationship.

15
Multiple testing
α₁ = Pr{falsely reject hypothesis 1}, α₂ = Pr{falsely reject hypothesis 2}
Pr{falsely reject one or the other} < α₁ + α₂
Desired: 0.05 probability or less. Solution: use α = 0.05/2. Or – compare 2(p-value) to 0.05.

16
Multiple testing: 50 different BPs in the data, so m = 49 ways to split. Multiply the p-value by 49. Bonferroni – the original idea; Kass – applied it to data mining (trees). Stop splitting if the minimum p-value is large. For m splits, logworth becomes –log10(m × p-value).
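The Kass/Bonferroni adjustment above is one line of arithmetic; a quick sketch (the raw p-value 0.002 below is a hypothetical example, not from the slides):

```python
import math

# With m candidate splits, compare m * p to alpha,
# i.e. logworth becomes -log10(m * p).
m = 49                       # 50 distinct BP values -> 49 candidate splits
p = 0.002                    # hypothetical best raw p-value

adjusted_p = min(1.0, m * p)
logworth = -math.log10(adjusted_p)
significant = adjusted_p < 0.05

print(round(adjusted_p, 3))  # 0.098 -- "significant" raw p fails after adjustment
print(round(logworth, 2))    # 1.01
```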

17
Other Split Evaluations
Gini Diversity Index: pick 2 at random; Pr{different} = 1 – Pr{AA} – Pr{BB} – Pr{CC}
– {A A A A B A B B C B}: 1 – [10+6+0]/45 = 29/45 = 0.64 (LESS DIVERSE)
– {A A B C B A A B C C}: 1 – [6+3+3]/45 = 33/45 = 0.73 (MORE DIVERSE, LESS PURE)
Shannon Entropy: –Σᵢ pᵢ log₂(pᵢ); larger means more diverse (less pure)
– {0.5, 0.4, 0.1} → 1.36 (less diverse)
– {0.4, 0.3, 0.3} → 1.57 (more diverse)
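Both impurity measures are a few lines of code. A sketch reproducing the slide's numbers (function names are mine):

```python
import math
from collections import Counter

def gini(labels):
    """Pr{two draws without replacement differ} = 1 - sum C(n_i,2)/C(n,2)."""
    n = len(labels)
    pairs = n * (n - 1) / 2
    same = sum(c * (c - 1) / 2 for c in Counter(labels).values())
    return 1 - same / pairs

def entropy(probs):
    """Shannon entropy -sum p*log2(p); larger = more diverse."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(round(gini("AAAABABBCB"), 2))        # 29/45 = 0.64
print(round(gini("AABCBAABCC"), 2))        # 33/45 = 0.73
print(round(entropy([0.5, 0.4, 0.1]), 2))  # 1.36
print(round(entropy([0.4, 0.3, 0.3]), 2))  # 1.57
```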

18
Goals: split if the diversity in the parent “node” > the summed diversities in the child nodes. Observations should be homogeneous (not diverse) within leaves and different between leaves; the leaves themselves should be diverse from one another. The Framingham tree used Gini for splits.

19
Validation Traditional stats – small dataset, need all observations to estimate parameters of interest. Data mining – loads of data, can afford “holdout sample” Variation: n-fold cross validation –Randomly divide data into n sets –Estimate on n-1, validate on 1 –Repeat n times, using each set as holdout.
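The n-fold cross-validation recipe above is just index bookkeeping. A minimal sketch (generator name and seed are mine):

```python
import random

def kfold_indices(n, k, seed=0):
    """Randomly divide n row indices into k folds;
    estimate on k-1 folds, validate on the held-out 1; repeat k times."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        holdout = folds[i]
        fit = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield fit, holdout

splits = list(kfold_indices(100, 5))
print(len(splits))                           # 5
print(len(splits[0][0]), len(splits[0][1]))  # 80 20
```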

20
Pruning Grow bushy tree on the “fit data” Classify holdout data Likely farthest out branches do not improve, possibly hurt fit on holdout data Prune non-helpful branches. What is “helpful”? What is good discriminator criterion?

21
Goals Want diversity in parent “node” > summed diversities in child nodes Goal is to reduce diversity within leaves Goal is to maximize differences between leaves Use validation average squared error, proportion correct decisions, etc. Costs (profits) may enter the picture for splitting or pruning.

22
Accounting for Costs Pardon me (sir, ma’am) can you spare some change? Say “sir” to male +$2.00 Say “ma’am” to female +$5.00 Say “sir” to female -$1.00 (balm for slapped face) Say “ma’am” to male -$10.00 (nose splint)

23
Including Probabilities
Leaf has Pr(M) = 0.7, Pr(F) = 0.3.

  You say:    True gender M    True gender F
  Sir         0.7 (+2)         0.3 (–1)
  Ma’am       0.7 (–10)        0.3 (+5)

Expected profit is 2(0.7) – 1(0.3) = +$1.10 if I say “sir.”
Expected profit is –10(0.7) + 5(0.3) = –$5.50 (a loss) if I say “ma’am.”
Weight leaf profits by leaf size (# obsns.) and sum. Prune (and split) to maximize profits.
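The expected-profit decision rule for a leaf can be sketched directly from the slide's payoff table:

```python
# Profit matrix from the slide: keyed by (what you say, true gender).
profit = {("sir", "M"): 2.0, ("sir", "F"): -1.0,
          ("maam", "M"): -10.0, ("maam", "F"): 5.0}
p = {"M": 0.7, "F": 0.3}  # leaf probabilities

def expected_profit(action):
    """Probability-weighted payoff of an action in this leaf."""
    return sum(profit[(action, g)] * p[g] for g in p)

print(round(expected_profit("sir"), 2))   # 1.1
print(round(expected_profit("maam"), 2))  # -5.5
best = max(["sir", "maam"], key=expected_profit)
print(best)                               # sir
```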

24
Additional Ideas
Forests – draw samples with replacement (bootstrap) and grow multiple trees.
Random Forests – also randomly sample the “features” (predictors) when building the multiple trees.
Classify a new point in each tree, then average the probabilities or take a plurality vote across the trees.

25
Cumulative Lift Chart
– Go from the leaf with the most predicted response to the one with the least.
– Lift = (proportion responding in the first p%) ÷ (overall population response rate).

26
Regression Trees
Continuous response Y. The predicted response Pᵢ is constant within region i = 1, …, 5 (e.g. predict 20, 50, 80, 100, 130), with the regions formed by splits on X1 and X2.

27
Predict Pᵢ in cell i, with Yᵢⱼ the jth response in cell i. Split to minimize Σᵢ Σⱼ (Yᵢⱼ – Pᵢ)².
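The splitting criterion above can be sketched for a single predictor: try every cutoff and keep the one minimizing the summed squared deviations from each side's mean (the data below are made up for illustration):

```python
def best_split(x, y):
    """Try each cutoff on x; pick the one minimizing sum of squared
    deviations of y from each side's mean (the P_i of the slide)."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for cut in sorted(set(x))[:-1]:
        left = [yi for xi, yi in zip(x, y) if xi <= cut]
        right = [yi for xi, yi in zip(x, y) if xi > cut]
        total = sse(left) + sse(right)
        if best is None or total < best[1]:
            best = (cut, total)
    return best

x = [1, 2, 3, 10, 11, 12]
y = [20, 22, 21, 80, 82, 81]
print(best_split(x, y))  # (3, 4.0): splits the low y's from the high y's
```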


29
Real data example: traffic accidents in Portugal.* Y = injury-induced “cost to society.”
(*Tree developed by Guilhermina Torrao, used with permission; NCSU Institute for Transportation Research & Education.)
Help – I ran into a “tree”!

30
Cool: “Analytics,” “Predictive Modeling.”  Nerdy: “Statistics,” “Regression.”
Another major tool: Regression (OLS: ordinary least squares).

31
If the Life Line is long and deep, then this represents a long life full of vitality and health. A short line, if strong and deep, also shows great vitality in your life and the ability to overcome health problems. However, if the line is short and shallow, then your life may have the tendency to be controlled by others.

32
Wilson & Mather, JAMA 229 (1974). X = life line length, Y = age at death.
Result: Predicted Age at Death = ____ – 1.367(lifeline). (Is this “real”??? Is this repeatable???)

proc sgplot;
  scatter y=age x=line;
  reg y=age x=line;
run;

33
We Use LEAST SQUARES Squared residuals sum to 9609

34
Simulation: Age at Death = (intercept) + (slope)(life line) + e, where the error e has a normal distribution with mean 0 and variance 200. Simulate 20 replicates with n = 50 bodies each. Each replicate yields its own fitted equation Age(rep:i) = (intercept) + (slope)×line, and the 20 fitted slopes vary noticeably from replicate to replicate. The estimate Predicted Age at Death = ____ – 1.367(lifeline) would NOT be unusual if there is no true relationship.
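The replicate-to-replicate variation can be sketched in a few lines. This simulates under a true slope of 0 with error variance 200, as on the slide; the lifeline range 6–12 and mean age 70 are my assumptions for illustration:

```python
import random

def fit_slope(x, y):
    """Ordinary least squares slope: Sxy / Sxx."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(1)
line = [rng.uniform(6, 12) for _ in range(50)]  # hypothetical lifeline lengths

slopes = []
for rep in range(20):
    # True slope 0: age unrelated to lifeline; error variance 200
    age = [70 + rng.gauss(0, 200 ** 0.5) for _ in line]
    slopes.append(fit_slope(line, age))

# Fitted slopes scatter around 0; magnitudes near -1.367 are not unusual
print(min(slopes), max(slopes))
```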

35
Conclusion: estimated slopes vary.
The standard deviation (estimated) of the sample slopes is the “standard error.”
Compute t = (estimate – hypothesized)/standard error.
The p-value is the probability of a larger |t| when the hypothesis is correct (e.g. 0 slope); it is the sum of the two tail areas of the distribution of t under H₀. Traditionally p < 0.05 implies the hypothesized value is wrong; p > 0.05 is inconclusive.

36
proc reg data=life;
  model age=line;
run;

Parameter Estimates: the intercept is highly significant (Pr > |t| < .0001); the slope for line is not (Pr > |t| = 0.3965).

37
Conclusion: insufficient evidence against the hypothesis of no linear relationship.
H₀: Innocence vs. H₁: Guilt – convict only when guilt is shown beyond reasonable doubt (P < 0.05).
H₀: True slope is 0 (no association) vs. H₁: True slope is not 0. Here P = 0.3965.

38
Simulation (recap): Age at Death = (intercept) + (slope)(life line) + e, error e normal with mean 0 and variance 200. Simulate 20 replicates with n = 50 bodies each. WHY variance 200? We want an estimate of the variability around the true line. Use sums of squared residuals (SS):
(1) The sum of squared residuals from the mean is SS(total) = 9755; the sum of squared residuals around the line is SS(error) = 9609; SS(total) – SS(error) = SS(model) = 146.
(2) The variance estimate is SS(error)/(degrees of freedom) = 9609/48 ≈ 200.
(3) SS(model)/SS(total) = R², i.e. the proportion of variability “explained” by the model: 146/9755 ≈ 0.015.
Analysis of Variance: Model (DF 1, SS 146), Error (DF 48, SS 9609, MS ≈ 200), Corrected Total (DF 49, SS 9755); Root MSE ≈ 14.1, R-Square ≈ 0.015.
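The sums-of-squares bookkeeping above is pure arithmetic on the slide's numbers and is easy to verify:

```python
# Sums of squares for the lifeline regression from the slide (n = 50):
ss_total = 9755.0  # squared residuals from the mean
ss_error = 9609.0  # squared residuals around the fitted line
n, model_df = 50, 1

ss_model = ss_total - ss_error  # SS(model)
error_df = n - 1 - model_df     # 48
mse = ss_error / error_df       # variance estimate, near the true 200
r_square = ss_model / ss_total  # proportion of variability "explained"

print(ss_model, round(mse, 1), round(r_square, 3))  # 146.0 200.2 0.015
```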

39
Those Mysterious “Degrees of Freedom” (DF)
The first Martian gives information about the average height but 0 information about variation. The 2nd Martian gives the first piece of information (1 DF) about the error variance around the mean. n Martians give n–1 DF for error (variation).

40
Martian Height vs. Martian Weight: 2 points give no information about the variation of the errors around the line; n points give n–2 error DF.

41
How Many Table Legs? (regress Y on X₁, X₂)
Fitting a plane: three legs will all touch the floor; the fourth leg gives the first chance to measure error (the first error DF). With n = 40: n–3 = 37 error DF (2 “model” DF, n–1 = 39 “total” DF).
Regress Y on X1 X2 … X7: n–8 error DF (7 “model” DF, n–1 “total” DF).

42
Extension: Multiple Regression
Issues: (1) testing joint importance versus individual significance, (2) prediction versus modeling individual effects, (3) collinearity (correlation among inputs).
Example: a hypothetical company’s sales Y depend on TV advertising X₁ and radio advertising X₂:
Y = β₀ + β₁X₁ + β₂X₂ + e
Jointly critical (can’t omit both!!). A two-engine plane can still fly if engine #1 fails, and can still fly if engine #2 fails – neither engine is critical individually.

43
data sales;
  length sval $8;
  length cval $8;
  input store TV radio sales;
  (more code)
cards;
(more data)
;
proc g3d data=sales;
  scatter radio*TV=sales / shape=sval color=cval zmin=8000;
run;

(3-D scatter of sales against TV and radio.)

44
Conclusion: can predict well with just TV, just radio, or both!
SAS code:
proc reg data=next;
  model sales = TV radio;
run;
Analysis of Variance: the overall F test is highly significant (Pr > F < .0001) – can’t omit both. R-Square ≈ 0.95, so the model explains 95% of the variation in sales.
Parameter Estimates: individually, TV is not significant (can omit TV) and radio is not significant (can omit radio).
Estimated Sales = b₀ + b₁(TV) + b₂(radio), with error standard deviation 213. TV spending is approximately equal to radio spending, so approximately Estimated Sales ≈ c₀ + c₁(TV), or Estimated Sales ≈ d₀ + d₁(radio).

45

46

47

48
Summary: good predictions are given by Sales = b₀ + b₁(TV) + b₂(Radio), or Sales = c₀ + c₁(TV), or Sales = d₀ + d₁(Radio), or lots of others. Why the confusion? The evil multicollinearity!! (correlated X’s)
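The "any of these models predicts well" phenomenon is easy to reproduce. A sketch with simulated data (all numbers below are my assumptions, chosen to make radio nearly equal to TV as on the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
tv = rng.normal(50, 10, n)
radio = tv + rng.normal(0, 1, n)  # nearly equal to TV: severe collinearity
sales = 100 + 5 * (tv + radio) / 2 + rng.normal(0, 2, n)

def r2(predictors, y):
    """R-square from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

print(round(r2([tv, radio], sales), 3))  # both in: high R-square
print(round(r2([tv], sales), 3))         # TV alone: nearly as high
print(round(r2([radio], sales), 3))      # radio alone: nearly as high
```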

49
Multicollinearity can be diagnosed by looking at principal components (axes of variation):
– Variance along the PC axes = “eigenvalues” of the correlation matrix.
– Directions the axes point = “eigenvectors” of the correlation matrix.
(Plot: TV $ vs. radio $, with Principal Component Axis 1 along the ridge of points and Principal Component Axis 2 perpendicular to it.)

proc corr;
  var TV radio sales;
run;

Pearson Correlation Coefficients, N = 40, Prob > |r| under H0: Rho=0 – all pairwise correlations among TV, radio, and sales are significant (p < .0001).
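The eigenvalue diagnostic can be sketched directly: with two nearly collinear inputs, almost all the variance lies along the first PC axis. The simulated data below are my own illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
tv = rng.normal(50, 10, 200)
radio = tv + rng.normal(0, 2, 200)  # highly correlated with TV

X = np.column_stack([tv, radio])
corr = np.corrcoef(X, rowvar=False)

# Eigenvalues = variance along the PC axes; eigenvectors = their directions
eigvals, eigvecs = np.linalg.eigh(corr)
print(np.round(eigvals[::-1], 3))  # first eigenvalue near 2, second near 0
```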

50
Grades vs. IQ and Study Time

data tests;
  input IQ Study_Time Grade;
  IQ_S = IQ*Study_Time;
cards;
(data)
;
proc reg data=tests; model Grade = IQ; run;
proc reg data=tests; model Grade = IQ Study_Time; run;

In the first model (IQ alone), IQ is not significant in the parameter estimates; once Study_Time is added, both IQ and Study_Time are significant.

51
Contrast: TV advertising loses significance when radio is added; IQ gains significance when study time is added.
Model for Grades: Predicted Grade = b₀ + b₁ × IQ + 2.10 × Study Time.
Question: does an extra hour of study really deliver 2.10 points for everyone, regardless of IQ? The current model only allows this.

52
“Interaction” model:
Predicted Grade = 72.21 + 0.13 × IQ – 4.11 × Study Time + 0.053 × IQ × Study Time
= (72.21 + 0.13 × IQ) + (–4.11 + 0.053 × IQ) × Study Time
IQ = 102 predicts Grade = 85.47 + 1.30 × Study Time
IQ = 122 predicts Grade = 88.07 + 2.36 × Study Time

proc reg;
  model Grade = IQ Study_Time IQ_S;
run;
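The interaction arithmetic can be checked in a couple of lines. The coefficients below are reconstructed from the slide's intercept 72.21 and the two study-time slopes (1.30 at IQ 102, 2.36 at IQ 122), so treat them as a reconstruction rather than exact output:

```python
# Reconstructed interaction-model coefficients (see caveat above):
b0, b_iq, b_st, b_int = 72.21, 0.13, -4.11, 0.053

def grade_line(iq):
    """Return (intercept, study-time slope) of the grade line for a given IQ."""
    return b0 + b_iq * iq, b_st + b_int * iq

print([round(v, 2) for v in grade_line(102)])  # [85.47, 1.3]
print([round(v, 2) for v in grade_line(122)])  # [88.07, 2.36]
```

The point of the rearrangement: with an interaction term, the study-time slope itself is a linear function of IQ.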

53
(1) Adding the interaction makes everything insignificant (individually)!
(2) Do we need to omit insignificant terms until only significant ones remain?
(3) Has an acquitted defendant proved his innocence?
(4) Common sense trumps statistics!
(Plot: study-time slope = 1.30 at IQ 102, slope = 2.36 at IQ 122.)

54
Logistic Regression
“Trees” seem to be the main tool; logistic regression is another classifier – an older, “tried & true” method. It predicts the probability of response from the input variables (“features”). Linear regression gives an infinite range of predictions, but 0 < probability < 1, so linear regression won’t do.

55

56
Example: Seat Fabric Ignition
Flame exposure time = X; ignited Y = 1, did not ignite Y = 0.
– Y = 0: X = 3, 5, 9, 10, 13, 16
– Y = 1: X = 7, 11, 12, 14, 15, 17, 25, 30
With the data sorted by X, the likelihood is
Q = (1–p₁)(1–p₂)p₃(1–p₄)(1–p₅)p₆p₇(1–p₈)p₉p₁₀(1–p₁₁)p₁₂p₁₃p₁₄
The p’s are all different: pᵢ = exp(a+bXᵢ)/(1+exp(a+bXᵢ)). Find a, b to maximize Q(a,b).

57
Logistic idea: map p in (0,1) to L on the whole real line using L = ln(p/(1–p)).
Model L as linear in the input, e.g. Predicted L = a + b(temperature).
Given temperature X, compute L(X) = a + bX, then p = e^L/(1+e^L), i.e. p(i) = e^(a+bXᵢ)/(1+e^(a+bXᵢ)).
Write p(i) if response, 1–p(i) if not; multiply all n of these together and find a, b to maximize the product.

58
DATA LIKELIHOOD;
  ARRAY Y(14) Y1-Y14;
  ARRAY X(14) X1-X14;
  DO I=1 TO 14; INPUT X(I) Y(I) @@; END;
  DO A = -3 TO -2 BY .025;
    DO B = 0.2 TO 0.3 BY .0025;
      Q=1;
      DO I=1 TO 14;
        L=A+B*X(I);
        P=EXP(L)/(1+EXP(L));
        IF Y(I)=1 THEN Q=Q*P;
        ELSE Q=Q*(1-P);
      END;
      IF Q<0.0006 THEN Q=0.0006;
      OUTPUT;
    END;
  END;
CARDS;
(data)
;

Generates Q over a grid of (a,b) values (Q floored at 0.0006 for plotting).
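The same grid search can be sketched in Python using the seat-fabric data from the earlier slide (the grid matches the SAS program; the reported maximizer is whatever the grid finds, not a claimed exact MLE):

```python
import math

# Seat-fabric data: X = flame exposure time, Y = ignited (sorted by X).
data = [(3, 0), (5, 0), (7, 1), (9, 0), (10, 0), (11, 1), (12, 1),
        (13, 0), (14, 1), (15, 1), (16, 0), (17, 1), (25, 1), (30, 1)]

def likelihood(a, b):
    """Q(a,b): product of p_i for ignitions, 1-p_i for non-ignitions."""
    q = 1.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(a + b * x)))
        q *= p if y == 1 else 1 - p
    return q

# Same grid as the SAS program: a in [-3,-2] by .025, b in [.2,.3] by .0025
best = max(((ai * 0.025 - 3, 0.2 + bi * 0.0025)
            for ai in range(41) for bi in range(41)),
           key=lambda ab: likelihood(*ab))
print(best)
```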

59
Likelihood function (Q)

60
Concordant pair Discordant Pair

61
IGNITION DATA – The LOGISTIC Procedure
Analysis of Maximum Likelihood Estimates: intercept and TIME, each with estimate, standard error, Wald chi-square, and Pr > ChiSq.
Association of Predicted Probabilities and Observed Responses: Percent Concordant 79.2, Percent Discordant 20.8, Percent Tied 0.0, Pairs 48, c = 0.792 (Somers’ D, Gamma, and Tau-a are also reported).

62

63
Example: Shuttle Missions
O-rings failed in the Challenger disaster at low temperature; prior flights showed “erosion” and “blowby” in the O-rings.
Feature: temperature at liftoff. Target: problem (1 – erosion or blowby) vs. no problem (0).

64

65

66
Example: Framingham. X = age; Y = 1 if heart trouble, 0 otherwise.

67

68
Framingham – The LOGISTIC Procedure
Analysis of Maximum Likelihood Estimates: both the intercept and age are highly significant (Pr > ChiSq < .0001).

69
Receiver Operating Characteristic (ROC) Curve
(Slides 69–76: a sequence of plots sliding a cut point – 1, 2, 3, 3.5, 4, 5, 6 – along the logit scale. Logits of the 1s are shown in red and logits of the 0s in black; each cut point classifies everything above it as a 1 and so traces out one point on the ROC curve.)
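Each cut point yields one (false positive rate, true positive rate) pair; sweeping the cut traces the curve. A sketch with hypothetical logit scores (the scores and cut points below are illustrative, not the deck's data):

```python
def roc_points(scores_1s, scores_0s, cuts):
    """For each cut: sensitivity = fraction of 1s above the cut,
    1 - specificity = fraction of 0s above the cut."""
    pts = []
    for c in cuts:
        tpr = sum(s > c for s in scores_1s) / len(scores_1s)
        fpr = sum(s > c for s in scores_0s) / len(scores_0s)
        pts.append((fpr, tpr))
    return pts

# Hypothetical logit scores for the 1s (red) and 0s (black)
ones = [3.5, 4.2, 5.0, 5.5, 6.1]
zeros = [1.0, 2.0, 3.0, 3.8, 4.5]
pts = roc_points(ones, zeros, cuts=[1, 2, 3, 3.5, 4, 5, 6])
print(pts)  # first point (0.8, 1.0), last point (0.0, 0.2)
```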

76
Unsupervised Learning
We have the “features” (predictors) but NOT the response, even on a training data set (UNsupervised).
Clustering:
– Agglomerative: start with each point separate
– Divisive: start with all points in one cluster, then split
– Direct: state the # of clusters beforehand

77
EM CLUSTER NODE Step 1 – find (k=50) “seeds” as separated as possible Step 2 – cluster points to nearest seed (iterate – “k-means clustering”) Step 3 – aggregate clusters using Ward’s method & determine appropriate number of clusters – new k Step 4 – Use k means with this new k.
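Step 2 of the procedure (assign points to the nearest seed, then iterate) is plain k-means. A minimal sketch on made-up 2-D points (the data and seed choice are mine; this omits the Ward's-method aggregation of steps 3–4):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Assign each point to its nearest seed, move seeds to cluster
    means, and iterate ("k-means clustering")."""
    rng = random.Random(seed)
    seeds = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, seeds[j]))
            clusters[i].append(p)
        seeds = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else s
                 for cl, s in zip(clusters, seeds)]
    return seeds, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
seeds, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]: the two tight groups
```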

78
Clusters as Created

79
As Clustered – PROC FASTCLUS

80
Cubic Clustering Criterion (to decide the # of clusters)
Divide a random scatter of (X,Y) points into 4 quadrants: the pooled within-cluster variation is much less than the overall variation – a large variance reduction and a big R-square despite there being no real clusters. CCC compares the R-square you got to the R-square expected from random scatter to decide the # of clusters. CCC chooses 3 clusters for the “macaroni” data.

81
Classification Variables (dummy variables, indicator variables)
Model for NC crashes involving deer: Predicted Accidents = b₀ + b₁X₁₁, where X₁₁ is 1 in November and 0 elsewhere.
Interpretation: in November, predict b₀ + b₁(1); in any other month, predict b₀ + b₁(0) = b₀. So b₀ is the average of the other months and b₁ is the added November effect (vs. the average of the others).

proc reg data=deer;
  model deer = X11;
run;

Both the intercept and X11 are highly significant (Pr > |t| < .0001).
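The dummy-variable mechanics can be sketched directly; the intercept and November effect below are hypothetical stand-ins for the regression estimates:

```python
def x11(month):
    """Indicator variable: 1 in November, 0 elsewhere."""
    return 1 if month == "NOV" else 0

def predict(month, intercept, nov_effect):
    """intercept = average of the non-November months;
    nov_effect = added November effect (hypothetical values below)."""
    return intercept + nov_effect * x11(month)

print(predict("NOV", 1000, 2800))  # 3800: base plus the November bump
print(predict("JUN", 1000, 2800))  # 1000: the dummy is 0, base only
```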

82

83
Looks like December and October need dummies too!

proc reg data=deer;
  model deer = X10 X11 X12;
run;

All three dummies are highly significant (Pr > |t| < .0001). The average of January through September is 929 crashes per month; add 1391 in October, 2830 in November, and 1377 in December. (X10, X11, X12 are 1 in October, November, and December respectively, 0 otherwise.)

84

85
What the heck – let’s use all but one (we need an “average of the rest,” so we must leave at least one month out).

proc reg data=deer;
  model deer = X1 X2 … X10 X11;
run;

With December left out, the intercept (“average of the rest”) is just the December mean: subtract 886 in January, add 1452 in November, and so on. October (X10) is not significantly different from December; the other monthly dummies are significant (Pr > |t| < .0001).

86

87

88
Add date (days since January 1, 1960, in SAS) to capture the trend.

proc reg data=deer;
  model deer = date X1 X2 … X10 X11;
run;

The trend is 0.22 more accidents per day (about 1 more per 5 days) and is significantly different from 0 (Pr > |t| < .0001).

89

90

91

92

93

94

95
Neural Networks
Very flexible functions with “hidden layers” – the “multilayer perceptron.” A logistic function of logistic functions of the data maps inputs to output.

96
Arrows represent linear combinations of “basis functions,” e.g. logistic curves (hyperbolic tangents) p₁, p₂, p₃ with coefficients b₁, b₂, b₃.
Example: Y = a + b₁p₁ + b₂p₂ + b₃p₃, e.g. Y = 4 + p₁ + 2p₂ – 4p₃.
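A forward pass through such a network is short to write out. The hidden-layer weights below are arbitrary illustrations; only the output combination Y = 4 + p₁ + 2p₂ – 4p₃ comes from the slide:

```python
import math

def neuron(inputs, weights, bias):
    """Hidden unit: hyperbolic-tangent basis function of a linear combination."""
    return math.tanh(bias + sum(w * x for w, x in zip(weights, inputs)))

def mlp(inputs):
    # Hypothetical hidden-layer weights; output layer matches the slide's
    # Y = 4 + p1 + 2*p2 - 4*p3.
    p1 = neuron(inputs, [0.5, -0.2], 0.1)
    p2 = neuron(inputs, [-0.3, 0.8], 0.0)
    p3 = neuron(inputs, [0.7, 0.7], -0.5)
    return 4 + p1 + 2 * p2 - 4 * p3

print(round(mlp([1.0, 2.0]), 3))  # 2.234
```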

97
One should always use a holdout sample. Perturb the coefficients to optimize fit on the fitting data (nonlinear search algorithms), then eliminate unnecessary complexity using the holdout data.
Other basis sets: radial basis functions – just normal densities (bell shaped) with adjustable means and variances.

98
Statistics to Data Mining Dictionary
Statistics (nerdy) → Data Mining (cool)
– Independent variables → Features
– Dependent variable → Target
– Estimation → Training, Supervised Learning
– Clustering → Unsupervised Learning
– Prediction → Scoring
– Slopes, Betas → Weights (neural nets)
– Intercept → Bias (neural nets)
– Composition of Hyperbolic Tangent Functions → Neural Network
– Normal Density → Radial Basis Function
and my personal favorite…
– Type I and Type II Errors → Confusion Matrix

99
Association Analysis
Market basket analysis – what they’re doing when they scan your “VIP” card at the grocery. People who buy diapers tend to also buy _________ (beer?). Just a matter of accounting, but with new terminology (of course). Examples from SAS Applied DM Techniques, by Sue Walsh:

100
Terminology
Baskets: ABC, ACD, BCD, ADE, BCE

  Rule      Support        Confidence
  X=>Y      Pr{X and Y}    Pr{Y|X}
  A=>D      2/5            2/3
  C=>A      2/5            2/4
  B&C=>D    1/5            1/3
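Support and confidence really are "just a matter of accounting," as the sketch below shows using the five baskets above:

```python
baskets = [{"A", "B", "C"}, {"A", "C", "D"}, {"B", "C", "D"},
           {"A", "D", "E"}, {"B", "C", "E"}]

def rule(lhs, rhs):
    """Return (support, confidence) for lhs => rhs:
    support = Pr{X and Y}, confidence = Pr{Y | X}."""
    lhs, rhs = set(lhs), set(rhs)
    has_lhs = sum(lhs <= b for b in baskets)
    has_both = sum((lhs | rhs) <= b for b in baskets)
    return has_both / len(baskets), has_both / has_lhs

print(rule("A", "D"))   # (0.4, 0.666...)  i.e. 2/5 and 2/3
print(rule("C", "A"))   # (0.4, 0.5)       i.e. 2/5 and 2/4
print(rule("BC", "D"))  # (0.2, 0.333...)  i.e. 1/5 and 1/3
```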

101
Don’t be Fooled!
Lift = Confidence / Expected Confidence if Independent

              Checking No   Checking Yes   Total
  Saving No       500           3500        4000
  Saving Yes     1000           5000        6000
  Total          1500           8500       10000

SVG=>CHKG: expect 8500/10000 = 85% if independent; observed confidence is 5000/6000 = 83%. Lift = 83/85 < 1. Savings account holders are actually LESS likely than others to have a checking account!!!

102
TEXT MINING
Hypothetical collection of e-mails (“corpus”) from analytics students:
John, message 1: There’s a good cook there.
Susan, message 1: I have an analytics practicum then.
Susan, message 2: I’ll be late from analytics.
John, message 2: Shall we take the kids to a movie?
John, message 3: Later we can eat what I cooked yesterday.
(etc.)
Compute word counts per sender for terms such as analytics, cook_n, cook_v, kids, late, movie, and practicum.

103
Text Mining Mini-Example: a table of word counts in 16 e-mails – one row per student message and one column per word (Job, Practicum, Analytics, Movie, Data, SAS, Kids, Miner, Grocerylist, Interview, Late, Cook_v, Cook_n).

104
Eigenvalues of the Correlation Matrix: 58% of the variation in these 12-dimensional word-count vectors occurs in one dimension (P1). The first principal component Prin1 has loadings on Job, Practicum, Analytics, Movie, Data, SAS, Kids, Miner, Grocerylist, Interview, Late, Cook_v, and Cook_n. (Plot: the documents on the P1–P2 plane, illustrated by plotting 2 words only – analytics, family.)
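The eigenvalue decomposition behind P1 can be sketched on a tiny stand-in count matrix (the counts below are made up; the real deck uses 16 documents and 12 words):

```python
import numpy as np

# Rows = documents, columns = words (tiny hypothetical stand-in).
counts = np.array([[3, 0, 0, 2],   # "analytics-heavy" docs
                   [2, 1, 0, 3],
                   [0, 3, 2, 0],   # "family-heavy" docs
                   [1, 2, 3, 0]], dtype=float)

z = (counts - counts.mean(0)) / counts.std(0)  # standardize columns
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)

order = np.argsort(eigvals)[::-1]
p1_weights = eigvecs[:, order[0]]              # loadings for P1
scores = z @ p1_weights                        # each document's P1 score
props = eigvals[order] / eigvals.sum()
print(np.round(props, 2))  # proportion of variation captured by each PC
```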

105

106
The Prin1 loadings (on Job, Practicum, Analytics, Movie, Data, SAS, Kids, Miner, Grocerylist, Interview, Late, Cook_v, Cook_n) act as weights: my score on P1 = Σ (weight × my count); e.g. a count of 3 contributes 3(.3177), and the terms sum to a score of 1.530.

107
My score 1.53

108
PROC CLUSTER (single linkage) on P1 scores of 16 documents (students)

109
List of the 16 scored documents sorted by their 1st principal component scores (each row shows a document’s word counts together with its P1 score, used as the CLUSTER input).

110
Summary
Data mining – a set of fast, automated methods for large data sets. Some new ideas, many old or extensions of old. Some methods:
– Trees (recursive splitting)
– Logistic Regression
– Neural Networks
– Association Analysis
– Nearest Neighbor
– Clustering
– Text Mining, etc.

111
D.A.D.
