Chapter 12: Cluster analysis and segmentation of customers


1 Chapter 12: Cluster analysis and segmentation of customers

2 Commercial applications
A chain of radio stores uses cluster analysis to identify three different customer types with varying needs.
An insurance company uses cluster analysis to classify customers into segments such as the "self-confident customer", the "price-conscious customer", etc.
A producer of copying machines succeeds in classifying its industrial customers into "satisfied" and "non-satisfied or quarrelling" customers.

3 Input data and output data
[Figure omitted: Relatedness of multivariate methods: cluster analysis and factor analysis. Both techniques start from the same input matrix with observations Obs. 1 to Obs. m in the rows and variables X1 to Xn in the columns. Cluster analysis classifies the rows, grouping observations into Cluster 1, Cluster 2, and so on; factor analysis classifies the columns, grouping variables into Factor 1, Factor 2, and so on.]

4 Dependence and Independence methods
Dependence methods: We assume that one variable (e.g. Y) depends on (is caused or determined by) other variables (X1, X2, etc.). Examples: Regression, ANOVA, Discriminant Analysis.
Independence methods: We do not assume that any variable is caused or determined by the others. Basically, we only have X1, X2, ..., Xn (but no Y). Examples: Cluster Analysis, Factor Analysis, etc.

5 Dependence and Independence methods
Dependence methods: The model is defined a priori (prior to the survey and/or estimation). Examples: Regression, ANOVA, Discriminant Analysis.
Independence methods: The model is defined a posteriori (after the survey and/or estimation has been carried out). Examples: Cluster Analysis, Factor Analysis, etc.
When using independence methods, we let the data speak for themselves!

6 Dependence method: Multiple regression
[Table omitted: observations Obs1 to Obs10 measured on Y (Sales), X1 (Price), X2 (Competitor's price) and X3 (Advertising).]
The primary focus is on the variables!

7 Independence method: Cluster analysis
[Table omitted: the same kind of data matrix, observations Obs1 to Obs10 measured on X1, X2 and X3, with the observations grouped into Cluster 1, Cluster 2 and Cluster 3.]
The primary focus is on the observations!

8 Cluster analysis output: A new cluster variable with a cluster number for each respondent
[Table omitted: the data matrix X1, X2, X3 for Obs1 to Obs10, extended with a new Cluster column holding each respondent's cluster number.]
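To make this output concrete, here is a minimal Python sketch (not from the original slides) using scikit-learn's KMeans on a small invented X1-X3 data matrix; the fitted labels_ attribute plays the role of the new cluster variable, one cluster number per respondent:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical respondent data: 10 observations measured on X1, X2, X3
X = np.array([
    [5, 3, 2], [5, 4, 2], [4, 3, 1], [1, 5, 4], [2, 5, 5],
    [1, 4, 4], [3, 1, 5], [3, 2, 5], [2, 1, 4], [4, 2, 5],
])

# Partitioning (k-means) into 3 clusters; labels_ is the new cluster variable
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for i, label in enumerate(kmeans.labels_, start=1):
    print(f"Obs{i}: cluster {label + 1}")
```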

9 Cluster analysis: A cross-tab between the cluster variable and background + opinions is established

                                          Age   %-Females   Household size   Opinion 1   Opinion 2   Opinion 3
"Younger male nerds"                       32       31           1.4            3.2         2.1         2.2
"Core families with traditional values"   44       54           2.9            4.0         3.4         3.3
"Senior relaxers"                          56       46           2.6            3.0
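The profiling step can be sketched in Python as well; assuming a hypothetical respondent file with the cluster variable plus background and opinion columns (all names and values invented), a groupby-mean produces exactly this kind of cross-tab:

```python
import pandas as pd

# Hypothetical respondent file: cluster membership plus background and opinion items
df = pd.DataFrame({
    "cluster":   [1, 1, 2, 2, 2, 3, 3, 1, 2, 3],
    "age":       [30, 34, 45, 43, 44, 57, 55, 32, 46, 56],
    "female":    [0, 1, 1, 0, 1, 0, 1, 0, 1, 1],
    "opinion_1": [3.0, 3.4, 4.1, 3.9, 4.0, 3.1, 2.9, 3.2, 4.0, 3.0],
})

# Mean profile per cluster: the cross-tab used to name and describe the segments
profile = df.groupby("cluster").mean()
print(profile.round(1))
```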

10 Cluster profiling (hypothetical)
[Chart omitted: Cluster 1, the "Ecological shopper", and Cluster 2, the "Traditional shopper", are profiled on the statements "Buy ecological food", "Advertisements funny" and "Low price important", rated on a scale where 1 = totally agree.]
Note: Finally, the clusters' respective media behaviour needs to be uncovered.

11 A small example of cluster analysis
[Worked example omitted: three respondents, John, Bob and Cathy, are scored on two variables, Friendly (X02) and Stagnant (X08); the pairwise distances John-Bob, John-Cathy and Bob-Cathy are computed and used to assign the respondents to two clusters, A and B.]

12 Governing principle
Maximization of homogeneity within clusters and, simultaneously, maximization of heterogeneity across clusters.
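One way to make the principle operational (a sketch under my own formalization, not taken from the chapter) is to compare the within-cluster sum of squares, which measures homogeneity, with the between-cluster sum of squares, which measures heterogeneity; a good partition keeps the first small and the second large:

```python
import numpy as np

def within_between_ss(X, labels):
    """Within-cluster SS (homogeneity) and between-cluster SS (heterogeneity)."""
    grand_mean = X.mean(axis=0)
    within = between = 0.0
    for k in np.unique(labels):
        members = X[labels == k]
        centroid = members.mean(axis=0)
        within += ((members - centroid) ** 2).sum()
        between += len(members) * ((centroid - grand_mean) ** 2).sum()
    return within, between

# Tiny invented example: two tight, well-separated groups
X = np.array([[1.0, 2.0], [1.2, 2.1], [5.0, 6.0], [5.1, 6.2]])
labels = np.array([0, 0, 1, 1])
w, b = within_between_ss(X, labels)
print(f"within-cluster SS = {w:.3f}, between-cluster SS = {b:.3f}")
```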

13 Overview of clustering methods
[Figure omitted: Overview of clustering methods. Non-overlapping (exclusive) methods divide into hierarchical and non-hierarchical (partitioning/k-means) approaches. Hierarchical methods are agglomerative or divisive; the agglomerative family comprises linkage methods (average: between-groups, within-groups, weighted; single: ordinary, density, two-stage density; complete), centroid methods (centroid, median) and the variance method (Ward). Non-hierarchical approaches include sequential threshold, parallel threshold, neural networks and optimized partitioning. Overlapping methods include overlapping k-centroids, overlapping k-means, latent class techniques, fuzzy clustering and Q-type factor analysis. The corresponding names in SPSS are (1) between-groups linkage, (2) within-groups linkage, (3) nearest neighbour, (4) furthest neighbour, (5) centroid clustering, (6) median clustering, (7) Ward's method, (8) k-means cluster and (9) Factor. Note: methods in italics in the original figure are available in SPSS; neural networks require SPSS's data mining tool Clementine.]
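For readers without SPSS, the hierarchical methods in the figure have close counterparts in SciPy; this sketch (data invented) runs the main agglomerative criteria via scipy.cluster.hierarchy.linkage, where "single" corresponds to nearest neighbour, "complete" to furthest neighbour, "average" to between-groups linkage, and "centroid", "median" and "ward" to the centroid, median and Ward methods:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))  # hypothetical data: 12 observations, 3 variables

# Agglomerative criteria roughly matching the linkage/centroid/variance families
for method in ["single", "complete", "average", "centroid", "median", "ward"]:
    Z = linkage(X, method=method)                    # merge history (hierarchy)
    labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
    print(f"{method:>8}: {labels}")
```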

14 Illustration of important clustering issues
[Figure omitted: Illustration of important clustering issues in Figure 12.1. It contrasts non-overlapping with overlapping methods, hierarchical with non-hierarchical methods, and agglomerative with divisive clustering, and lists the distance criteria: single linkage (minimum distance), complete linkage (maximum distance), average linkage (average distance), the centroid method (distance between centres) and Ward's method (minimization of within-cluster variance).]

15 Euclidean distance (Default in SPSS):
[Figure omitted: two points A (x1, y1) and B (x2, y2) in the X-Y plane, with horizontal leg x2 - x1 and vertical leg y2 - y1.]
d = √((x2 - x1)² + (y2 - y1)²)
Other distances available in SPSS: City-Block uses the sum of absolute differences of the coordinates instead of squared differences. Moreover: Minkowski distance, Cosine distance, Chebychev distance, Pearson Correlation.
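The same measures are available in SciPy's scipy.spatial.distance module; a small sketch using the two points that appear on the next slide (A at (1, 2) and B at (3, 5)):

```python
from scipy.spatial import distance

a, b = [1, 2], [3, 5]  # points A and B from the next slide

print(distance.euclidean(a, b))       # sqrt((3-1)^2 + (5-2)^2)
print(distance.cityblock(a, b))       # sum of absolute coordinate differences
print(distance.minkowski(a, b, p=3))  # Minkowski distance with p = 3
print(distance.chebyshev(a, b))       # largest single coordinate difference
print(distance.cosine(a, b))          # 1 - cosine similarity
print(distance.correlation(a, b))     # 1 - Pearson correlation
```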

16 Euclidean distance
[Figure omitted: points A (1, 2) and B (3, 5) in the X-Y plane, with horizontal leg 3 - 1 and vertical leg 5 - 2.]
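Plugging these coordinates into the formula from the previous slide gives d(A, B) = √((3 - 1)² + (5 - 2)²) = √(4 + 9) = √13 ≈ 3.61.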

17 Which two pairs of points are to be clustered first?
[Scatter plot omitted: eight points labelled A to H.]

18 Maybe A/B and D/E (depending on algorithm!)

19 Quo vadis, C?
[Scatter plot omitted: point C lies between the A/B pair and the D/E pair.]

20 Quo vadis, C? (Continued)

21 How does one decide which cluster a “newcoming” point is to join?
Measuring distances from a point to clusters or points:
"Farthest neighbour" (complete linkage)
"Nearest neighbour" (single linkage)
"Neighbourhood centre" (average linkage)
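As a hedged illustration (the coordinates below are invented, not taken from the slides), all three criteria can be computed from the same vector of point-to-member distances:

```python
import numpy as np

cluster = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.5]])  # existing cluster members
c = np.array([4.0, 3.0])                                  # the "newcoming" point

dists = np.linalg.norm(cluster - c, axis=1)  # Euclidean distance to each member

print("complete linkage (farthest neighbour): ", dists.max())
print("single linkage (nearest neighbour):    ", dists.min())
print("average linkage (neighbourhood centre):", dists.mean())
```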

22 Quo vadis, C? (Continued)
[Scatter plot omitted: the distances from C to the members of the two clusters are drawn in; the annotated values include 7.0, 8.5, 9.0, 9.5, 10.5, 11.0 and 12.0.]

23 Complete linkage: minimize the longest distance from cluster to point
[Scatter plot omitted: under complete linkage the relevant distances are 10.5 from C to the A/B cluster and 9.5 from C to the D/E cluster, so C joins the D/E cluster.]

24 Average linkage: minimize the average distance from cluster to point
[Scatter plot omitted: under average linkage the distances are 8.5 from C to the A/B cluster and 9.0 from C to the D/E cluster, so C joins the A/B cluster.]

25 Single linkage: minimize the shortest distance from cluster to point
[Scatter plot omitted: under single linkage the distances are 7.0 from C to the A/B cluster and 8.5 from C to the D/E cluster, so C again joins the A/B cluster.]

26 Single linkage: Pitfall
[Illustration omitted: cluster formation begins at A; because the closest remaining observation is always added to the existing cluster, the cluster grows point by point until A and C end up in the same cluster while B is left out. This is chaining, which produces snake-like clusters.]

27 Single linkage: Advantage
[Illustration omitted: a scatter with a dense "entropy group" of points and a few separate outliers.]
Single linkage is a good outlier detection and removal procedure in cases with "noisy" data sets.

28 Cluster analysis: More potential pitfalls & problems
Do our data permit the use of means at all?
Some methods (e.g. Ward's) are biased toward producing clusters with approximately the same number of observations.
Other methods (e.g. centroid) require metrically scaled data as input. So, strictly speaking, it is not allowable to use this algorithm when clustering data containing interval scales (Likert or semantic differential scales).

29 Cluster analysis: Small artificial example
[Figure omitted: six points labelled 1 to 6 with selected pairwise distances drawn in (0.42, 0.58, 0.68 and 0.92).]
Note: 6 points yield 15 possible pairwise distances, n(n - 1)/2.
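The count is easy to verify in Python: SciPy's pdist returns the condensed vector of all pairwise distances, whose length is n(n - 1)/2 (the six points below are randomly generated, purely for illustration):

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.random.default_rng(0).random((6, 2))  # six hypothetical points in the plane
d = pdist(X)                                 # condensed vector of pairwise distances
print(len(d))                                # 6 * 5 / 2 = 15
```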

30 Cluster analysis: Small artificial example

31 Cluster analysis: Small artificial example

32 Dendrogram
Step 0: Each observation (OBS 1 to OBS 6) is treated as a separate cluster.
[Dendrogram omitted: the vertical axis lists OBS 1 to OBS 6; the horizontal axis shows the distance measure from 0.2 to 1.0.]

33 Dendrogram (Continued)
Step 1: The two observations with the smallest pairwise distance are merged (OBS 1 and OBS 2 form Cluster 1).

34 Dendrogram (Continued)
Step 2: The two observations with the smallest distance among the remaining points/clusters are merged (OBS 5 and OBS 6 form Cluster 2).

35 Dendrogram (Continued)
Step 3: Observation 3 joins Cluster 1.

36 Dendrogram (Continued)
Step 4: Clusters 1 and 2 from Step 3 are joined into a "supercluster". A single observation (OBS 4) remains unclustered: an outlier.
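The whole stepwise agglomeration of slides 32 to 36 is what a dendrogram plot shows in one picture; a sketch with six invented observations (SciPy and matplotlib assumed to be available):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.random.default_rng(1).random((6, 2))  # six hypothetical observations
Z = linkage(X, method="average")             # agglomerative merge history

dendrogram(Z, labels=[f"OBS {i}" for i in range(1, 7)])
plt.xlabel("Observation")
plt.ylabel("Distance measure")
plt.show()
```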

37 Textbooks in Cluster Analysis
Brian S. Everitt, Cluster Analysis
Maurice Lorr, Cluster Analysis for Social Scientists, 1983
Charles Romesburg, Cluster Analysis for Researchers, 1984
Aldenderfer and Blashfield, Cluster Analysis, 1984

38 Case: Clustering of beer brands
Brand profiles are based on the 17 semantic differential scales.
Purpose: to determine the market structure in terms of similar/different brands.
Hypothesis: the clustering reflects the competitive structure among brands, arising from consumer behaviour.
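The brand-level data are not reproduced in these slides, so the following is only a sketch with an invented brands-by-scales matrix (brand names and ratings are hypothetical); the approach mirrors the case: profile each brand on the 17 semantic differential scales, then cluster the brands rather than the respondents:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

brands = ["Brand A", "Brand B", "Brand C", "Brand D", "Brand E"]  # hypothetical
rng = np.random.default_rng(42)
profiles = rng.uniform(1, 7, size=(len(brands), 17))  # mean rating per brand on 17 scales

Z = linkage(profiles, method="ward")             # cluster the brands, not the respondents
groups = fcluster(Z, t=2, criterion="maxclust")  # e.g. cut into two brand groups
for brand, g in zip(brands, groups):
    print(brand, "-> cluster", g)
```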

39-45 Case: Clustering of beer brands (continued)
[Slides 39 to 45 contained only figures and clustering output for the beer-brand case.]

