
Slide 1: Support Vector Machines in Marketing
Georgi Nalbantov
MICC, Maastricht University

Slide 2: Contents
- Purpose
- Linear Support Vector Machines
- Nonlinear Support Vector Machines
- (Theoretical justifications of SVM)
- Marketing Examples
- Conclusion and Q&A (some extensions)

Slide 3: Purpose
Task to be solved (the classification task): classify cases (customers) into "type 1" or "type 2" on the basis of some known attributes (characteristics).
Chosen tool to solve this task: Support Vector Machines.

Slide 4: The Classification Task
Given data on explanatory and explained variables, where the explained variable can take two values $\{\pm 1\}$, find a function that gives the "best" separation between the "$-1$" cases and the "$+1$" cases:

Given: $(x_1, y_1), \ldots, (x_m, y_m) \in \mathbb{R}^n \times \{\pm 1\}$
Find: $f : \mathbb{R}^n \to \{\pm 1\}$

"Best" function: one whose expected error on unseen data $(x_{m+1}, y_{m+1}), \ldots, (x_{m+k}, y_{m+k})$ is minimal.

Existing techniques to solve the classification task:
- Linear and Quadratic Discriminant Analysis
- Logit choice models (Logistic Regression)
- Decision trees, Neural Networks, Least Squares SVM
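In code, this setup is simple to state. The following is a minimal sketch, assuming Python with NumPy and scikit-learn (none of which the slides mention) and fabricated illustrative data: the training pairs become an m-by-n matrix X with labels y in {-1, +1}, and the expected error on unseen data is estimated on a held-out set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Fabricated illustrative data: m = 200 customers, n = 2 attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # labels in {-1, +1}

# Hold out "unseen" cases to estimate the expected error.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

f = SVC(kernel="linear").fit(X_train, y_train)
print("estimated error on unseen data:", 1 - f.score(X_test, y_test))
```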

Slide 5: Support Vector Machines: Definition
- Support Vector Machines are a non-parametric tool for classification/regression.
- Support Vector Machines are used for prediction rather than description purposes.
- Support Vector Machines were developed by Vapnik and co-workers.

Slide 6: Linear Support Vector Machines
A direct marketing company wants to sell a new book, "The Art History of Florence" (example by Nissan Levin and Jacob Zahavi, in Lattin, Carroll and Green, 2003).
Problem: how to identify buyers and non-buyers using two variables:
- Months since last purchase
- Number of art books purchased
[Figure: scatter plot of buyers (∆) and non-buyers (●) against months since last purchase and number of art books purchased]

Slide 7: Linear SVM: Separable Case
Main idea of SVM: separate the groups by a line. However, there are infinitely many lines that have zero training error. Which line shall we choose?
[Figure: scatter plot of buyers (∆) and non-buyers (●)]

Slide 8: Linear SVM: Separable Case
SVMs use the idea of a margin around the separating line. The thinner the margin, the more complex the model. The best line is the one with the largest margin.
[Figure: scatter plot with the separating line and its margin]

Slide 9: Linear SVM: Separable Case
The line having the largest margin is
$w_1 x_1 + w_2 x_2 + b = 0$,
where $x_1$ = months since last purchase and $x_2$ = number of art books purchased.

Note:
$w_1 x_{i1} + w_2 x_{i2} + b \geq +1$ for $i \in$ ∆
$w_1 x_{j1} + w_2 x_{j2} + b \leq -1$ for $j \in$ ●

[Figure: scatter plot with the parallel lines $w_1 x_1 + w_2 x_2 + b = 1$, $w_1 x_1 + w_2 x_2 + b = 0$ and $w_1 x_1 + w_2 x_2 + b = -1$ delimiting the margin]

Slide 10: Linear SVM: Separable Case
The width of the margin is given by
$\text{margin} = \frac{2}{\lVert w \rVert} = \frac{2}{\sqrt{w_1^2 + w_2^2}}$.
Note: to maximize the margin, minimize $\lVert w \rVert^2 = w_1^2 + w_2^2$.
[Figure: the same margin plot as on the previous slide]

Slide 11: Linear SVM: Separable Case
The optimization problem for the SVM is:
minimize $\frac{1}{2}\lVert w \rVert^2 = \frac{1}{2}(w_1^2 + w_2^2)$ (i.e., maximize the margin)
subject to:
$w_1 x_{i1} + w_2 x_{i2} + b \geq +1$ for $i \in$ ∆
$w_1 x_{j1} + w_2 x_{j2} + b \leq -1$ for $j \in$ ●
[Figure: the margin plot]

Slide 12: Linear SVM: Separable Case
"Support vectors" are those points that lie on the boundaries of the margin. The decision surface (line) is determined only by the support vectors; all other points are irrelevant.
[Figure: scatter plot with the support vectors highlighted on the margin boundaries]
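As an illustration (not from the slides), a short Python sketch using scikit-learn that fits a near hard-margin linear SVM on made-up separable data and reads off $w$, $b$, the margin width $2/\lVert w \rVert$, and the support vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Made-up separable data: two attributes per customer, labels in {-1, +1}.
X = np.array([[1.0, 2.0], [2.0, 3.0], [2.5, 3.5],   # buyers (+1)
              [4.0, 0.5], [5.0, 1.0], [6.0, 1.5]])  # non-buyers (-1)
y = np.array([1, 1, 1, -1, -1, -1])

# A very large C approximates the hard-margin (separable) case.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, " b =", b)
print("margin width =", 2 / np.linalg.norm(w))
print("support vectors:\n", clf.support_vectors_)
```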

Slide 13: Linear SVM: Nonseparable Case
Non-separable case: there is no line separating the two groups without error. Here the SVM minimizes $L(w, C)$ = Complexity + Errors, i.e., it maximizes the margin while keeping the training errors small:

minimize $L(w, C) = \frac{1}{2}\lVert w \rVert^2 + C \sum \xi$
subject to:
$w_1 x_{i1} + w_2 x_{i2} + b \geq +1 - \xi_i$ for $i \in$ ∆
$w_1 x_{j1} + w_2 x_{j2} + b \leq -1 + \xi_j$ for $j \in$ ●
$\xi_i, \xi_j \geq 0$

Training set: 1000 targeted customers. [Figure: scatter plot of buyers (∆) and non-buyers (●) that no single line separates without error]

Slide 14: Linear SVM: The Role of C
C varies both the complexity and the empirical error, by affecting the optimal $w$ and the optimal number of training errors:
- Bigger C: thinner margin, fewer training errors (better fit on the data), increased complexity.
- Smaller C: wider margin, more training errors (worse fit on the data), decreased complexity.
[Figure: two panels of the same data, fitted with C = 5 (thin margin) and C = 1 (wide margin)]
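A sketch of this effect on made-up overlapping data (scikit-learn again; the values C = 1 and C = 5 come from the slide). As C grows, the margin typically narrows and the training-error count falls:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Overlapping groups, so no line is error-free.
X = np.vstack([rng.normal([0, 0], 1.0, size=(50, 2)),
               rng.normal([2, 2], 1.0, size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

for C in (1, 5):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    print(f"C={C}: margin width={2 / np.linalg.norm(w):.3f}, "
          f"training errors={np.sum(clf.predict(X) != y)}")
```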

Slide 15: Nonlinear SVM: Nonseparable Case
Idea: map the data into a higher-dimensional space and separate them there. The optimization task is still to minimize $L(w, C)$:
minimize $\frac{1}{2}\lVert w \rVert^2 + C \sum_i \xi_i$
subject to $y_i (\langle w, \varphi(x_i) \rangle + b) \geq 1 - \xi_i$ and $\xi_i \geq 0$,
where $\varphi$ is the mapping into the higher-dimensional space.
[Figure: scatter plot of buyers (∆) and non-buyers (●) that no straight line separates well in the original space]

Slide 16: Nonlinear SVM: Nonseparable Case
Map the data into a higher-dimensional space: $\mathbb{R}^2 \to \mathbb{R}^3$.
[Figure: the four points (1,1), (−1,1), (1,−1), (−1,−1) in the plane, labelled ∆ and ● so that no line can separate the two classes]
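The slides do not state which map is used; a common choice that works for this configuration is $\varphi(x_1, x_2) = (x_1^2, \sqrt{2}\, x_1 x_2, x_2^2)$. A minimal sketch, with an illustrative class assignment:

```python
import numpy as np

def phi(x):
    """Map R^2 -> R^3: (x1, x2) -> (x1^2, sqrt(2)*x1*x2, x2^2)."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

# XOR-type configuration: opposite corners share a class (illustrative).
points = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
labels = [+1, +1, -1, -1]

for p, c in zip(points, labels):
    print(p, "->", phi(p), " class", c)
# In R^3 the middle coordinate is +sqrt(2) for one class and -sqrt(2)
# for the other, so the plane "middle coordinate = 0" separates them.
```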

Slide 17: Nonlinear SVM: Nonseparable Case
Find the optimal hyperplane in the transformed space.
[Figure: the mapped points in $\mathbb{R}^3$, separated by a plane]
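Continuing that sketch, a linear SVM fitted on the mapped points finds the separating plane in $\mathbb{R}^3$ (same illustrative map and labels as above):

```python
import numpy as np
from sklearn.svm import SVC

def phi(x):
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

X = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = np.array([+1, +1, -1, -1])  # illustrative class assignment

X3 = np.array([phi(x) for x in X])        # mapped points in R^3
clf = SVC(kernel="linear", C=1e6).fit(X3, y)
print("plane normal w =", clf.coef_[0], " b =", clf.intercept_[0])
# The fitted plane is (approximately) the "middle coordinate = 0" plane
# from the previous sketch.
```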

Slide 18: Nonlinear SVM: Nonseparable Case
Observe the decision surface in the original space (optional).
[Figure: the nonlinear decision boundary induced in the original two-dimensional space]

Slide 19: Nonlinear SVM: Nonseparable Case
Dual formulation of the (primal) SVM minimization problem:

Primal:
minimize over $w, b, \xi$: $\frac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{m} \xi_i$
subject to $y_i (\langle w, x_i \rangle + b) \geq 1 - \xi_i$ and $\xi_i \geq 0$ for all $i$.

Dual:
maximize over $\alpha$: $\sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle$
subject to $0 \leq \alpha_i \leq C$ and $\sum_{i=1}^{m} \alpha_i y_i = 0$.

Slides 20-21: Nonlinear SVM: Nonseparable Case
In the dual, the data enter only through the inner products $\langle x_i, x_j \rangle$. Replacing them with a kernel function $k(x_i, x_j) = \langle \varphi(x_i), \varphi(x_j) \rangle$ gives the nonlinear SVM:
maximize over $\alpha$: $\sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} \alpha_i \alpha_j y_i y_j\, k(x_i, x_j)$
subject to $0 \leq \alpha_i \leq C$ and $\sum_{i=1}^{m} \alpha_i y_i = 0$.
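To make the kernel trick concrete: for the degree-2 map used above, $\langle \varphi(x), \varphi(z) \rangle = \langle x, z \rangle^2$, so the dual can be solved without ever computing $\varphi$ explicitly. A quick numeric check with arbitrary illustrative points:

```python
import numpy as np

def phi(x):
    # Degree-2 feature map from the earlier sketch.
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

x = np.array([0.5, -1.2])
z = np.array([2.0, 0.7])

lhs = phi(x) @ phi(z)    # inner product after mapping
rhs = (x @ z) ** 2       # polynomial kernel k(x, z) = <x, z>^2
print(lhs, rhs)          # the two values agree
```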

Slide 22: Strengths and Weaknesses of SVM
Strengths of SVM:
- Training is relatively easy: no local minima
- It scales relatively well to high-dimensional data
- The trade-off between classifier complexity and error can be controlled explicitly via C
- Robustness of the results
- The "curse of dimensionality" is avoided

Weaknesses of SVM:
- What is the best trade-off parameter C?
- A good transformation of the original space is needed

Slide 23: The Ketchup Marketing Problem
Two types of ketchup: Heinz and Hunts.
Seven attributes:
- Feature Heinz
- Feature Hunts
- Display Heinz
- Display Hunts
- Feature & Display Heinz
- Feature & Display Hunts
- Log price difference between Heinz and Hunts
Training data: 2498 cases (Heinz is chosen in 89.11%).
Test data: 300 cases (Heinz is chosen in 88.33%).

Slide 24: The Ketchup Marketing Problem
Choose a kernel mapping:
- Linear kernel
- Polynomial kernel
- RBF kernel
Do a (5-fold) cross-validation procedure to find the best combination of the manually adjustable parameters (here: C and σ).
[Figure: surface of cross-validation mean squared errors for an SVM with RBF kernel over a grid of C and σ, with its minimum and maximum marked]
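A sketch of such a grid search (scikit-learn, not the authors' tooling; note that scikit-learn parameterizes the RBF kernel by gamma rather than σ, with gamma corresponding to $1/(2\sigma^2)$, and scores by accuracy rather than mean squared error):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative stand-in data; the real problem has 7 attributes.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 7))
y = np.where(X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=300) > 0, 1, -1)

param_grid = {"C": [0.1, 1, 10, 100],
              "gamma": [0.01, 0.1, 1, 10]}  # gamma ~ 1 / (2 * sigma^2)
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("best CV accuracy:", search.best_score_)
```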

Slide 25: The Ketchup Marketing Problem – Training Set
Model: Linear Discriminant Analysis

Original group   Predicted Hunts   Predicted Heinz   Total
Hunts            68   (25.00%)     204  (75.00%)     272
Heinz            58   (2.61%)      2168 (97.39%)     2226
Overall hit rate: 89.51%

Slide 26: The Ketchup Marketing Problem – Training Set
Model: Logit Choice Model

Original group   Predicted Hunts   Predicted Heinz   Total
Hunts            214  (78.68%)     58   (21.32%)     272
Heinz            497  (22.33%)     1729 (77.67%)     2226
Overall hit rate: 77.79%

Slide 27: The Ketchup Marketing Problem – Training Set
Model: Support Vector Machines

Original group   Predicted Hunts   Predicted Heinz   Total
Hunts            255  (93.75%)     17   (6.25%)      272
Heinz            6    (0.27%)      2220 (99.73%)     2226
Overall hit rate: 99.08%

Slide 28: The Ketchup Marketing Problem – Training Set
Model: Majority Voting (all cases predicted as Heinz)

Original group   Predicted Hunts   Predicted Heinz   Total
Hunts            0    (0.00%)      272  (100.00%)    272
Heinz            0    (0.00%)      2226 (100.00%)    2226
Overall hit rate: 89.11%

Slide 29: The Ketchup Marketing Problem – Test Set
Model: Linear Discriminant Analysis

Original group   Predicted Hunts   Predicted Heinz   Total
Hunts            3    (8.57%)      32   (91.43%)     35
Heinz            3    (1.13%)      262  (98.87%)     265
Overall hit rate: 88.33%

Slide 30: The Ketchup Marketing Problem – Test Set
Model: Logit Choice Model

Original group   Predicted Hunts   Predicted Heinz   Total
Hunts            29   (82.86%)     6    (17.14%)     35
Heinz            63   (23.77%)     202  (76.23%)     265
Overall hit rate: 77.00%

Slide 31: The Ketchup Marketing Problem – Test Set
Model: Support Vector Machines

Original group   Predicted Hunts   Predicted Heinz   Total
Hunts            25   (71.43%)     10   (28.57%)     35
Heinz            3    (1.13%)      262  (98.87%)     265
Overall hit rate: 95.67%

Slide 32: Conclusion
- Support Vector Machines (SVM) can be applied to binary and multi-class classification problems.
- SVM behave robustly in multivariate problems.
- Further research in various marketing areas is needed to justify or refute the applicability of SVM.
- Support Vector Regression (SVR) can also be applied.

http://www.kernel-machines.org
Email: nalbantov@few.eur.nl

