Market Intelligence Session 10 Perceptual Maps

Perceptual Mapping. A visual representation of customer perceptions: it shows how target customers view competing alternatives in a Euclidean space representing the market, and pair-wise distances between alternatives indicate how close or far apart the products are in the minds of customers. We can also acquire information from customers to allow us to generate visual maps of the competitive landscape. I think you were introduced to these in the core, right? The idea is that we can solicit information from customers and plot that information in a way that gives us a 2- or 3-dimensional representation of how similar or dissimilar brands are perceived to be. When you look at a map, the brands that are perceived to be more similar will be closer together. We have multiple ways to get these maps: we can get direct measures of perceived similarity, or we can solicit information on a number of attributes and then use techniques to reduce the data into only a few dimensions and plot them. We will discuss both a bit more.

Some examples…

Clothing retailers

Chips

Sports apparel

Perceptual Mapping. Uses of maps: identify your closest competitors; suggest repositioning strategies; suggest advertising themes supporting repositioning; identify new product opportunities where some segment is not well served by current brands.

Perceptual Mapping. There are 2 types of maps, based on different ways of measuring similarity between brands: 1. Similarity-Based Map: based on ratings of overall similarity between brands; multidimensional scaling (MDS) is used to analyze them. 2. Attribute-Based Map: based on ratings of brands on various perceptual attributes; brands that are highly correlated on attributes are similar; Factor Analysis / Principal Components Analysis is used to analyze them.

When to use similarity- vs. attribute-based? Advantages of similarity-based maps: allows you to map products without specifying a list of attributes; better for "softer" attributes which we do not verbalize well (feel, aesthetics, smell). Disadvantages of similarity-based maps: impractical when the number of products/brands is large; interpretation of the axes is more difficult.

When to use similarity- vs. attribute-based? Advantages of attribute-based maps: works well for hard or functional attributes (product features); fewer questions required of respondents (vs. similarity), especially with a large number of considered products. Disadvantages of attribute-based maps: researchers need to clearly conceptualize the attributes; misleading if the attributes are not the ones most important to consumers; implicit equal weighting of attributes.

Similarity-Based Map. Generate a relevant set of objects (brands, products). Relevance: the set of products chosen must be the set of competitive products that are relevant for managerial decision making. Have respondents rate the similarity (e.g., on a 1-10 point scale) between every possible brand pairing. We can perfectly represent 3 brands in 2 dimensions, but with more than 3 brands there will be some information loss. MDS is a mathematical technique used to analyze similarity perceptions with minimum information loss.
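
The data-collection step above can be sketched concretely. This is a minimal illustration in Python, not part of the course's SPSS workflow; the 50 respondents and their ratings are simulated stand-ins for real survey data on the soap brands used later.

```python
import numpy as np

# Seven soap brands (the example used later); similarity rated 1-10 for every pair.
brands = ["Safeguard", "Ivory", "Dial", "Irish Spring", "Dove", "Caress", "Lever 2000"]
n = len(brands)

# Simulated stand-in for survey data: each respondent rates the n*(n-1)/2 = 21 pairs.
rng = np.random.default_rng(0)
ratings = [
    {(i, j): rng.integers(1, 11) for i in range(n) for j in range(i + 1, n)}
    for _ in range(50)
]

# Aggregate across respondents (average similarity), then convert to dissimilarity,
# since MDS works with distances: a larger value = brands seen as further apart.
similarity = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        mean_sim = np.mean([r[(i, j)] for r in ratings])
        similarity[i, j] = similarity[j, i] = mean_sim
dissimilarity = similarity.max() - similarity
np.fill_diagonal(dissimilarity, 0.0)
print(np.round(dissimilarity, 2))
```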

Similarity based map: Soap example

Similarity based map: Soap example. Ratings are aggregated across respondents, so these are averages.

SPSS Commands – similarity based
Analyze – Scale – Multidimensional Scaling (PROXSCAL)
  Select Define
  Select variables (brands to include)
  Model: Proximity transformations = Interval; Shape = Upper-triangular matrix; Proximities = Similarities; Dimensions: minimum = 2, maximum = 2
  Plots: check "Common space"

SPSS Output – similarity based. Check the fit of the 2-dimensional model. Goodness of fit is measured by "S-Stress"; we want it to be less than 0.10.
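
For comparison with the PROXSCAL steps above, here is a rough Python equivalent, continuing the simulated soap data from the earlier sketch. Note that scikit-learn reports raw stress rather than SPSS's normalized S-Stress, so the 0.10 rule of thumb does not carry over directly.

```python
from sklearn.manifold import MDS

# Metric MDS on the aggregated dissimilarity matrix built above.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)   # one (x, y) pair per brand

for name, (x, y) in zip(brands, coords):
    print(f"{name:12s} x = {x:6.2f}   y = {y:6.2f}")
print("raw stress:", round(mds.stress_, 3))
```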

X, Y coordinates can be plotted

Similarity Based Map

Labeling dimensions. Not always obvious. Three ways to generate labels: (1) your own judgment; (2) have respondents look at the dimensions; (3) run two regressions with various attributes as predictors, once with the X coordinates as the DV and then with the Y coordinates as the DV.
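
The regression approach to labeling can be sketched as follows, continuing the earlier example. The three attribute columns here are hypothetical placeholders; in practice they would be average attribute ratings collected for each brand.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical average attribute ratings per brand (rows align with `coords`),
# e.g. columns = moisturizing, deodorizing, gentle, each on a 1-10 scale.
rng = np.random.default_rng(2)
attribute_ratings = rng.uniform(1, 10, size=(len(brands), 3))

# Regress each map coordinate on the attributes; attributes with large
# coefficients on an axis suggest a label for that dimension.
for axis, label in enumerate(["X", "Y"]):
    reg = LinearRegression().fit(attribute_ratings, coords[:, axis])
    print(label, "coefficients:", np.round(reg.coef_, 2))
```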

Applications. Where are we and the competition on key dimensions? Who are Dove's biggest competitors? Which brand is seen as most different from Dial? Are there clusters of brands (substitution) or are they spread out? Are there gaps in the market? What would you want to know first?

Similarity Based Map

Next step: plotting ideal points. Ask respondents to rate the similarity between each brand and their "ideal" on the same scale as before. Their ideal becomes another "brand" in the analysis.

Similarity based map with ideal point

Mapping ideal points Run analysis separately for each respondent to get individual x,y coordinates for “ideal”

Similarity map with 1 person’s ideal point

Final step. Create a scatterplot with: the original coordinates for each brand (from the aggregate data, i.e., based on averages), and each respondent's ideal point coordinates (obtained from a separate MDS for each person).
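
A rough sketch of that final step, again with simulated data standing in for real responses. One practical wrinkle the slide glosses over: each respondent's MDS solution is only defined up to rotation, reflection, and translation, so the per-person configuration is aligned to the aggregate map (here with an orthogonal Procrustes rotation, ignoring scale) before the ideal point is overlaid.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.manifold import MDS

# Simulated per-respondent dissimilarities over the 7 brands plus an 8th "ideal" item.
rng = np.random.default_rng(3)
respondent_dissimilarities = []
for _ in range(20):
    m = rng.uniform(1, 9, size=(8, 8))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0.0)
    respondent_dissimilarities.append(m)

ideal_points = []
for d_resp in respondent_dissimilarities:
    emb = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(d_resp)
    brand_part, ideal = emb[:-1], emb[-1]
    # Align this person's brand configuration to the aggregate map `coords`.
    mu_b, mu_a = brand_part.mean(axis=0), coords.mean(axis=0)
    R, _ = orthogonal_procrustes(brand_part - mu_b, coords - mu_a)
    ideal_points.append((ideal - mu_b) @ R + mu_a)
ideal_points = np.array(ideal_points)

# Final scatterplot data: `coords` for the brands plus one ideal point per respondent.
print(np.round(ideal_points, 2))
```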

Brands. [Similarity-based map of the seven soap brands: Safeguard, Ivory, Dial, Irish Spring, Dove, Caress, Lever 2000]

With ideal points. [The same soap-brand map with respondents' ideal points overlaid]

Applications Are there unmet needs in the market? (any ideal points with no brand close by?) Segments of consumers who want different things? Competitor analysis Repositioning strategy? Brand/line extension opportunities? What should I communicate to customers?

Perceptual Mapping, Type 2: Attribute-based. Based on ratings of brands on different attributes. Steps: (1) generate a list of relevant brands; (2) generate a list of key attributes. Let's do an example. This relies on some old data that academics pulled together a few decades ago. Assume we have a list of beer brands. Given that respondents are familiar with them, they rate Bud on mild flavor, rate Miller on mild flavor, and so on. I will show you the setup for the data table, but we are going to get, for each person, for the brands they are familiar with, a rating of how strongly they agree or disagree that a particular beer has mild flavor or whatever the attribute is. We can then let SPSS run what is known as a factor analysis to produce a reduced set of dimensions that underlie the consumers' perceptions. A CLARIFICATION HERE FOR ALL before we start looking at the creation process: as a user you need to understand this tool and the importance of being able to read and interpret a perceptual map. You should also understand what the data requirements are, i.e., what information you need to get from customers. BUT it is beyond the scope of the class to ask you to understand the analytical processes that SPSS or SAS or Stata use to generate the data reduction. For those interested, there is some additional discussion in the optional text on reserve in the library, and I will post some optional material and a dataset if some people would like to mess around with some data.

Car example. Cars: Ford, Infiniti, Cadillac, Camero, Mercedes, Mazda, Buick, Porsche, Kia, Audi. Attributes: unreliable, roomy, prestige, high quality, low-profile tires, sporty, powerful engine, smooth ride, tight handling, poor value, attractive, quiet, poorly built, uncomfortable, premium sound system.

Perceptual Mapping: Attribute-based. Based on ratings of brands on different attributes. Steps: (1) generate a list of relevant brands; (2) generate a list of key attributes; (3) consumers rate each brand on each attribute.

For each brand, ask consumers to rate to what extent each attribute describes the brand, on a 10-point scale from 1 = Strongly Disagree to 10 = Strongly Agree. For example, for Car X, respondents rate Attribute A, Attribute B, and so on, each from 1 to 10.

SPSS DATA – attribute based map
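
The screenshot above is not reproduced here, but the layout of the input data matters: one row per respondent x brand, one column per attribute rating. A small illustrative sketch in Python (only four of the fifteen attributes shown, with made-up values):

```python
import pandas as pd

data = pd.DataFrame(
    [
        # respondent, brand,    unreliable, roomy, prestige, highquality
        (1, "ford",      7, 6, 3, 4),
        (1, "infiniti",  2, 5, 8, 9),
        (2, "ford",      6, 7, 4, 5),
        (2, "infiniti",  3, 4, 9, 8),
    ],
    columns=["resp", "brand", "unreliable", "roomy", "prestige", "highquality"],
)
# The factor analysis uses only the attribute columns; `brand` is kept so the
# resulting factor scores can later be averaged per brand for plotting.
print(data)
```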

Perceptual Mapping: Attribute-based. Based on ratings of brands on different attributes. Steps: (1) generate a list of relevant brands; (2) generate a list of key attributes; (3) consumers rate each brand on each attribute; (4) factor analyze the matrix of attribute ratings (use a separate row for each brand for each respondent).

Factor Analysis – Attribute based. A data reduction technique that is useful in mapping. It identifies a (hopefully) small number of factors or dimensions that represent the relationships in the larger set of attributes. For a perceptual map the question is: do 2 factors capture a high percentage of the variance in the data? The observed correlations in the data are assumed to be the result of the attributes sharing the latent (unobserved) factors.

SPSS Commands – attribute based (note: lots of alternatives here; this is a basic example)
Analyze – Dimension Reduction – Factor
  Select variables (the attributes to include; do not include the brands here)
  Descriptives: Initial Solution; Correlation Coefficients
  Extraction: Method = Principal components; analyze Correlation Matrix; display Unrotated Factor Solution; Extract = Fixed Number of Factors = 2
  Rotation: Varimax; display Rotated Solution; Loading Plots
  Scores: Save as variables (Regression method); Display Factor Score Coefficient Matrix
  Options: Sorted by size
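
For those who prefer code to dialogs, the same kind of extraction can be approximated outside SPSS. This is only a sketch under simplifying assumptions: the data matrix X is random placeholder data standing in for the real respondent-by-brand attribute ratings, and the varimax routine is a textbook implementation rather than SPSS's own.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Classic varimax rotation of a p-attributes x k-factors loading matrix."""
    p, k = loadings.shape
    R, d = np.eye(k), 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (1.0 / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R = u @ vt
        if s.sum() < d * (1 + tol):
            break
        d = s.sum()
    return loadings @ R

# Placeholder data: 200 respondent-x-brand rows, 15 attribute ratings each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))

Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize the attributes
corr = np.corrcoef(Z, rowvar=False)               # 15 x 15 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                 # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2                                             # fixed number of factors = 2
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # unrotated principal-component loadings
rotated = varimax(loadings)                       # rotated loadings (the "f's")
explained = eigvals / eigvals.sum()               # proportion of variance per component
scores = Z @ np.linalg.solve(corr, rotated)       # regression-method factor scores

print(np.round(rotated, 3))
print("cumulative variance of 2 factors:", round(explained[:2].sum(), 3))
```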

Output - Correlations. Provides a descriptive pairwise correlation matrix. You can get a feel for the data; e.g., "unreliable" and "high quality" should be negatively correlated.

[15 x 15 correlation matrix of the attribute ratings]

Variance Explained. The Eigenvalues represent the amount of variance explained by a factor and are scaled so that the sum of the Eigenvalues equals the total number of factors. Typically, factors with Eigenvalues > 1.0 are considered significant. The first 4 factors below meet this cut-off and would capture 92.6% of the total variance. We will keep 2 factors, which explain 70.3% of the variance.

Total Variance Explained (initial Eigenvalues):
Component   Total    % of Variance   Cumulative %
1           7.742    51.616           51.616
2           2.800    18.667           70.283
3           2.060    13.733           84.016
4           1.286     8.574           92.591
5            .430     2.865           95.456
6            .385     2.568           98.024
7            .196     1.304           99.328
8            .080      .530           99.858
9            .021      .142          100.000
(components 10-15 have eigenvalues that are effectively zero)

Rotation sums of squared loadings (2 factors retained): Factor 1 total = 6.979 (46.528% of variance); Factor 2 total = 3.563 (23.755%); cumulative = 70.283%.
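
A quick check of the arithmetic on this slide: with 15 attributes the eigenvalues sum to 15, so each component's percentage of variance is its eigenvalue divided by 15.

```python
eigenvalues = [7.742, 2.800]              # first two components from the table above
pct = [100 * e / 15 for e in eigenvalues]
print([round(p, 1) for p in pct])         # [51.6, 18.7]
print(round(sum(pct), 1))                 # 70.3 -> the variance kept by the 2-factor solution
```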

Output - Loadings: Rotated Component Matrix. This is the two-factor solution (each component is a factor). The loadings ("f's") represent correlations between the attributes (rows) and factors (columns). These are the coordinates for where the attributes plot in the factor space.

Rotated Component Matrix:
Attribute              Component 1   Component 2
unreliable                 -.995          .019
roomy                      -.803         -.367
prestige                    .864         -.361
highquality                 .955          .116
lowprofiletires             .668          .342
sporty                      .250          .887
powerfulengine              .594          .707
smoothride                 -.383         -.853
tighthandling               .193          .861
poorvalue                  -.892          .266
attractive                  .679          .322
quiet                       .033         -.178
poorlybuilt                -.936         -.044
uncomfortable              -.192          .442
premiumsoundsystem          .685         -.085

Output - Communalities. The reported "Extraction" is the proportion of variance in each attribute accounted for by the 2-factor solution. This is the sum of the squared loadings for each attribute across the 2 factors; e.g., the communality of .991 for unreliable = its loadings on F1 and F2 squared = (-0.995)^2 + (0.019)^2. Information on "quiet" is not well captured by the two-factor solution; we would need a third or fourth factor to capture the variance in the quiet variable.

Communalities:
Attribute              Initial   Extraction
unreliable              1.000      .991
roomy                   1.000      .780
prestige                1.000      .876
highquality             1.000      .925
lowprofiletires         1.000      .562
sporty                  1.000      .850
powerfulengine          1.000      .852
smoothride              1.000      .875
tighthandling           1.000      .779
poorvalue               1.000      .866
attractive              1.000      .565
quiet                   1.000      .033
poorlybuilt             1.000      .878
uncomfortable           1.000      .232
premiumsoundsystem      1.000      .477
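
The communality arithmetic from this slide, spelled out for two attributes:

```python
# Communality = sum of squared loadings across the retained factors.
loadings = {"unreliable": (-0.995, 0.019), "quiet": (0.033, -0.178)}
for name, (f1, f2) in loadings.items():
    print(name, round(f1 ** 2 + f2 ** 2, 3))
# unreliable ~0.99 (well captured); quiet ~0.033 (poorly captured by two factors)
```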

Back to Loadings. SPSS plots loadings as dots on a perceptual map. You can envision vectors that start at the origin and radiate in the direction of the attribute. A vector on the map indicates both magnitude and direction in the Euclidean space. Vectors are used to geometrically denote attributes of the brands. The axes of the map are a special set of vectors suggesting the underlying dimensions that best characterize how customers differentiate between alternatives.

Output - SPSS Loading Plot: without rotation

Output - SPSS Loading Plot: with rotation. [Plot of the rotated loadings for the 15 attributes: sporty, tight handling, powerful engine, uncomfortable, low profile tires, attractive, high quality, unreliable, poorly built, premium sound system, poor value, quiet, prestige, roomy, smooth ride]

Label Factors Now

Now how to plot brands in this space?

F1 and F2 are generated in SPSS as new variables. SPSS calculates the factor score for each brand (factor score coefficients x standardized attribute scores for each brand). These are the brand coordinates that you can plot.

Brand       F1         F2
ford       -1.01336   -0.00729
infiniti    1.13945   -0.05706
cadillac    0.12308   -1.86319
camero     -1.03736    1.77516
mercedes    1.09697    0.04509
mazda      -0.62771    0.44884
buick      -0.70077   -1.17192
porsche     1.15774    0.77549
kia        -1.10467   -0.28417
audi        0.96664    0.33905

[Attribute-based perceptual map of the ten car brands (camero, porsche, mazda, audi, ford, mercedes, infiniti, kia, buick, cadillac) plotted on F1 and F2]

For next time: you will do your own attribute-based map using SPSS, and we will talk more about applications of perceptual maps. Guest speaker: Caroline Klompmaker from Burt's Bees.

Optional slides

SPSS Factor Analysis Process (the "math slide" – optional). The procedure will evaluate as many factors as there are attributes (n). Choose factors such that the first factor (F1) explains as much of the total variance as possible. Choose the second factor (F2) to be orthogonal to (uncorrelated with) the first and to explain as much of the remaining variance as possible. Continue to the third, fourth, and eventually the nth factor. The extraction process can be Principal Components Analysis or another method such as Maximum Likelihood. The process chooses the "a" weights in such a way that the factors (the "F's") are optimal, where optimality is described above. The x's are the attribute ratings.
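
A small numerical sketch of the extraction described above, using principal components and placeholder data. The weight vectors "a" are the eigenvectors of the attribute correlation matrix, taken in order of decreasing eigenvalue, so each factor explains as much of the remaining variance as possible and is uncorrelated with the earlier ones.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 6))                     # placeholder standardized attribute ratings
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)

corr = np.corrcoef(Z, rowvar=False)
eigvals, A = np.linalg.eigh(corr)                 # columns of A are the "a" weights
A = A[:, np.argsort(eigvals)[::-1]]               # order factors by variance explained

F = Z @ A                                         # the factors F1..Fn
print(np.round(np.corrcoef(F, rowvar=False), 3))  # ~identity matrix: factors are orthogonal
```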

Output - Loadings: Component Matrix (unrotated) vs. Rotated Component Matrix. This is the two-factor solution (each component is a factor). The loadings ("f's") represent correlations between the attributes (rows) and factors (columns); these are the coordinates for where the attributes plot in the factor space.

Attribute              Unrotated C1   Unrotated C2   Rotated C1   Rotated C2
unreliable                -.908            .408         -.995         .019
roomy                     -.883           -.022         -.803        -.367
prestige                   .652           -.671          .864        -.361
highquality                .924           -.269          .955         .116
lowprofiletires            .748            .052          .668         .342
sporty                     .579            .718          .250         .887
powerfulengine             .824            .417          .594         .707
smoothride                -.688           -.634         -.383        -.853
tighthandling              .516            .716          .193         .861
poorvalue                 -.925            .106         -.892         .266
attractive                 .751            .029          .679         .322
quiet                     -.040           -.177          .033        -.178
poorlybuilt               -.878            .328         -.936        -.044
uncomfortable             -.002            .482         -.192         .442
premiumsoundsystem         .597           -.348          .685        -.085

Output - Factor Scores (Component Score Coefficient Matrix). Values in the original data can be approximated by linear combinations of the factors; these coefficients are the weights applied to the standardized attribute ratings to compute the factor scores (the "z's").

Attribute              Component 1   Component 2
unreliable                -.165          .088
roomy                     -.102         -.052
prestige                   .172         -.187
highquality                .147         -.041
lowprofiletires            .082          .055
sporty                    -.032          .265
powerfulengine             .039          .179
smoothride                 .007         -.243
tighthandling             -.039          .261
poorvalue                 -.125         -.012
attractive                 .085          .048
quiet                      .020         -.060
poorlybuilt               -.150          .063
uncomfortable             -.068          .158
premiumsoundsystem         .120         -.084

Brands: SPSS calculates the factor score for each brand (factor score coefficient matrix x standardized attribute scores for each brand). These are the brand coordinates that you can plot. The standardized attribute scores are obtained from Descriptives; multiplying the 10 x 15 matrix of standardized attribute scores by the 15 x 2 factor score coefficient matrix on the previous slide yields the F1 and F2 columns shown earlier (ford -1.01336, -0.00729; infiniti 1.13945, -0.05706; cadillac 0.12308, -1.86319; camero -1.03736, 1.77516; mercedes 1.09697, 0.04509; mazda -0.62771, 0.44884; buick -0.70077, -1.17192; porsche 1.15774, 0.77549; kia -1.10467, -0.28417; audi 0.96664, 0.33905). F1 and F2 are generated in SPSS as new variables.
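
To make the multiplication on this slide concrete, here is a partial hand calculation using only the first two attributes. The score coefficients are taken from the coefficient matrix above; the two standardized ratings are illustrative values, not the actual data, so the result is only the contribution of those two attributes to a brand's (F1, F2) position.

```python
import numpy as np

score_coefs = np.array([[-0.165, 0.088],    # unreliable: coefficients on F1, F2
                        [-0.102, -0.052]])  # roomy:      coefficients on F1, F2
profile = np.array([1.0, -0.5])             # illustrative standardized ratings on those attributes
print(profile @ score_coefs)                # partial contribution to (F1, F2)
# The full brand score sums this product over all 15 standardized attribute ratings.
```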