1. EPS 651 Multivariate Analysis: Factor Analysis, Principal Components Analysis, and Neural Network Analysis (Self-Organizing Maps). For next week: continue with T&F Chapter 13 and please read the study below, posted on our webpage: Ninness, C., Lauter, J., Coffee, M., Clary, L., Kelly, E., Rumph, M., Rumph, R., Kyle, R., & Ninness, S. (2012). Behavioral and biological neural network analyses: A common pathway toward pattern recognition and prediction. The Psychological Record, 62(4).
2. T&F Chapter 13 --> Section 13.5.3, page 642. Several slides are based on material from the UCLA Academic Technology Services SPSS pages.
3. Principal components analysis (PCA) and factor analysis (FA) are methods of data reduction. Suppose that you have a dozen variables that are correlated. You might use principal components analysis to reduce your 12 measures to a few principal components. For example, you may be most interested in obtaining the component scores (which are variables added to your data set) and/or in examining the dimensionality of the data. If two components are extracted and those two components account for 68% of the total variance, then we would say that two dimensions in the component space account for 68% of the variance. Unlike factor analysis, principal components analysis is not usually used to identify underlying latent variables. [direct quote]
4. FA and PCA: data reduction methods. If raw data are used, the procedure will create the original correlation matrix or covariance matrix, as specified by the user. If the correlation matrix is used, the variables are standardized and the total variance will equal the number of variables used in the analysis (because each standardized variable has a variance equal to 1). If the covariance matrix is used, the variables will remain in their original metric; however, one must take care to use variables whose variances and scales are similar. Unlike factor analysis, which analyzes the common variance, principal components analysis analyzes the total variance. Also, principal components analysis assumes that each original measure is collected without measurement error. [direct quote]
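To make the correlation-matrix route concrete, here is a minimal numpy sketch, not the SPSS procedure; the function name `pca_correlation` and its return values are our own:

```python
# Minimal sketch of PCA via eigendecomposition of the correlation matrix.
# Standardizing first means the total variance equals the number of variables.
import numpy as np

def pca_correlation(X):
    """X: n-by-p data matrix. Returns eigenvalues, proportion of variance
    explained, and component scores (one row per subject)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize each column
    R = np.corrcoef(X, rowvar=False)                  # p-by-p correlation matrix
    eigvals, V = np.linalg.eigh(R)                    # eigh returns ascending order
    eigvals, V = eigvals[::-1], V[:, ::-1]            # reorder, largest first
    scores = Z @ V                                    # component scores per subject
    return eigvals, eigvals / eigvals.sum(), scores   # eigenvalues sum to p
```

If, say, the first two entries of the percent-variance vector sum to .68, that is exactly the "two dimensions account for 68% of the variance" statement above.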
5. Spin control: factor analysis is also a method of data reduction, and it is more forgiving than PCA. Factor analysis seeks to find underlying unobservable (latent) variables that are reflected in the observed (manifest) variables. There are many different methods that can be used to conduct a factor analysis (such as principal axis factoring, maximum likelihood, generalized least squares, unweighted least squares). There are also many different types of rotations that can be done after the initial extraction of factors, including orthogonal rotations, such as varimax and equimax, which impose the restriction that the factors cannot be correlated, and oblique rotations, such as promax, which allow the factors to be correlated with one another. You also need to determine the number of factors that you want to extract. Given the number of factor analytic techniques and options, it is not surprising that different analysts could reach very different results analyzing the same data set. However, all analysts are looking for simple structure: a pattern of results such that each variable loads highly onto one and only one factor. [direct quote]
6. FA vs. PCA conceptually: FA produces factors; PCA produces components.
7. Kinds of research questions re PCA and FA:
- What does each factor mean? Interpretation? Your call.
- What is the percentage of variance in the data accounted for by the factors? SPSS & psyNet will show you.
- Which factors account for the most variance? SPSS & psyNet.
- How well does the factor structure fit a given theory? Your call.
- What would each subject's score be if they could be measured directly on the factors? Excellent question!
8. Before you can even start to answer these questions using FA:
- Kaiser-Meyer-Olkin Measure of Sampling Adequacy (should be > .6) – this measure varies between 0 and 1, and values closer to 1 are better; .6 is a suggested minimum. It answers the question: is there enough data relative to the number of variables?
- Bartlett's Test of Sphericity (should be < .05) – this tests the null hypothesis that the correlation matrix is an identity matrix, a matrix in which all of the diagonal elements are 1 and all off-diagonal elements are 0. Ostensibly, you want to reject this null hypothesis. This, of course, is psychobabble.
Taken together, these two tests provide a minimum standard which should be passed before a factor analysis (or a principal components analysis) is conducted.
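Both checks are easy to compute by hand if you want to sanity-check the SPSS output. A minimal sketch using numpy and scipy (the function names `bartlett_sphericity` and `kmo` are ours, not an SPSS or library API):

```python
# Sketch of the two pre-FA checks; X is an n-by-p data matrix (rows = subjects).
import numpy as np
from scipy import stats

def bartlett_sphericity(X):
    """Chi-square test that the correlation matrix is an identity matrix."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)      # statistic, p-value (want p < .05)

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (want > .6)."""
    R = np.corrcoef(X, rowvar=False)
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    partial = -Rinv / d                        # matrix of partial correlations
    off = ~np.eye(R.shape[0], dtype=bool)      # off-diagonal mask
    r2, p2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + p2)
```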
9. What is a common factor? It is an abstraction, a "hypothetical construct" that relates at least two of our measured variables to a factor. In FA, psychometricians / statisticians try to estimate the common factors that contribute to the variance in a set of variables. Is this an act of logical inference, a creation, or a figment of a psychometrician's imagination? Depends on who you ask.
10. What is a unique factor? It is a factor that contributes to the variance in only one variable. There is one unique factor for each variable. The unique factors are unrelated to one another and unrelated to the common factors. We want to exclude these unique factors from our solution. Seems reasonable … right?
11. Assumptions: Factor analysis needs large samples, and this is one of its few drawbacks. The more reliable the correlations are, the smaller the number of subjects needed. You need enough subjects for stable estimates, but how many is enough?
12. Assumptions. Take-home hint: 50 is very poor, 100 poor, 200 fair, 300 good, 500 very good, and 1,000 excellent. Shoot for a minimum of 300, usually. The more highly correlated the marker variables, the fewer subjects are needed.
13. Assumptions: No outliers – their obvious influence on correlations would bias the results. Multicollinearity: in PCA it is not a problem, because no matrix inversion is required; in FA, if det(R) or any eigenvalue approaches 0, multicollinearity is likely.
14. The above assumptions at work: Note that the metric for all these variables is the same (since they employed a rating scale). So do we run the FA on a correlation or a covariance matrix, and does it matter?
15. Sample data set from Chapter 13 (p. 617), Tabachnick and Fidell, Principal Components and Factor Analysis. Keep in mind, multivariate normality is assumed when statistical inference is used to determine the number of factors. The above data set is far too small to fulfill the normality assumption; however, even large data sets frequently violate this assumption and compromise the analysis. Multivariate normality also implies that relationships among pairs of variables are linear. The analysis is degraded when linearity fails, because correlation measures linear relationships and does not reflect nonlinear ones. Linearity among variables is assessed through visual inspection of scatterplots.
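For the visual linearity check, a short sketch using pandas' scatterplot matrix; the ratings below are illustrative values patterned after the five-skier example, not necessarily the exact T&F numbers:

```python
# Sketch: visual linearity check via a scatterplot matrix.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({                 # illustrative ratings, five skiers
    "COST":   [32, 61, 59, 36, 62],
    "LIFT":   [64, 37, 40, 62, 46],
    "DEPTH":  [65, 62, 45, 34, 43],
    "POWDER": [67, 65, 43, 35, 40],
})

pd.plotting.scatter_matrix(df, figsize=(6, 6))
plt.show()                          # look for roughly linear point clouds
```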
16. Equations – Extraction – Components. Correlation matrix with 1s in the diagonal. There is a large correlation between Cost and Lift and another between Depth and Powder. Looks like two possible factors – why?
18. Are you sure about this? L = V'RV, i.e., eigenvalue matrix = (eigenvector matrix)' × correlation matrix × eigenvector matrix. We are reducing to a few factors which duplicate the matrix? Does this seem reasonable?
19. Equations – Extraction – Obtaining components. L = V'RV, where L is the eigenvalue matrix and V is the eigenvector matrix; it is important to know how L is constructed. This diagonalizes the R matrix and reorganizes the variance into eigenvalues: a 4 × 4 matrix can be summarized by 4 numbers instead of 16. In a two-by-two matrix we derive two eigenvalues, with two eigenvectors each containing two elements; in a four-by-four matrix we derive four eigenvalues, with eigenvectors each containing four elements.
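A quick numerical check that L = V'RV really is diagonal; the 4 × 4 correlation matrix below is hypothetical, patterned after the ski example rather than copied from the text:

```python
# Sketch: diagonalizing a hypothetical correlation matrix R into eigenvalues.
import numpy as np

R = np.array([[ 1.00, -0.95,  0.33,  0.39],
              [-0.95,  1.00, -0.24, -0.28],
              [ 0.33, -0.24,  1.00,  0.94],
              [ 0.39, -0.28,  0.94,  1.00]])

eigvals, V = np.linalg.eigh(R)     # eigenvalues and eigenvectors of R
L = V.T @ R @ V                    # L = V'RV
print(np.round(L, 6))              # diagonal: the four eigenvalues, rest ~0
print(eigvals.sum())               # eigenvalues sum to trace(R) = 4 variables
```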
20. Remember this? For the example matrix $\begin{pmatrix} 5 & 4 \\ 1 & 2 \end{pmatrix}$, setting $\det(R - \lambda I) = (5-\lambda)(2-\lambda) - (4)(1) = 0$ gives the characteristic equation $\lambda^2 - 7\lambda + 6 = 0$. With a two-by-two matrix we derive two eigenvalues, with two eigenvectors each containing two elements; with a four-by-four matrix, the eigenvectors each contain four elements. It simply becomes a longer polynomial.

21. An equation of the second degree with two roots [the eigenvalues]. With $a = 1$, $b = -7$, and $c = 6$, the quadratic formula gives

$$\lambda_i = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{7 \pm \sqrt{(-7)^2 - 4(1)(6)}}{2(1)} = \frac{7 \pm 5}{2},$$

so $\lambda_1 = 6$ and $\lambda_2 = 1$.
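The hand calculation can be verified in two lines of numpy:

```python
# Verify the worked example: eigenvalues of the 2x2 matrix above.
import numpy as np

A = np.array([[5, 4],
              [1, 2]])
print(np.linalg.eigvals(A))   # -> 6 and 1, matching the quadratic roots
```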
24. Equations – Extraction – Obtaining components: R = VLV' (see the SPSS matrix output). Careful here: the value is correct, but it appears as a "2" in the text.
25. Obtaining our original correlation matrix: R = VLV', where L is the eigenvalue matrix and V' is the transpose of the eigenvector matrix.
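This identity is what justifies the reduction: keep only the largest eigenvalues and R is still reproduced almost exactly. A sketch, reusing the hypothetical R from the earlier check:

```python
# Sketch: rebuild R from all eigenpairs, then from only the two largest.
import numpy as np

R = np.array([[ 1.00, -0.95,  0.33,  0.39],
              [-0.95,  1.00, -0.24, -0.28],
              [ 0.33, -0.24,  1.00,  0.94],
              [ 0.39, -0.28,  0.94,  1.00]])

eigvals, V = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
eigvals, V = eigvals[order], V[:, order]

R_full = V @ np.diag(eigvals) @ V.T          # R = VLV': exact reconstruction
R_two  = V[:, :2] @ np.diag(eigvals[:2]) @ V[:, :2].T  # keep only 2 components
print(np.round(R_full - R, 8))               # ~0: the full set reproduces R
print(np.round(R_two, 2))                    # close to R with only 2 factors
```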
26. Bartlett's Test of Sphericity – this tests the null hypothesis that the correlation matrix is an identity matrix. An identity matrix is a matrix in which all of the diagonal elements are 1 and all off-diagonal elements are 0.
28. We have "extracted" two factors from four variables using a small data set. Other than the magic "2" below, this is a decent example.
29. Following SPSS extraction and rotation and all that jazz … in this case, not much difference [other data sets show big changes]. Here we see that Factor 1 is mostly Depth and Powder (a Snow Condition factor), and Factor 2 is mostly Cost and Lift, which is a Resort factor. Both factors have complex loadings.
30. Using SPSS 12, SPSS 20, and psyNet.SOM. This is a variation on your homework; just use your own numbers and replicate the process. (We may use these hypothetical data as part of a study.)
31. Here is an easier way than doing it by hand: arrange the data in Excel format as below and import it into SPSS 20.
45. Matching the psyNet PCA correlation matrix with SPSS FA: this part is the same, but the rest of the PCA goes in an entirely different direction.
46. Kaiser's measure of sampling adequacy: values of .6 and above are required for a good FA. Remember these guys? An MSA of .9 is marvelous; .4 is not too impressive – hey, it was a small sample. Normally, variables with small MSAs should be deleted.
47. Looks like two factors can be isolated/extracted. Which ones? And what shall we call them?
48. Here they are again – they have eigenvalues > 1. We are reducing to a few factors which duplicate the matrix?
51. SPSS will provide an orthogonal rotation without your help – look at the iterations.
52. Extraction, Rotation, and Meaning of Factors.
- Orthogonal rotation [assumes no correlation among the factors]: the loading matrix gives the correlation between each variable and the factor.
- Oblique rotation [assumes possible correlations among the factors]: the factor correlation matrix gives the correlations among the factors; the structure matrix gives the correlations between factors and variables.
53. Oblique rotations – fun, but not today. Factor extraction is usually followed by rotation in order to maximize large correlations and minimize small ones. Rotation usually increases simple structure and interpretability. The most commonly used is the varimax variance-maximizing procedure (an orthogonal rotation), which maximizes the variance of the factor loadings.
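For the curious, varimax is short enough to sketch directly. This is a generic gradient-projection implementation of the varimax criterion, not the SPSS routine:

```python
# Minimal varimax sketch: rotate loading matrix A (variables x factors)
# to maximize the variance of the squared loadings.
import numpy as np

def varimax(A, gamma=1.0, max_iter=100, tol=1e-6):
    """Return the varimax-rotated loading matrix."""
    p, k = A.shape
    Rot = np.eye(k)                    # accumulated rotation matrix
    crit = 0.0
    for _ in range(max_iter):
        L = A @ Rot
        # gradient of the varimax criterion, projected back to a rotation
        grad = A.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        U, s, Vt = np.linalg.svd(grad)
        Rot = U @ Vt
        crit_new = s.sum()
        if crit_new < crit * (1 + tol):  # criterion stopped improving
            break
        crit = crit_new
    return A @ Rot                     # rotated loadings: simpler structure
```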
54. Rotating your axes "orthogonally" ~ sounds painfully chiropractic. Where are your components located on these graphs? What are the upper and lower limits on each of these axes? Cost and Lift may be a factor, but they are polar opposites.
55. Abbreviated equations. The factor weight matrix [B] is found by multiplying the loading matrix [A] by the inverse of the correlation matrix [R⁻¹]: B = R⁻¹A. Factor scores [F] are found by multiplying the standardized scores [Z] for each individual by the factor weight matrix [B] and adding them up: F = ZB. See the matrix output.
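Both formulas translate directly into code. A sketch of the regression-method factor scores (the function name is ours):

```python
# Sketch of the two formulas above: B = R^{-1} A and F = Z B.
import numpy as np

def factor_scores(X, A):
    """X: n-by-p raw data; A: p-by-k (rotated) loading matrix.
    Returns estimated factor scores, one row per subject."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize each variable
    R = np.corrcoef(X, rowvar=False)                  # correlation matrix
    B = np.linalg.solve(R, A)                         # B = R^-1 A (factor weights)
    return Z @ B                                      # F = Z B
```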
56. Abbreviated equations. The specific goals of PCA or FA are to summarize patterns of correlations among observed variables, to reduce a large number of observed variables to a smaller number of factors, to provide an operational definition (a regression equation) for an underlying process by using observed variables, and to test a theory about the nature of underlying processes.
57. Standardized variables as factors. You can also estimate what each subject would score on the "standardized variables." This is a revealing procedure, and often overlooked.
58. Predictions based on factor analysis: standard scores.
59. Predictions based on factor analysis: standard scores. Interesting stuff … what about cost?
60. Predictions based on factor analysis: standard scores. And this is supposed to represent … ?
63. Transpose the data to analyze by class/factors: 4 rows × 4 columns, in CSV format.
64. SOM classification of ski data. SOM classification 1: Depth and Powder across 5 Ss. Nice match with FA Factor 1.
65. SOM classification 2: Cost across 5 Ss – class/factor ?? SOM classification 3: Lift across 5 Ss. Near match with FA Factor 2.
66. Factor 1 appears to address Depth and Powder. SOM classification 1: Depth and Powder across 5 Ss – nice match with FA Factor 1. This could be placed into a logistic regression to predict with reasonable accuracy.
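psyNet.SOM's internals are not shown in these slides, so here is a generic, minimal 1-D self-organizing map sketch in numpy; the grid size and learning schedule are arbitrary illustrative choices, not psyNet's:

```python
# Minimal 1-D self-organizing map sketch (generic, not the psyNet.SOM code).
import numpy as np

def train_som(X, n_nodes=3, n_iter=500, lr0=0.5, sigma0=1.0, seed=0):
    """Fit a 1-D SOM to the rows of X; returns the node weight vectors."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(X.min(), X.max(), size=(n_nodes, X.shape[1]))
    for t in range(n_iter):
        lr = lr0 * (1 - t / n_iter)                   # decaying learning rate
        sigma = sigma0 * (1 - t / n_iter) + 1e-3      # decaying neighborhood width
        x = X[rng.integers(len(X))]                   # random training row
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        dist = np.abs(np.arange(n_nodes) - bmu)       # grid distance to the BMU
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # neighborhood function
        W += lr * h[:, None] * (x - W)                # pull nodes toward x
    return W

# Classify each subject by its best-matching node, e.g.:
# W = train_som(data); labels = [np.argmin(((W - x)**2).sum(1)) for x in data]
```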
67. Factor 2 appears to address Lift. SOM classification 3: Lift across 5 Ss.
68. Predictions based on factor analysis: standard scores. Factor 3: ?? SOM classification 2: Cost across 5 Ss.
69. UCI Center for Machine Learning and Intelligent Systems: the Iris data set (Iris setosa, Iris versicolour, Iris virginica). This data set has provided the foundation for multivariate statistics and machine learning.
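As a closing illustration, a sketch that loads Iris and extracts two principal components, assuming scikit-learn is installed:

```python
# Sketch: PCA on the classic Iris data (assumes scikit-learn is available).
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = load_iris().data                       # 150 flowers x 4 measurements
Z = StandardScaler().fit_transform(X)      # standardize: correlation-matrix PCA
pca = PCA(n_components=2).fit(Z)
print(pca.explained_variance_ratio_)       # ~[0.73, 0.23]: two components suffice
```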