
1 EPS 651 Multivariate Analysis: Factor Analysis, Principal Components Analysis, and Neural Network Analysis (Self-Organizing Maps)
For next week: continue with T&F Chapter 13, and please read the study below, posted on our webpage (TPR_VOL62 NO4.pdf):
Ninness, C., Lauter, J., Coffee, M., Clary, L., Kelly, E., Rumph, M., Rumph, R., Kyle, R., & Ninness, S. (2012). Behavioral and Biological Neural Network Analyses: A Common Pathway toward Pattern Recognition and Prediction. The Psychological Record, 62.

2 T&F Chapter 13 --> 13.5.3 page 642
Several slides are based on material from the UCLA SPSS Academic Technology Services

3 Principal components analysis (PCA) and factor analysis (FA) are methods of data reduction: Suppose that you have a dozen variables that are correlated. You might use principal components analysis to reduce your 12 measures to a few principal components. For example, you may be most interested in obtaining the component scores (which are variables that are added to your data set) and/or in looking at the dimensionality of the data. For example, if two components are extracted and those two components account for 68% of the total variance, then we would say that two dimensions in the component space account for 68% of the variance. Unlike factor analysis, principal components analysis is not usually used to identify underlying latent variables. [direct quote]
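To make the reduction concrete, here is a minimal numpy sketch (the data, the two-dimensional structure, and the Kaiser cutoff below are illustrative assumptions, not the slide's example):

```python
import numpy as np

# Hypothetical sample: 300 cases on 12 correlated measures driven by 2 latent dimensions
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))
data = latent @ rng.normal(size=(2, 12)) + 0.5 * rng.normal(size=(300, 12))

R = np.corrcoef(data, rowvar=False)                # 12 x 12 correlation matrix
eigvals, V = np.linalg.eigh(R)                     # eigh, since R is symmetric
order = np.argsort(eigvals)[::-1]                  # largest components first
eigvals, V = eigvals[order], V[:, order]

k = int((eigvals > 1.0).sum())                     # Kaiser criterion: eigenvalues over 1
Z = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize, since R was analyzed
scores = Z @ V[:, :k]                              # component scores added to the data set
print(k, eigvals[:k].sum() / eigvals.sum())        # components kept; proportion of total variance
```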

4 FA and PCA: Data reduction methods
If raw data are used, the procedure will create the original correlation matrix or covariance matrix, as specified by the user. If the correlation matrix is used, the variables are standardized, and the total variance will equal the number of variables used in the analysis (because each standardized variable has a variance equal to 1). If the covariance matrix is used, the variables remain in their original metric; however, one must take care to use variables whose variances and scales are similar. Unlike factor analysis, which analyzes only the common variance, principal components analysis analyzes the total variance. Also, principal components analysis assumes that each original measure is collected without measurement error. [direct quote]
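A quick numpy illustration of the correlation-versus-covariance distinction, using hypothetical data with deliberately mismatched scales:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(100, 4)) * [1, 10, 100, 1000]  # four variables, very different metrics

C = np.cov(data, rowvar=False)        # covariance: variables keep their original metric
R = np.corrcoef(data, rowvar=False)   # correlation: variables standardized first

print(np.trace(R))   # 4.0 -- total variance equals the number of variables
print(np.trace(C))   # dominated by the large-scale variables, as their eigenvalues will be
```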

5 Spin Control: Factor analysis is also a method of data reduction – more forgiving than PCA. Factor analysis seeks to find underlying unobservable (latent) variables that are reflected in the observed (manifest) variables. There are many different methods that can be used to conduct a factor analysis (such as principal axis factoring, maximum likelihood, generalized least squares, and unweighted least squares). There are also many different types of rotations that can be done after the initial extraction of factors, including orthogonal rotations, such as varimax and equimax, which impose the restriction that the factors cannot be correlated, and oblique rotations, such as promax, which allow the factors to be correlated with one another. You also need to determine the number of factors that you want to extract. Given the number of factor analytic techniques and options, it is not surprising that different analysts could reach very different results analyzing the same data set. However, all analysts are looking for simple structure: a pattern of results such that each variable loads highly onto one and only one factor. [direct quote]

6 FA vs. PCA conceptually: FA produces factors; PCA produces components.

7 Kinds of Research Questions re PCA and FA
What does each factor mean? Interpretation? Your call.
What is the percentage of variance in the data accounted for by the factors? SPSS & psyNet will show you.
Which factors account for the most variance? SPSS & psyNet.
How well does the factor structure fit a given theory? Your call.
What would each subject's score be if they could be measured directly on the factors? Excellent question!

8 Before you can even start to answer these questions using FA
Kaiser-Meyer-Olkin Measure of Sampling Adequacy (should be > .6) – This measure varies between 0 and 1, and values closer to 1 are better. A value of .6 is a suggested minimum. It answers the question: Is there enough data relative to the number of variables?
Bartlett's Test of Sphericity (p should be < .05) – This tests the null hypothesis that the correlation matrix is an identity matrix. An identity matrix is a matrix in which all of the diagonal elements are 1 and all off-diagonal elements are 0. Ostensibly, you want to reject this null hypothesis. This, of course, is psychobabble.
Taken together, these two tests provide a minimum standard which should be passed before a factor analysis (or a principal components analysis) is conducted.
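Both statistics can be computed directly from the correlation matrix; here is a minimal numpy/scipy sketch (the function names are mine, and `data` is assumed to be a cases-by-variables array):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Tests H0: the correlation matrix is an identity matrix (want p < .05)."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    # chi-square approximation built on the log-determinant of R
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (want > .6)."""
    R = np.corrcoef(data, rowvar=False)
    S = np.linalg.inv(R)
    A = -S / np.sqrt(np.outer(np.diag(S), np.diag(S)))  # anti-image (partial) correlations
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(A, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (A ** 2).sum())
```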

9 What is a Common Factor? It is an abstraction, a "hypothetical construct," that relates at least two of our measurement variables to a common factor. In FA, psychometricians / statisticians try to estimate the common factors that contribute to the variance in a set of variables. Is this an act of logical inference, a creation, or a figment of a psychometrician's imagination? Depends on whom you ask.

10 What is a Unique Factor? It is a factor that contributes to the variance in only one variable. There is one unique factor for each variable. The unique factors are unrelated to one another and unrelated to the common factors. We want to exclude these unique factors from our solution. Seems reasonable … right?

11 Assumptions: Factor analysis needs large samples – this is one of its few drawbacks. The more reliable the correlations are, the smaller the number of subjects needed. You need enough subjects for stable estimates – how many is enough?

12 Assumptions – Take-home hint:
As rough sample-size guidelines: 50 very poor, 100 poor, 200 fair, 300 good, 500 very good, 1,000 excellent. Shoot for a minimum of 300, usually. The more highly correlated the markers, the fewer subjects are needed.

13 Assumptions: No outliers – their obvious influence on correlations would bias results. Multicollinearity: in PCA it is not a problem (no matrix inversions are required); in FA, if det(R) or any eigenvalue approaches 0, multicollinearity is likely.
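A quick way to run that check outside SPSS; a sketch, with the near-zero tolerance being my assumption:

```python
import numpy as np

def multicollinearity_warning(R, tol=1e-5):
    """True if det(R) or the smallest eigenvalue of R approaches zero."""
    smallest_eig = np.linalg.eigvalsh(R).min()   # eigvalsh: R is symmetric
    return np.linalg.det(R) < tol or smallest_eig < tol
```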

14 The above Assumptions at Work:
Note that the metric for all these variables is the same (since they employed a rating scale). So do we run the FA on a correlation matrix or a covariance matrix – and does it matter?

15 Sample Data Set from Chapter 13 (p. 617), Tabachnick and Fidell: Principal Components and Factor Analysis
Keep in mind, multivariate normality is assumed when statistical inference is used to determine the number of factors. The above dataset is far too small to fulfill the normality assumption; however, even large datasets frequently violate this assumption and compromise the analysis. Multivariate normality also implies that relationships among pairs of variables are linear. The analysis is degraded when linearity fails, because correlation measures linear relationships and does not reflect nonlinear ones. Linearity among variables is assessed through visual inspection of scatterplots.

16 Equations – Extraction – Components
Correlation matrix with 1s in the diagonal. There is a large correlation between Cost and Lift, and another between Depth and Powder. Looks like two possible factors – why?

17

18 Are you sure about this? L = V'RV, i.e., eigenvalue matrix = (eigenvector matrix)' x correlation matrix x eigenvector matrix. We are reducing to a few factors that duplicate the matrix? Does this seem reasonable?

19 Equations – Extraction - Obtaining components
From a two-by-two matrix we derive two eigenvalues, with two eigenvectors each containing two elements; from a four-by-four matrix we derive four eigenvalues, with eigenvectors each containing four elements. L = V'RV, where L is the eigenvalue matrix and V is the eigenvector matrix. It is important to know how L is constructed: this diagonalizes the R matrix and reorganizes the variance into eigenvalues. A 4 x 4 matrix can be summarized by 4 numbers instead of 16.
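A minimal numpy check of L = V'RV (the correlation values below are made up for illustration; they are not T&F's ski matrix):

```python
import numpy as np

R = np.array([[1.00, 0.95, 0.10, 0.05],
              [0.95, 1.00, 0.05, 0.10],
              [0.10, 0.05, 1.00, 0.90],
              [0.05, 0.10, 0.90, 1.00]])

eigvals, V = np.linalg.eigh(R)            # columns of V are the eigenvectors
L = V.T @ R @ V                           # L = V'RV diagonalizes R
print(np.round(L, 6))                     # diagonal: 4 numbers summarize the 4 x 4 matrix
print(np.allclose(np.diag(L), eigvals))   # True
```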

20 It simply becomes a longer polynomial
Remember this? For a 2 x 2 matrix with diagonal elements 5 and 2 and off-diagonal elements 4 and 1, set the determinant of R - λI to zero:
(5 - λ)(2 - λ) - (4)(1) = 0
With a two-by-two matrix we derive two eigenvalues, with two eigenvectors each containing two elements. With a four-by-four matrix we derive four eigenvalues, with eigenvectors each containing four elements – it simply becomes a longer polynomial.

21 An equation of the second degree with two roots [eigenvalues]
The determinantal equation (5 - λ)(2 - λ) - (4)(1) = 0 expands to λ² - 7λ + 6 = 0, where a = 1, b = -7, and c = 6. The quadratic formula gives the two roots:
λ = (-b ± √(b² - 4ac)) / 2a
λ₁ = (7 + √((-7)² - 4(1)(6))) / 2(1) = (7 + 5) / 2 = 6
λ₂ = (7 - √((-7)² - 4(1)(6))) / 2(1) = (7 - 5) / 2 = 1

22

23 From Eigenvalues to Eigenvectors
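The slide image is not transcribed; as a sketch, numpy can carry the same 2 x 2 example (off-diagonal placement assumed) from eigenvalues to eigenvectors:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])               # the 2 x 2 example from the previous slides

eigvals, eigvecs = np.linalg.eig(A)      # A is not symmetric, so eig rather than eigh
print(np.round(eigvals, 6))              # the roots 6 and 1 (order may vary)
print(np.round(eigvecs, 4))              # one eigenvector per column

v, lam = eigvecs[:, 0], eigvals[0]
print(np.allclose(A @ v, lam * v))       # True: A v = lambda v, the defining relation
```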

24 Equations – Extraction – Obtaining components
R = VLV' (SPSS matrix output). Careful here: the value is correct, but it appears as a "2" in the text.

25 Obtaining our original correlation matrix: R = VLV', where L is the eigenvalue matrix and V' is the transposed eigenvector matrix.
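And the reverse composition, as a small self-contained check with made-up values:

```python
import numpy as np

R = np.array([[1.0, 0.8],
              [0.8, 1.0]])
eigvals, V = np.linalg.eigh(R)
print(np.allclose(V @ np.diag(eigvals) @ V.T, R))   # True: R = VLV'
```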

26 Bartlett's Test of Sphericity – This tests the null hypothesis that the correlation matrix is an identity matrix. An identity matrix is a matrix in which all of the diagonal elements are 1 and all off-diagonal elements are 0.

27 Equations – Extraction – Obtaining Components

28 We have "extracted" two factors from four variables
Other than the magic "2" below, this is a decent example: we have "extracted" two factors from four variables, using a small data set. [Slide shows the extraction output; 1.91]

29 Following SPSS extraction and rotation and all that jazz… in this case, not much difference [other data sets show big changes]. Here we see that Factor 1 is mostly Depth and Powder (a Snow Condition factor); Factor 2 is mostly Cost and Lift (a Resort factor). Both factors have complex loadings.

30 Using SPSS 12, SPSS 20 and psyNet.SOM
This is a variation on your homework. Just use your own numbers and replicate the process. (We may use these hypothetical data as part of a study.)

31 Here is an easier way than doing it by hand: Arrange data in Excel Format as below: SPSS 20

32 Select Data Reduction: SPSS 12

33 Select Data Reduction: SPSS 20

34 Select Variables and Descriptives: SPSS 12

35 Select Variables and Descriptives: SPSS 20

36 Start with a basic run using Principal Components: SPSS 12
Eigenvalues over 1

37 Start with a basic run using Principal Components: SPSS 12
Fixed number of factors

38 Select Varimax: SPSS 12

39 Select Varimax: SPSS 20

40 Under Options, select "exclude cases listwise" and "sort by size": SPSS 12

41 Under Options, select "exclude cases listwise" and "sort by size": SPSS 20

42 Under Scores, select “save variables” and “display matrix”: SPSS 20

43 Watch what pops out of your oven – a real time saver.

44

45 Matching psyNet PCA correlation matrix with SPSS FA
This part is the same, but the rest of PCA goes in an entirely different direction.

46 Kaiser's measure of sampling adequacy: values of .6 and above are required for a good FA. Remember these guys? An MSA of .9 is marvelous; .4 is not too impressive – hey, it was a small sample. Normally, variables with small MSAs should be deleted.

47 Looks like two factors can be isolated/extracted – which ones? And what shall we call them?

48 Here they are again – they have eigenvalues > 1
We are reducing to a few factors which duplicate the matrix?

49 Fairly Close

50 Rotations – Nice hints here

51 SPSS will provide an Orthogonal Rotation without your help – look at the iterations

52 Extraction, Rotation, and Meaning of Factors
Orthogonal rotation [assumes no correlation among the factors]: Loading matrix – correlation between each variable and the factor.
Oblique rotation [assumes possible correlations among the factors]: Factor correlation matrix – correlation between the factors. Structure matrix – correlation between factors and variables.

53 Oblique Rotations – Fun, but not today
Factor extraction is usually followed by rotation in order to maximize large correlations and minimize small correlations. Rotation usually increases simple structure and interpretability. The most commonly used is the varimax variance-maximizing procedure, which maximizes the variance of the factor loadings.
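For reference, a compact varimax sketch (the standard textbook algorithm, not SPSS's or psyNet's exact code; gamma = 1 gives varimax):

```python
import numpy as np

def varimax(A, gamma=1.0, max_iter=50, tol=1e-6):
    """Orthogonally rotate a loading matrix A (variables x factors)
    to maximize the variance of the squared loadings."""
    p, k = A.shape
    T = np.eye(k)        # accumulated orthogonal rotation
    d = 0.0
    for _ in range(max_iter):
        L = A @ T
        # gradient of the varimax criterion
        G = A.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.diag(L.T @ L)))
        u, s, vh = np.linalg.svd(G)
        T = u @ vh       # nearest orthogonal matrix
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break        # criterion no longer improving
        d = d_new
    return A @ T         # rotated loadings, e.g. rotated = varimax(loadings)
```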

54 Rotating your axes "orthogonally" ~ sounds painfully chiropractic
Where are your components located on these graphs? What are the upper and lower limits on each of these axes? Cost and Lift may be a factor, but they are polar opposites.

55 Abbreviated Equations
The factor score coefficient (weight) matrix [B] is found by multiplying the loading matrix [A] by the inverse of the correlation matrix: B = R⁻¹A (see matrix output). Factor scores [F] are found by multiplying the standardized scores [Z] for each individual by the factor weight matrix [B] and adding them up: F = ZB.
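A minimal sketch of those two equations (the inputs are assumed: R is the variables-by-variables correlation matrix, A the rotated loading matrix, and Z the subjects-by-variables z-scores):

```python
import numpy as np

def factor_scores(R, A, Z):
    B = np.linalg.solve(R, A)   # B = R^-1 A, the factor score coefficient matrix
    return Z @ B                # F = ZB: one row of factor scores per subject
```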

56 Abbreviated Equations
The specific goals of PCA or FA are to summarize patterns of correlations among observed variables, to reduce a large number of observed variables to a smaller number of factors, and to provide an operational definition (a regression equation) for an underlying process, using observed variables to test a theory about the nature of underlying processes.

57 Standardized variables as factors
You can also estimate what each subject would score on the "standardized variables." This is a revealing procedure – often overlooked.

58 Predictions based on Factor analysis: Standard-Scores

59 Predictions based on Factor analysis: Standard-Scores
Interesting stuff… what about cost?

60 Predictions based on Factor analysis: Standard-Scores
And this is supposed to represent… what?

61 SOM Classification of Ski Data
Variables
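psyNet.SOM's internals are not shown in the slides; as a generic sketch of what a self-organizing map does, here is a minimal 1-D SOM (all parameter values are assumptions):

```python
import numpy as np

def train_som(X, n_units=3, n_iter=1000, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal 1-D self-organizing map; psyNet.SOM's actual algorithm may differ."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = X[rng.integers(n, size=n_units)].astype(float)     # seed units from random cases
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                            # learning rate decays
        sigma = max(sigma0 * (1.0 - frac), 0.1)            # neighborhood shrinks
        x = X[rng.integers(n)]                             # one random case
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
        grid_dist = np.abs(np.arange(n_units) - bmu)       # distance along the 1-D grid
        h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))   # neighborhood weights
        W += lr * h[:, None] * (x - W)                     # pull BMU and neighbors toward x
    return W

# Each case's "class" is its nearest unit:
# classes = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
```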

62 Transpose data before saving as a CSV file.

63 Transpose data to analyze by class/factors
4 rows x 4 columns, in CSV format.

64 SOM Classification of Ski Data
SOM classification 1: Depth and Powder across 5 Ss – a nice match with FA Factor 1.

65 SOM classification 2: Cost across 5 Ss – Class/Factor??
SOM classification 3: Lift across 5 Ss – Lift Class/Factor, a near match with FA Factor 2.

66 Factor 1: Appears to address Depth and Powder
SOM classification 1: Depth and Powder across 5 Ss – a nice match with FA Factor 1. This could be placed into a logistic regression and predict with reasonable accuracy.

67 Factor 2: Appears to address Lift
SOM classification 3: Lift across 5 Ss.

68 Predictions based on Factor analysis: Standard-Scores
Factor Analysis Factor 3: ?? SOM classification 2: Cost across 5 Ss.

69 Center for Machine Learning and Intelligent Systems
Iris setosa, Iris versicolour, Iris virginica. This dataset has provided a foundation for multivariate statistics and machine learning.

70 Transpose data before saving as a CSV file.

71 Transpose data to analyze by class/factors
4 rows x 150 columns, in CSV format.
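With pandas the transposition is one line (the file names are placeholders):

```python
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)  # 150 rows x 4 columns
df.T.to_csv("iris_transposed.csv", header=False)          # 4 rows x 150 columns
```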

72 Factor Analysis: Factor 1 and Factor 2
[Slide plots Factor 1 and Factor 2 across sepal length (cm), sepal width (cm), petal length (cm), and petal width (cm).]

73 SOM Neural Network: Class 1 and Class 2
[Slide plots SOM Class 1 and Class 2 across sepal length (cm), sepal width (cm), petal length (cm), and petal width (cm).]

74 Factor Analysis: Factor 1 vs. SOM Neural Network: Class 1
[Slide compares Factor 1 and SOM Class 1 across sepal length (cm), sepal width (cm), and petal length (cm).] This could be placed into a logistic regression and predict with near-perfect accuracy.

75 Really?? Look at the original.
Everybody but psychologists seems to understand this.

