
Interpretation of the Output

Descriptive Statistics

The first output from the analysis is a table of descriptive statistics for all the variables under investigation. Typically, the mean, the standard deviation and the number of respondents (N) who participated in the survey are given. Looking at the means, one can conclude that "difficulty in understanding the technical aspects of computers" is the most important variable influencing customers to buy the product. It has the highest mean (2.68), while "learning to operate computers is like learning any new skill - the more you practice, the better you become" and "I look forward to using a computer on my job" have the lowest mean (1.18).
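As a minimal sketch of how such a descriptive-statistics table can be reproduced outside SPSS (the file name below is hypothetical, not from the study, and the data are assumed to be one numeric column per questionnaire item), pandas gives the mean, standard deviation and N directly:

    import pandas as pd

    # Hypothetical file of Likert-type survey responses, one column per item
    df = pd.read_csv("computer_attitudes.csv")

    # Mean, standard deviation and number of valid respondents (N) per variable
    descriptives = df.agg(["mean", "std", "count"]).T
    descriptives.columns = ["Mean", "Std. Deviation", "N"]
    print(descriptives)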

The Correlation Matrix

The next output from the analysis is the correlation matrix. A correlation matrix is simply a rectangular array of numbers giving the correlation coefficient between each variable and every other variable in the investigation. The correlation coefficient between a variable and itself is always 1, so the main diagonal of the matrix consists of 1s. If most coefficients are below 0.3 there is little, if any, relationship among the variables (or the number of respondents is insufficient); coefficients above 0.9 point to multicollinearity. The determinant of the correlation matrix provides a further check: values greater than 0.0001 indicate no serious multicollinearity, while smaller values indicate multicollinearity. Here the determinant is 0.001.

Kaiser-Meyer-Olkin (KMO) and Bartlett's Test

Both statistics measure the strength of the relationship among the variables. The KMO measures sampling adequacy, which should be greater than 0.5 for a satisfactory factor analysis to proceed; if any pair of variables has a value less than this, consider dropping one of them from the analysis. The off-diagonal elements should all be very small (close to zero) in a good model. Looking at the table, the KMO measure is 0.433.

There is no definitive answer to the question "How many cases do I need for factor analysis?", and methodologies differ. A common rule is that a researcher should have at least 10-15 participants per variable. Fiedel (2005) says that, in general, over 300 cases is probably adequate, and there is universal agreement that factor analysis is inappropriate when the sample size is below 50. Kaiser (1974) recommends 0.5 as the minimum KMO value (barely acceptable), values between 0.7 and 0.8 as acceptable, and values above 0.9 as superb.

Bartlett's test is another indication of the strength of the relationship among variables. From the same table, we can see that Bartlett's test of sphericity is significant: its associated probability is less than 0.05. In fact, it is 0.000, so the significance level is small enough to reject the null hypothesis. This means that the correlation matrix is not an identity matrix.
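As a rough sketch of how these diagnostics can be checked outside SPSS (assuming the responses are in a pandas DataFrame as in the earlier sketch, and that the factor_analyzer package is available; it will not reproduce the SPSS figures exactly):

    import numpy as np
    import pandas as pd
    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    df = pd.read_csv("computer_attitudes.csv")  # hypothetical file of item responses

    # Correlation matrix and its determinant (should exceed roughly 0.0001)
    corr = df.corr()
    print("Determinant of correlation matrix:", np.linalg.det(corr))

    # Bartlett's test of sphericity: a p-value below 0.05 means the correlation
    # matrix differs significantly from an identity matrix
    chi_square, p_value = calculate_bartlett_sphericity(df)
    print("Bartlett chi-square:", chi_square, "p-value:", p_value)

    # KMO: per-variable values and the overall measure of sampling adequacy
    kmo_per_variable, kmo_overall = calculate_kmo(df)
    print("Overall KMO:", kmo_overall)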

Anti-image Matrices

The anti-image matrices provide a further check on sampling adequacy. Since our sample is not large enough, the anti-image values are mostly less than 0.5.

Communalities

The next item in the output is a table of communalities, which shows how much of the variance in each variable has been accounted for by the extracted factors. For instance, 91.7% of the variance in "learning to operate computers is like learning any new skill - the more you practice, the better you become" is accounted for, while only 47.1% of the variance in "I dislike working with machines that are smarter than I am" is accounted for.

Total Variance Explained

The next item shows all the factors extractable from the analysis, along with their eigenvalues, the percentage of variance attributable to each factor, and the cumulative variance of that factor and the previous factors. Notice that the first factor accounts for 16.618% of the variance, the second for 12.916% and the third for 11.467%. All the remaining factors are not significant.

Component Matrix

The table shows the loadings of the eighteen variables on the extracted factors. The higher the absolute value of the loading, the more the factor contributes to the variable. The gaps in the table represent loadings that are too small to display; we suppressed all loadings less than 0.4, which makes the table easier to read. Avoid negative loadings and, where a variable loads on more than one factor, always choose the highest loading. The variables clearly fall into seven groups.

Rotated Component Matrix

The idea of rotation is to reduce the number of factors on which the variables under investigation have high loadings. Rotation does not actually change anything, but it makes the interpretation of the analysis easier. Looking at the table below, we can see that learning computer items are substantially loaded on Factor (Component) 1, while lack of self-confidence items are substantially loaded on Factor 2. All the remaining variables are substantially loaded on Factors 3 (ignorance), 4 (scary), 5 (willing to learn), 6 (undecided) and 7 (opportunist). These factors can be used as variables for further analysis.
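As a rough illustration of the extraction and rotation steps described above (the seven-factor solution and the 0.4 cut-off follow the text; the factor_analyzer package, file name and settings are assumptions and will not match the SPSS output exactly):

    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    df = pd.read_csv("computer_attitudes.csv")  # hypothetical file of item responses

    # Principal-component style extraction of seven factors with varimax rotation
    fa = FactorAnalyzer(n_factors=7, rotation="varimax", method="principal")
    fa.fit(df)

    # Communalities: proportion of each variable's variance explained by the factors
    communalities = pd.Series(fa.get_communalities(), index=df.columns)
    print(communalities)

    # Variance explained per factor (absolute, proportional, cumulative)
    variance, proportional_variance, cumulative_variance = fa.get_factor_variance()
    print(proportional_variance)

    # Rotated loadings, suppressing absolute values below 0.4 for readability
    loadings = pd.DataFrame(fa.loadings_, index=df.columns)
    print(loadings.where(loadings.abs() >= 0.4))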

Conclusion
Component Transformation Matrix

Component      1      2      3      4      5      6      7
    1       .364  -.819   .331  -.099  -.172   .182   .120
    2       .527  -.051  -.456   .713   .007  -.032  -.044
    3       .258   .435   .759   .328  -.171   .145  -.098
    4       .475   .293  -.079  -.353  -.198  -.301   .654
    5       .381   .225  -.242  -.407   .051   .704  -.286
    6       .390  -.030   .143  -.257   .482  -.535  -.491
    7      -.012  -.006   .143   .134   .816   .268   .472

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
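Because varimax is an orthogonal rotation, the component transformation matrix above should itself be (approximately) orthogonal, and it maps the unrotated component matrix onto the rotated one. A small sketch of that check (the 0.01 tolerance simply allows for the three-decimal rounding in the table):

    import numpy as np

    # Component transformation matrix as reported above (rounded to 3 decimals)
    T = np.array([
        [ .364, -.819,  .331, -.099, -.172,  .182,  .120],
        [ .527, -.051, -.456,  .713,  .007, -.032, -.044],
        [ .258,  .435,  .759,  .328, -.171,  .145, -.098],
        [ .475,  .293, -.079, -.353, -.198, -.301,  .654],
        [ .381,  .225, -.242, -.407,  .051,  .704, -.286],
        [ .390, -.030,  .143, -.257,  .482, -.535, -.491],
        [-.012, -.006,  .143,  .134,  .816,  .268,  .472],
    ])

    # An orthogonal rotation satisfies T @ T.T = I (up to rounding error),
    # and rotated loadings = unrotated loadings @ T
    print(np.allclose(T @ T.T, np.eye(7), atol=0.01))  # expected: True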
