1. Raw data should be saved in SPSS (.sav) format.
2. Open AMOS Graphics from the Start Menu.
5. You can use the View Data button to check your data (it will appear in SPSS).
6. Click OK to go on.
updated 5/20/06
7. Use the List Variables in Dataset button on the left-hand side to see the
variables in your dataset:
8. Drag and drop the observed variables that you want to include in your model onto
the gray workspace:
9. Use the oval button on the left to draw in any unobserved (latent) variables that
you want to include in your model. Double click on the new variable (the oval
you drew) in order to give it a name. Each variable must be named.
10. Use the arrow tool on the left to draw paths between variables. The path always
goes in the direction of latent variable (oval) to observed variable (rectangle).
12. Add error variances (residuals) for each observed variable. These are unique,
unobserved variables for each observed variable. This is an important step, and
AMOS will give you a warning message and be unable to proceed if you omit it.
13. Enter regression weights for each of the relationships between observed and
unobserved variables. (Unless you have reason to believe that some associations
will be stronger than others, use 1 for all regression weights. If you believe, e.g.,
that one association is 2x as strong as another, use 2 for the weight that is twice as
strong, and 1 for the other). Note that it is not necessary to specify regression
weights for every relationship. Specify them for each error term, and specify the
first regression weight for each unobserved-to-observed variable.
14. In the View/Set menu, select Analysis Properties. Go to the Output tab.
Select Standardized Estimates.
15. Run the model using the calculate estimates button on the left-side control
panel:
AMOS stands for Analysis of MOment Structures. AMOS calculates estimates for
the parameters and fits the model based on mean and covariance structures, to make
these fit the data as closely as possible.
UCDHSC Center for Nursing Research
Page 6 of 16
16. View the results by clicking on the View Output button. This will show you the
solution for the model. Select standardized or unstandardized estimates.
17. Test the model against other plausible models by clicking on the view text
button to see all statistical outputs from the procedure:
18. Notes for Model shows the chi-square statistic that tests for significant
deviations between the model and the data. In this case, the p-value is significant,
meaning that the model is not a good fit for the data (chi-square is used as a
badness of fit statistic in SEM).
Notes for Model (Default model)
Computation of degrees of freedom (Default model)
Number of distinct sample moments: 55
Number of distinct parameters to be estimated: 21
Degrees of freedom (55 - 21): 34
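The degrees-of-freedom arithmetic above can be checked by hand: with p observed variables there are p(p + 1)/2 distinct sample moments (variances and covariances), so 55 moments implies 10 observed variables. A quick sketch (the variable count is inferred from the output, not stated in it):

```python
# Degrees of freedom in SEM: distinct sample moments minus free parameters.
p = 10                      # observed variables (inferred: 10 * 11 / 2 = 55 moments)
moments = p * (p + 1) // 2  # distinct variances and covariances
params = 21                 # parameters estimated (from the AMOS output)
df = moments - params
print(moments, df)          # 55 34
```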
22. The chi-square does not have to indicate a complete fit in order to accept the model;
with a large sample, the chi-square's p-value is almost always small. The
following alternative fit indices are provided in the text output. AMOS automatically
compares your model to a saturated model (all variables correlated with all others)
and to an independence model (all variables uncorrelated with all others), to ensure
that your model is a better fit. The various fit indices are described below:
Model Fit Summary

CMIN
Model                NPAR     CMIN   DF      P   CMIN/DF
Default model          19   38.177   26   .058     1.468
Saturated model        45     .000    0
Independence model      9  429.884   36   .000    11.941

RMR, GFI
Model                 RMR    GFI   AGFI   PGFI
Default model        .433   .952   .918   .550
Saturated model      .000  1.000
Independence model  2.521   .532   .415   .425

Baseline Comparisons
Model                 NFI    RFI    IFI    TLI    CFI
                   Delta1   rho1 Delta2   rho2
Default model        .911   .877   .970   .957   .969
Saturated model     1.000         1.000         1.000
Independence model   .000   .000   .000   .000   .000

Parsimony-Adjusted Measures
Model              PRATIO   PNFI   PCFI
Default model        .722   .658   .700
Saturated model      .000   .000   .000
Independence model  1.000   .000   .000

NCP
Model                  NCP    LO 90    HI 90
Default model       12.177     .000   32.787
Saturated model       .000     .000     .000
Independence model 393.884  330.800  464.416

FMIN
Model                FMIN     F0   LO 90   HI 90
Default model        .219   .070    .000    .188
Saturated model      .000   .000    .000    .000
Independence model  2.471  2.264   1.901   2.669

RMSEA
Model               RMSEA  LO 90  HI 90  PCLOSE
Default model        .052   .000   .085    .432
Independence model   .251   .230   .272    .000

HOELTER
Model              HOELTER .05  HOELTER .01
Default model              178          209
Independence model          21           24
CMIN: the minimum value of the discrepancy between the model and the data.
This is the same as the chi-square statistic in the Notes for Model section.
CMIN/DF: the chi-square divided by its degrees of freedom. Acceptable
values are in the 3:1 or 2:1 range. Using this criterion, our earlier model
(without the added path from PIQ to COMP) was also acceptable
(CMIN/DF = 1.65).
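As a quick check, the CMIN/DF value can be reproduced from the chi-square and degrees of freedom reported in the Model Fit Summary:

```python
# Reproduce CMIN/DF for the Default model from the AMOS output values.
cmin, df = 38.177, 26      # chi-square and degrees of freedom
cmin_df = cmin / df
print(round(cmin_df, 3))   # 1.468, matching the CMIN/DF column
```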
RMR, GFI: The RMR [Root Mean Square Residual] is the square root of
the average squared amount by which your model's estimated sample
variances and covariances differ from their actual values in the data. The
smaller the RMR the better, with RMR = 0 indicating a perfect fit. The GFI
[Goodness of Fit Index] is similar to the Baseline Comparisons described
below, giving a statistic between 0 and 1, with 1 indicating perfect fit, and is
used with maximum likelihood estimation for missing data. The AGFI
[Adjusted Goodness of Fit Index] takes into account the degrees of freedom
available for testing the model (this statistic can have values below zero). The
PGFI [Parsimony Goodness of Fit Index] is another modification of the GFI
that also takes into account the degrees of freedom available for testing the
model.
Baseline Comparisons: The NFI [Normed Fit Index] shows how far between the
(terribly fitting) independence model and the (perfectly fitting) saturated
model the default model falls. In this case, it's 91% of the way to perfect fit. The RFI
[Relative Fit Index] is the NFI standardized based on the df of the models,
with values close to 1 again indicating a very good fit. The IFI [Incremental Fit
Index], TLI [Tucker-Lewis Coefficient], and CFI [Comparative Fit Index] are
similar. Note that the TLI is usually between 0 and 1, but is not limited to that
range.
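These baseline-comparison indices follow standard formulas and can be recomputed from the chi-squares in the CMIN table; a sketch using the values reported above:

```python
# Recompute NFI and CFI from the Default and Independence model chi-squares.
chi2_m, df_m = 38.177, 26     # Default model
chi2_i, df_i = 429.884, 36    # Independence (baseline) model

# NFI: proportion of the baseline chi-square accounted for by the model.
nfi = (chi2_i - chi2_m) / chi2_i
# CFI: same idea, but based on noncentrality (chi-square minus df).
cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_i - df_i, chi2_m - df_m, 0)
print(round(nfi, 3), round(cfi, 3))   # 0.911 0.969
```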
Parsimony-Adjusted Measures: The PRATIO [Parsimony Ratio] is an
overall measure of how parsimonious the model is. It is defined as the df of
the current model divided by the df of the independence model, and can be
interpreted as: the current model is X% as complex as the independence
model. The difference between this number and 1 is how much more
efficient your model is than the independence model. PRATIO is used to
calculate two other statistics: the PNFI [Parsimonious Normed Fit Index] is
a modification of the NFI that takes into account the df (i.e.,
complexity) of the model. Similarly, the PCFI [Parsimonious Comparative Fit
Index] is a df-adjusted modification of the CFI. These two measures are likely
to be lower than the NFI and CFI, because they take model complexity into
account.
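The parsimony-adjusted values in the table can likewise be reproduced: PNFI and PCFI are simply PRATIO times NFI and CFI respectively (index values taken from the tables above):

```python
# Recompute PRATIO, PNFI, and PCFI from the reported df and fit indices.
df_m, df_i = 26, 36       # Default and Independence model degrees of freedom
nfi, cfi = 0.911, 0.969   # from the Baseline Comparisons table

pratio = df_m / df_i      # model df as a fraction of the independence model's df
pnfi = pratio * nfi
pcfi = pratio * cfi
print(f"{pratio:.3f} {pnfi:.3f} {pcfi:.3f}")   # 0.722 0.658 0.700
```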
NCP: the noncentrality parameter, estimated as the chi-square minus its
degrees of freedom (CMIN - DF). The columns labeled LO 90 and HI 90 give
the 90% confidence interval for this statistic. This statistic can also be
interpreted as a chi-square, with the same degrees of freedom as in CMIN.
FMIN: the minimum of the discrepancy function, equal to CMIN divided by
n - 1 (where n is the sample size). F0 is the noncentrality parameter (NCP)
divided by n - 1, an estimate of the population discrepancy. The results also
give the lower and upper limits of a 90% confidence interval for this statistic
(LO 90 and HI 90 under the FMIN heading).
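The FMIN and F0 rows imply a sample size of about n = 175, since FMIN = CMIN / (n - 1); this n is inferred from the output, not stated in it. RMSEA, which appears in the tables but is not glossed in this handout, follows from F0 under its standard formula. A sketch under those assumptions:

```python
from math import sqrt

# n is inferred so that CMIN / (n - 1) reproduces the FMIN row.
n = 175
cmin, df = 38.177, 26

fmin = cmin / (n - 1)   # minimum discrepancy function value
ncp = cmin - df         # noncentrality parameter (NCP row)
f0 = ncp / (n - 1)      # estimated population discrepancy
rmsea = sqrt(f0 / df)   # root mean square error of approximation
print(f"{fmin:.3f} {f0:.3f} {rmsea:.3f}")   # 0.219 0.070 0.052
```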
The remaining measures are intended for comparing multiple models, rather than
evaluating goodness of fit for a single model:
AIC
Model                  AIC      BCC      BIC     CAIC
Default model       76.177   78.494  136.308  155.308
Saturated model     90.000   95.488  232.415  277.415
Independence model 447.884  448.981  476.367  485.367

ECVI
Model                ECVI  LO 90  HI 90  MECVI
Default model        .438   .368   .556   .451
Saturated model      .517   .517   .517   .549
Independence model  2.574  2.211  2.979  2.580
BIC: the Bayesian Information Criterion assigns an even greater penalty for
complexity, and therefore tends to choose parsimonious models.
It can only be used in single-group models.
CAIC: the Consistent AIC has a greater penalty for complexity than AIC or
BCC, but not as great a penalty as BIC. It can only be used in single-group models.
ECVI: Arbuckle reports that, except for a constant scale factor, the ECVI
[Expected Cross-Validation Index] is the same as AIC (it is equal to AIC / n).
Upper and lower 90% confidence interval limits are also given for ECVI.
MECVI is similar, equal to BCC / n. When maximum likelihood estimation
has been used to compensate for missing data, Arbuckle recommends
using MECVI instead of ECVI.
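The AIC and ECVI columns can be reproduced the same way: AIC adds a penalty of 2 per free parameter to the chi-square, and dividing by n - 1 (using the sample size implied by the FMIN row, n = 175) recovers the reported ECVI. A sketch, not AMOS's exact internals:

```python
# Recompute AIC and ECVI for the Default model from the output values.
cmin, npar = 38.177, 19   # chi-square and free-parameter count
n = 175                   # sample size inferred from the FMIN table

aic = cmin + 2 * npar     # Akaike Information Criterion
ecvi = aic / (n - 1)      # expected cross-validation index
print(f"{aic:.3f} {ecvi:.3f}")   # 76.177 0.438
```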
23. Specification Search options An extra feature in AMOS 5.0 is the ability to do a
specification search. This feature allows you to test various models simultaneously,
by specifying that some relationships between variables are optional. For instance,
what if I wanted to test the different fit obtained by using a model with two correlated
factors of IQ versus two uncorrelated factors? I could use the Specification Search
command in the Model-Fit menu.
a. To see the optional relationships correctly, first go to the View/Set
menu and select Interface Properties.
e. Run the specification search using the arrow tool on the toolbar. An output
section will appear in the toolbar itself, showing results for each model:
f. You can interpret models using the chi-square and p-values provided, or
you can see more details. If you double click on one of the lines in this
window, or select it and then use the blue-box tool on the toolbar, you will
see the actual path diagram to know which of the optional components
have been included or omitted. In this case, Model 2 has a higher p-value
(which you want with SEM), and includes the VIQ-PIQ covariance.
g. If you have a lot of options, AMOS has a view short list button on the
toolbar to help you sift through them.
h. To see RMSEA and other statistics, use the checkbox icon on the toolbar,
and under the Current Results tab, check the box for Derived fit
indices in the Display section. Also, if you select Akaike weights in
the lower section of this menu, you can get a statistic in the BCC
column that can be interpreted as the likelihood of the model given the
data. This is a useful way to compare models. The model with the smallest
value here (or the lowest BIC value originally) is the best fit for the data.
i. You can also see comparisons between models graphically, using the
Plots button on the toolbar. Select Best Fit and AIC or BIC.
BCC is also possible, but imposes a greater penalty for complexity.
This graph shows you fit (on the y axis) vs. complexity (on the x axis), with
the lowest y value showing you the best fit vs. complexity ratio. Click on
the individual data point, and AMOS will tell you which model number
was selected. Click on that model in the output window to see the model.