
Structural Equation Modeling with AMOS 5.0
1. Raw data should be saved in SPSS (.sav) format.
2. Open AMOS Graphics from the Start Menu.

3. Select New from the File Menu.


4. Select Data Files from the File Menu. Then click on the File Name button,
and use the browser window to locate your SPSS data file.

5. You can use the View Data button to check your data (it will appear in SPSS).
6. Click OK to go on.
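(Optional) If you want to inspect the same .sav file outside of SPSS and AMOS, pandas can read it directly. A minimal sketch, assuming the pyreadstat dependency is installed; the file name is a hypothetical placeholder:

    # Reading an SPSS .sav file into Python (requires pyreadstat).
    import pandas as pd

    data = pd.read_spss("mydata.sav")  # hypothetical file name
    print(data.head())                 # first few rows
    print(data.describe())             # means and SDs of the variables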


7. Use the List Variables in Dataset button on the left-hand side to see the
variables in your dataset:

8. Drag and drop the observed variables that you want to include in your model onto
the gray workspace:


9. Use the oval button on the left to draw in any unobserved (latent) variables that
you want to include in your model. Double click on the new variable (the oval
you drew) in order to give it a name. Each variable must be named.

10. Use the arrow tool on the left to draw paths between variables. Paths always run from the latent variable (oval) to the observed variable (rectangle).


11. Use the double-sided arrow to draw covariances:

12. Add error variances (residuals) for each observed variable. These are unique,
unobserved variables for each observed variable. This is an important step, and
AMOS will give you a warning message and be unable to proceed if you omit it.


13. Enter regression weights for each of the relationships between observed and unobserved variables. (Unless you have reason to believe that some associations will be stronger than others, use 1 for all regression weights. If you believe, for example, that one association is twice as strong as another, use 2 for the stronger weight and 1 for the other.) Note that it is not necessary to specify regression weights for every relationship. Specify one for each error term, and fix the first regression weight for each unobserved-to-observed relationship, as shown below:
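Although AMOS is a point-and-click program, it can help to see the same model written out in text form. Below is a minimal sketch of a comparable two-factor measurement model in Python, assuming the semopy package and its lavaan-style model syntax; the subtest and file names are hypothetical placeholders, not names from this dataset.

    import pandas as pd
    from semopy import Model, calc_stats

    # Two-factor CFA sketch: each latent variable (oval) points to its
    # observed indicators (rectangles), and the two factors are allowed
    # to covary (the double-headed arrow from step 11). Residual (error)
    # terms for each indicator are added automatically, and the first
    # loading of each factor is fixed to 1, mirroring step 13.
    desc = """
    VIQ =~ vocab + comp + info + simil + arith
    PIQ =~ blockdes + objassem + piccomp + picarr + digsym
    VIQ ~~ PIQ
    """

    data = pd.read_csv("wais_subtests.csv")  # hypothetical data file
    model = Model(desc)
    model.fit(data)
    print(model.inspect())      # parameter estimates
    print(calc_stats(model).T)  # chi-square, CFI, TLI, RMSEA, AIC, BIC, ...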


14. In the View/Set menu, select Analysis Properties. Go to the Output tab.
Select Standardized Estimates.

15. Run the model using the calculate estimates button on the left-side control
panel:

AMOS stands for Analysis of MOment Structures. AMOS estimates the parameters by fitting the model's mean and covariance structure to the data as closely as possible.

16. View the results by clicking on the View Output button. This will show you the
solution for the model. Select standardized or unstandardized estimates.

17. Test the model against other plausible models by clicking on the view text
button to see all statistical outputs from the procedure:


18. Notes for Model shows the chi-square statistic, which tests for significant deviations between the model and the data. In this case, the p-value is significant, meaning that the model is not a good fit for the data (chi-square is used as a badness-of-fit statistic in SEM).
Notes for Model (Default model)

Computation of degrees of freedom (Default model)

    Number of distinct sample moments:              55
    Number of distinct parameters to be estimated:  21
    Degrees of freedom (55 - 21):                   34

Result (Default model)

    Minimum was achieved
    Chi-square = 52.738
    Degrees of freedom = 34
    Probability level = .021
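These numbers can be checked by hand. A short sketch with scipy, assuming the model has 10 observed variables (which is what 55 distinct sample moments implies, since 10 * 11 / 2 = 55):

    # Checking the "Notes for Model" arithmetic with scipy.
    from scipy.stats import chi2

    p_obs = 10                          # observed variables in the model
    moments = p_obs * (p_obs + 1) // 2  # 55 distinct variances/covariances
    params = 21                         # free parameters estimated by AMOS
    df = moments - params               # 34

    p_value = chi2.sf(52.738, df)       # upper-tail p for the chi-square
    print(df, round(p_value, 3))        # -> 34 0.021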
19. If adequate fit was not achieved, go back and change the model to improve it.
There are some theoretical reasons (from earlier exploratory factor analysis work)
to expect that the Coding subtest was not strongly related to the underlying
constructs of Verbal and Performance IQ. Therefore, this subtest was deleted
from the model.
20. On the Output tab of Analysis Properties, you can select Modification Indices. This will print suggestions in the output for how to improve your model. However, be careful to take suggestions only if they make theoretical sense.
21. This time, the model fits much better (chi-square = 38.1, p = .058). Interestingly,
the paths showing the strongest correlations with the underlying factors
(Vocabulary with VIQ and Block Design with PIQ) are the same as those
suggested in published results for this test and found with exploratory factor
analysis in other samples. Overall, the results are consistent with the predicted
two-factor model of intelligence.


22. The chi-square does not have to indicate a perfect fit in order to accept the model; with a large sample, the chi-square's p-value is almost always small. The following alternative fit indices are provided in the text output. AMOS automatically compares your model to a saturated model (all variables correlated with all others) and to an independence model (all variables uncorrelated with all others), to ensure that your model is a better fit. The various fit indices are described below:
Model Fit Summary

CMIN

Model                NPAR     CMIN   DF      P   CMIN/DF
Default model          19   38.177   26   .058     1.468
Saturated model        45     .000    0
Independence model      9  429.884   36   .000    11.941

RMR, GFI

Model                 RMR     GFI   AGFI   PGFI
Default model        .433    .952   .918   .550
Saturated model      .000   1.000
Independence model  2.521    .532   .415   .425

Baseline Comparisons

Model                   NFI     RFI     IFI     TLI     CFI
                     Delta1    rho1  Delta2    rho2
Default model          .911    .877    .970    .957    .969
Saturated model       1.000           1.000           1.000
Independence model     .000    .000    .000    .000    .000

Parsimony-Adjusted Measures

Model               PRATIO   PNFI   PCFI
Default model         .722   .658   .700
Saturated model       .000   .000   .000
Independence model   1.000   .000   .000

NCP

Model                   NCP    LO 90    HI 90
Default model        12.177     .000   32.787
Saturated model        .000     .000     .000
Independence model  393.884  330.800  464.416

FMIN

Model                FMIN     F0   LO 90   HI 90
Default model        .219   .070    .000    .188
Saturated model      .000   .000    .000    .000
Independence model  2.471  2.264   1.901   2.669

RMSEA

Model               RMSEA   LO 90   HI 90   PCLOSE
Default model        .052    .000    .085     .432
Independence model   .251    .230    .272     .000

HOELTER

Model               HOELTER .05   HOELTER .01
Default model               178           209
Independence model           21            24


CMIN: the minimum value of the discrepancy between the model and the data. This is the same as the chi-square statistic in the Notes for Model section.
CMIN/DF: the chi-square divided by its degrees of freedom. Acceptable values are in the 2:1 to 3:1 range. Using this criterion, our earlier model (without the added path from PIQ to COMP) was also acceptable (CMIN/DF = 1.65).
RMR, GFI: The RMR [Root Mean Square Residual] is the square root of the average squared amount by which your model's estimated sample variances and covariances differ from their actual values in the data. The smaller the RMR the better, with RMR = 0 indicating a perfect fit. The GFI [Goodness of Fit Index] is similar to the Baseline Comparisons described below, giving a statistic between 0 and 1, with 1 indicating perfect fit, and is used with maximum likelihood estimation for missing data. The AGFI [Adjusted Goodness of Fit Index] takes into account the degrees of freedom available for testing the model (this statistic can have values below zero). The PGFI [Parsimony Goodness of Fit Index] is another modification of the GFI that also takes into account the degrees of freedom available for testing the model.
Baseline Comparisons: NFI [Normed Fit Index] shows how far between the (terribly fitting) independence model and the (perfectly fitting) saturated model the default model is. In this case, it's 91% of the way to perfect fit. RFI [Relative Fit Index] is the NFI standardized based on the df of the models, with values close to 1 again indicating a very good fit. IFI [Incremental Fit Index], TLI [Tucker-Lewis Coefficient], and CFI [Comparative Fit Index] are similar. Note that TLI is usually between 0 and 1, but is not limited to that range.
Parsimony-Adjusted Measures: The PRATIO [Parsimony Ratio] is an overall measure of how parsimonious the model is. It is defined as the df of the current model divided by the df of the independence model, and can be interpreted as "the current model is X% as complex as the independence model." The difference between this number and 1 is how much more efficient your model is than the independence model. PRATIO is used to calculate two other statistics: PNFI [Parsimonious Normed Fit Index] is a modification of the NFI that takes into account the df (i.e., complexity) of the model. Similarly, the PCFI [Parsimonious Comparative Fit Index] is a df-adjusted modification of the CFI. These two measures are likely to be lower than the NFI and CFI, because they take model complexity into account.
NCP: the noncentrality parameter, estimated as the chi-square minus its degrees of freedom (the amount by which CMIN exceeds its expected value under a correct model). The columns labeled LO 90 and HI 90 give the 90% confidence interval for this statistic.
FMIN: the minimum discrepancy (CMIN) divided by n (the sample size minus 1). F0 is the noncentrality parameter (NCP) divided by n, an estimate of the discrepancy in the population; F0 = 0 indicates perfect fit. The results also give the lower and upper limits of a 90% confidence interval for F0 (LO 90 and HI 90 under the FMIN heading).
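For reference, the baseline and parsimony indices above can be recomputed directly from the CMIN table. A short Python sketch; the sample size is not printed in this output, so n = N - 1 = 174 is back-solved from FMIN = CMIN / (N - 1) and should be treated as an inference:

    # Recomputing baseline and parsimony fit indices from the CMIN table.
    # n = N - 1 = 174 is inferred from .219 = 38.177 / 174, i.e. N = 175.
    c, d = 38.177, 26        # default model: chi-square and df
    ci, di = 429.884, 36     # independence model: chi-square and df
    n = 174                  # N - 1 (inferred)

    nfi = 1 - c / ci                                  # .911
    rfi = 1 - (c / d) / (ci / di)                     # .877
    ifi = (ci - c) / (ci - d)                         # .970
    tli = ((ci / di) - (c / d)) / ((ci / di) - 1)     # .957
    cfi = 1 - max(c - d, 0) / max(ci - di, c - d, 0)  # .969

    pratio = d / di          # .722
    pnfi = pratio * nfi      # .658
    pcfi = pratio * cfi      # .700

    ncp = max(c - d, 0)      # 12.177
    fmin = c / n             # .219
    f0 = ncp / n             # .070

    print(round(nfi, 3), round(cfi, 3), round(tli, 3))  # -> 0.911 0.969 0.957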


RMSEA: F0 tends to favor more complex models. RMSEA is a corrected statistic that imposes a penalty for model complexity, calculated as the square root of F0 divided by df (RMSEA stands for root mean square error of approximation). Again, upper and lower bounds of a 90% confidence interval are given. RMSEA values of .05 or less indicate good fit, values between .05 and .1 moderate fit, and values of .1 or greater unacceptable fit. RMSEA = .00 indicates perfect fit. The PCLOSE statistic that goes with this result is the p-value for a test of the hypothesis that the population RMSEA is no greater than .05; you want this result to be nonsignificant (p > .05), because you do not want to conclude that the RMSEA is significantly greater than .05.
HOELTER: Hoelter's critical N is the largest sample size for which one would accept the hypothesis that the model is correct (in other words, the sample size above which the chi-square goodness-of-fit test would go from nonsignificant to significant). AMOS reports a critical N for significance levels of .05 (which was used by Hoelter) and .01. The result can be interpreted as "the model would be rejected at the [.05/.01] level with a sample size greater than X." Hoelter suggests that models which would be rejected only with 200 or more participants (a number of 200 or higher in the Hoelter section of the output) are an adequate fit for the data. Numbers smaller than 200 suggest an inadequate fit. Arbuckle disagrees with the use of this criterion, and it's not one of the more commonly reported statistics for SEM, but other experts may use it.
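A companion sketch for RMSEA and Hoelter's critical N, using the same inferred n = 174:

    # RMSEA and Hoelter's critical N, recomputed from the same numbers
    # (n = N - 1 = 174 is inferred from FMIN, as noted above).
    from math import floor, sqrt
    from scipy.stats import chi2

    c, d, n = 38.177, 26, 174

    f0 = max(c - d, 0) / n
    rmsea = sqrt(f0 / d)                # sqrt(F0 / df) = .052

    def hoelter_n(c, d, n, alpha):
        # Largest N at which the chi-square test is still nonsignificant.
        crit = chi2.ppf(1 - alpha, d)   # critical chi-square at alpha
        return floor(crit / (c / n)) + 1

    print(round(rmsea, 3))              # -> 0.052
    print(hoelter_n(c, d, n, .05))      # -> 178
    print(hoelter_n(c, d, n, .01))      # -> 209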

The remaining measures are intended for comparing multiple models, rather than
evaluating goodness of fit for a single model:
AIC

Model                   AIC      BCC      BIC     CAIC
Default model        76.177   78.494  136.308  155.308
Saturated model      90.000   95.488  232.415  277.415
Independence model  447.884  448.981  476.367  485.367

ECVI

Model                ECVI  LO 90  HI 90  MECVI
Default model        .438   .368   .556   .451
Saturated model      .517   .517   .517   .549
Independence model  2.574  2.211  2.979  2.580

AIC: the Akaike Information Criterion is calculated as the discrepancy (C) plus 2q, where q is the number of estimated parameters (a complexity penalty).
BCC: the Browne-Cudeck Criterion imposes a slightly greater penalty for model complexity than the AIC does. Arbuckle (author of the AMOS program) recommends this particular fit index.


BIC: the Bayesian Information Criterion assigns an even greater penalty for complexity, and therefore has a tendency to choose parsimonious models. This can only be used in single-group models.
CAIC: the Consistent AIC has a greater penalty for complexity than AIC or BCC (in the output above, its penalty is even larger than BIC's). It can only be used in single-group models.
ECVI: Arbuckle reports that, except for a constant scale factor, ECVI [Expected Cross-Validation Index] is the same as AIC (it is equal to AIC / n). Upper and lower 90% confidence interval limits are also given for ECVI. MECVI is similar, equal to BCC / n. When maximum likelihood estimation has been used to compensate for missing data, Arbuckle recommends using MECVI instead of ECVI.
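These information criteria can also be reproduced from the chi-square. A sketch assuming the standard formulas BIC = C + q ln(N) and CAIC = C + q (ln(N) + 1), which match the table above with the inferred N = 175:

    # Information criteria for the default model, recomputed from
    # chi-square = 38.177 with q = 19 free parameters and inferred N = 175.
    from math import log

    c, q, N = 38.177, 19, 175
    n = N - 1

    aic = c + 2 * q              # 76.177
    bic = c + q * log(N)         # 136.308
    caic = c + q * (log(N) + 1)  # 155.308
    ecvi = aic / n               # .438

    print(round(aic, 3), round(bic, 3), round(caic, 3), round(ecvi, 3))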

23. Specification Search options. An extra feature in AMOS 5.0 is the ability to do a specification search. This feature allows you to test several models simultaneously, by specifying that some relationships between variables are optional. For instance, what if I wanted to compare the fit of a model with two correlated factors of IQ versus two uncorrelated factors? I could use the Specification Search command in the Model-Fit menu. (A small code sketch of the same two-model comparison appears after step i below.)
a. To see the optional relationships correctly, first go to the View/Set menu and select Interface Properties:


b. On the Accessibility tab, check the "alternative to color" checkbox. This will cause optional relationships between variables to show up as dashed lines on the path diagram (rather than just showing up in blue, which is harder to see):

c. Open the Specification Search toolbar using the binoculars tool:


d. Use the dashed-arrow tool to make some relationships between variables optional components of the model. Just click on the tool, then click on the relationship to be changed. You can reverse this action if necessary by using the solid-line tool that's just to the right on the toolbar.

e. Run the specification search using the arrow tool on the toolbar. An output
section will appear in the toolbar itself, showing results for each model:

f. You can interpret models using the chi-square and p-values provided, or
you can see more details. If you double click on one of the lines in this
window, or select it and then use the blue-box tool on the toolbar, you will
see the actual path diagram to know which of the optional components
have been included or omitted. In this case, Model 2 has a higher p-value
(which you want with SEM), and includes the VIQ-PIQ covariance.

g. If you get a lot of options, AMOS has a "view short list" button on the toolbar to help you sift through them.
h. To see RMSEA and other statistics, use the checkbox icon on the toolbar, and under the Current Results tab, check the box for "Derived fit indices" in the Display section. Also, if you select "Akaike weights" in the lower section of this menu, you can get a statistic in the BCC column that can be interpreted as the likelihood of the model given the data. This is a useful way to compare models. The model with the smallest value here (or the lowest BIC value originally) is the best fit for the data.

i. You can also see comparisons between models graphically, using the Plots button on the toolbar. Select "Best Fit" and AIC or BIC. BCC is also possible, but imposes a greater penalty for complexity. This graph shows you fit (on the y axis) vs. complexity (on the x axis), with the lowest y value showing you the best fit-vs.-complexity ratio. Click on an individual data point, and AMOS will tell you which model number was selected. Click on that model in the output window to see the model.
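As referenced in step 23, here is a minimal sketch of the same kind of comparison done by hand: a chi-square difference (likelihood-ratio) test between two nested models, such as uncorrelated versus correlated factors. The numbers passed in at the end are illustrative placeholders, not values from the specification-search output:

    # Chi-square difference (likelihood-ratio) test between nested models,
    # e.g., factors forced to be uncorrelated (restricted) vs. free to
    # covary (full). A significant result favors the full model.
    from scipy.stats import chi2

    def chi2_difference(c_restricted, df_restricted, c_full, df_full):
        delta_c = c_restricted - c_full
        delta_df = df_restricted - df_full
        return delta_c, delta_df, chi2.sf(delta_c, delta_df)

    # Illustrative placeholder values, not numbers from the output above:
    print(chi2_difference(52.0, 27, 38.177, 26))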

