About me
Academic qualifications
1999: Bachelor of Engineering (Electronics), Assumption University
2000: Master of Engineering (Telecommunication), RMIT University
2008: Doctor of Philosophy (Business Information Systems), RMIT University
2009: Graduate Certificate in Tertiary Teaching and Learning, RMIT University
Professional qualifications
1999: Associate Electrical Communication Engineer, Council of Engineers, Thailand
2009–2010: Certificates of Completion, Australian Consortium for Social and Political Research Incorporated
Practical measurement and multilevel analysis in the psychosocial sciences
Applied structural equation modelling
Analysing categorical and continuous latent variables using Mplus
Scale development, Rasch analysis and item response theory
Introduction to social network research and network analysis
Professional qualifications (cont.)
2010–2012: Certificates of Completion, statistics.com
Practical Rasch measurement: core topics
Rasch applications in clinical assessment, survey research, and educational measurement
Practical Rasch measurement: further topics
Introduction to Bayesian statistics
Meta-analysis
Spatial statistics with geographic information systems
Forecasting time series
Introduction to Bayesian computing
Bayesian regression modeling via MCMC techniques
Bayesian hierarchical and multi-level modeling
Research projects
1998: A fluorescent dimmer with a PPM remote controller
2000: A sample stock exchange WAP application
2001: Fostering consumer trust and purchase intention in B2C e-commerce
2007:
Use of partial least squares regression and structural equation modeling in information system research
Evaluating the impact of information and communications technology on the livelihood of rural communities
Research projects (cont.)
2008:
Use of e-government services by the urban community
Green information technology
Cross-country study to foster consumer trust in B2C e-commerce
2009: E-business assimilation and its effects on the growth and export performance of Australian horticulture firms
2010:
Piloting a hybrid work-integrated learning model to enhance dual-hub student collaborations in an international work context
Adoption of green IT best practices and their impact on the sustainability of data centres
Design and usability evaluation of mental healthcare information systems
2011:
Classifying Australian PhD theses by research fields, courses and disciplines (RFCD) codes
Green IT organizational learning (GITOL)
Supervised projects
Research projects
Completed:
Fuzzy multicriteria analysis and its applications for decision making under uncertainty
Current:
Managing quality issues along a complex supply chain in the fire truck bodybuilder business: A Thai case study
E-commerce adoption in marketing: Tourism in Saudi Arabia
Supply network complexity and organizational performance: Investigating the relationship between structural embeddedness and organizational social outcomes using social network analysis
Impact of agile manufacturing on Thailand automotive performance and competitive advantage
Information security behaviour for non-work activities
Industrial projects
2011:
AIS student chapter portal
Botanical art website
Course moderation system
Current research interests:
Trust in business and management information systems
E-commerce
Green information systems
Expertise:
Focus groups
Grounded theory
Measurement
Methodology
Statistical modelling
Surveys
Overview
Morning session (9 AM–12 PM)
Lunch (12 PM–1 PM)
Afternoon session (1 PM–4 PM)
Detailed overview
9 AM
Causality
SEM vs regression
Path analysis (w/ activity)
10 AM
GOF indices & model modification (w/ activity)
Direct/indirect/total effects (w/ activity)
Moderation effect (w/ activity)
Detailed overview
11 AM
Power analysis (w/ activity)
Multiple-group analysis (w/ activity)
Invariance analysis (w/ activity)
Q&A
12 PM
Lunch
1 PM
Theory of measurement
EFA
One-factor congeneric model (w/ activity)
Detailed overview
2 PM
Two-factor congeneric model (w/ activity)
Multi-factor congeneric model (w/ activity)
Reliability analysis
3 PM
Reliability analysis (activity)
Structural model (w/ activity)
Q&A
Causality
Democritus (460–370 B.C.)
The relationship between one event and another when the former is the cause and the latter is the effect.
Characteristics of causal relationships
Artificial isolation
First cause or infinite regress
Continuity of action
Ref: Bunge (2009)
Hand-eye model
In a controlled study, a confounding variable is cancelled out by randomisation/random sampling.
Testing causality
Experimental research is required to test the existence of a relationship between an independent variable and a dependent variable while controlling external variables. However, there are several reasons why it cannot always be done:
Too expensive
Too time-consuming
Randomisation and manipulation are impossible or unethical
The phenomenon is currently unobservable
What is SEM?
SEM is a framework proposed by Karl Gustav Jöreskog, James Ward Keesling, and David E. Wiley in the 1970s to integrate maximum likelihood estimation, a measurement model (i.e. factor analysis), and a structural model (i.e. path analysis). Bentler (1980) calls it the JKW model. LISREL was the first software to implement this framework. Charles Spearman is credited with factor analysis and Sewall Wright with path analysis.
SEM software
EQS
LISREL
Mplus
Mx
Ωnyx
R: lavaan, OpenMx, sem
SAS: CALIS, TCALIS
SPSS: AMOS
Stata: GLLAMM, SEM
STATISTICA: SEPATH
Variables
There are 2 types of variables in SEM:
Manifest variable (observable), represented by a rectangle: data, composite variables
Latent variable (unobservable), represented by an ellipse
Moderation effect
SEM VS REGRESSION
Regression
Purpose: To test the relationship between multiple independent variables and a dependent variable.
Independent variables (IV1, IV2, IV3): nominal, ordinal, interval, or ratio
Dependent variable (DV): interval
Samples: independent
Distribution: normal
No multicollinearity
Residual: homoscedastic variance, normal distribution
SEM
Purpose: To test the structure and measurement of the relationships between multiple independent and dependent variables.
Indicators (X1–X4, Y1–Y2): nominal, ordinal, interval, or ratio
Latent variables (IV1–IV3): interval
The diagram comprises a measurement model (indicators with error terms e1–e6) and a structural model (latent variables with residuals r1–r2).
Comparison
Aspect: Regression | SEM
Aim: Maximise R² of a dependent variable | Replicate a sample variance–covariance matrix
Model: Simple | Complex
Dependent variable: Single | Multiple
Control variable: First block/variable | All
Multicollinearity: Not allowed | Allowed
Data fitting: Just-identified (df = 0) | Over-identified (df > 0)
Interpretation: Change in observation, one at a time | Change from manipulation, simultaneously
R²: Adjusted | All results
Outcome: Equation | Generalisability
What is over-identification?
Benefits of over-identification
More statistical power
More predictive accuracy
More generalisability
Less error
Less sample-dependence
Less capitalisation on chance
Familywise error
Probability of at least one Type I error across multiple tests (each at α = .05):
Tests:  1    2    3    4    5    6    7    8    9   10
Error:  5%  10%  14%  19%  23%  26%  30%  34%  37%  40%
Tests: 11   12   13   14   15   16   17   18   19   20
Error: 43%  46%  49%  51%  54%  56%  58%  60%  62%  64%
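The table values follow from the familywise error formula for k independent tests, FWER = 1 − (1 − α)^k; a minimal sketch in Python:

```python
# Familywise error rate across k independent tests, each at level alpha.
# The table above is this formula rounded to whole percentages.
def familywise_error(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(k, f"{familywise_error(k):.0%}")  # 5%, 23%, 40%, 64%
```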
Benefits of SEM
Aspect: Experiment | Non-experiment
Environment: Controlled | Uncontrolled
Causal assumptions: Not required | Required
Statistical assumptions: Required | Required
What is tested: SEM tests to what extent an independent variable affects a dependent variable under the assumption that a causal model is true.
Interpretation of β: Regression vs SEM (Ref: Pearl)
What is ε?
ε (epsilon) is an error variable. It represents the influence of omitted variables, which are background variables outside the conceptual model.
Note: go back to Descartes' drawing to review the IN and OUT concepts.
Misunderstanding
Myth: Regression produces a structural equation.
Reality: Regression produces a regression equation.
Misunderstanding
Myth: Partial least squares (PLS) regression is a branch of SEM.
Reality: Regression solves one equation at a time while SEM solves all equations simultaneously; as a result, regression cannot be categorised as part of SEM.
Technically, regression produces the same estimate as SEM when there is no confounding effect (e.g. correlation between independent variables and/or errors). This point has been mathematically proven by Pearl (1995).
Misunderstanding
Myth: PLS has higher statistical power than SEM when the sample size is small.
Reality: Goodhue, Lewis, and Thompson (2006) conducted a Monte Carlo study and found no difference in statistical power between PLS and SEM.
PATH ANALYSIS
AMOS capabilities
Note: AMOS and other SEM software can impute missing values in your data set. However, imputing data restricts several features in an analysis. As a result, you are recommended to address missing values in SPSS first.
AMOS
Step by step, how to specify a conceptual model in AMOS:
1. Open the data set
2. Draw and run your model
3. Evaluate your model
Step 1: Setting up data
Step 2: Model specification
Step 2: Model specification (cont.)
Step 3: Model evaluation
10-min Exercise: Path analysis
Estimators
AMOS Results
Sample Moments: the sample variance–covariance matrix
Estimates: the estimated results based on the conceptual model. The results include:
Unstandardised estimates
Standardised estimates
Variances
Squared multiple correlations (R², or the coefficient of determination)
Estimated variance–covariance matrix (implied covariances)
Estimated correlation matrix (implied correlations)
Residual variance–covariance matrix (residual covariances)
Standardised residual variance–covariance matrix (standardized residual covariances)
AMOS Results
Assessment of Normality: univariate and multivariate normality tests
Observations farthest from the centroid: Mahalanobis distance (d²)
p1: the probability of d² for case i exceeding the observed value under multivariate normality
A small p1 value means that case i is probably an outlier under the assumption of multivariate normality.
Sample matrix vs model-estimated matrix
CMIN
CMIN is the minimum discrepancy function. AMOS supports the following functions:
Maximum likelihood (ML)
Generalized least squares (GLS)
Unweighted least squares (ULS)
Scale-free least squares (SLS)
Asymptotically distribution-free (ADF)
t-rule
To enable an identifiable model, you must ensure that the number of free parameters does not exceed the number of data points. This is a necessary but not sufficient condition for model identification. It can be checked with the formula below (Bollen, 1989):
t ≤ k(k + 1)/2
t = the number of free parameters (i.e. parameters that we estimate in a model)
k = the number of observed variables
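The rule above can be checked mechanically; this sketch counts the distinct sample moments (variances and covariances) available for k observed variables:

```python
# t-rule (Bollen, 1989): the number of free parameters t must not exceed
# the number of data points, i.e. distinct elements of the sample
# variance-covariance matrix: k(k + 1)/2. Necessary, not sufficient.
def satisfies_t_rule(t, k):
    data_points = k * (k + 1) // 2
    return t <= data_points

print(satisfies_t_rule(8, 4))   # 4 variables give 10 data points
print(satisfies_t_rule(11, 4))  # 11 free parameters exceed 10
```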
Index | Value | Reference
SRMR | < 0.07 | Yu (2002)
IFI | ≥ 0.95 |
TLI | ≥ 0.95 |
CFI | ≥ 0.96 |
PCFI | ≥ 0.85 |
BIC | Smallest | Pitt and Myung (2002)
Meaning: the over-identification condition of the model is at an acceptable level.
Assumptions
Explore your data further to see whether any assumption is severely violated.
Model
A misspecified model: your model is wrong, or the theory is wrong.
Modification index
Modification index (MI), or Lagrange multiplier (LM), is a measure that indicates how to fit the model better by estimating a new parameter. It provides two pieces of information:
χ²: this test tells you how much χ² would reduce when a parameter is modified.
Par Change: this measure tells you how much a parameter value will change when a parameter is modified.
A positive value means the current model underestimates the parameter.
A negative value means the current model overestimates the parameter.
Modified model
MI must be used with caution. Mindless model modification leads to the following issues:
Theoretical nonsense
Overfitting:
More error
Less statistical power
More capitalising on chance
Less predictive accuracy
More sample-dependence
Reduced generalisability
10-min Exercise: Path analysis #2
AMOS output includes, for each estimate: Estimate, S.E., z-test, and p-value (2-tailed).
Standardised estimates
Variances
Estimated variance–covariance matrix (implied covariances)
Estimated correlation matrix (implied correlations)
Residual variance–covariance matrix (residual covariances)
Standardised residual variance–covariance matrix (standardized residual covariances)
Direct effect
The direct effect of X on Y is the increase that we expect to see in Y, by βyx units, given a unit increase in X.
Indirect effect
The indirect effect of X on Y is the increase that we expect to see in Y, by βyz·βzx units, while leaving X untouched and increasing Z to whatever value Z would attain under a unit increase of X.
Total effect
The total effect of X on Y is the increase that we expect to see in Y, by βyx + βyz·βzx units, under a unit increase of X: the direct effect plus the indirect effect via Z (βzx on the X→Z path, βyz on the Z→Y path).
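With made-up path coefficients for the model X → Z → Y plus a direct path X → Y, the three effects combine as follows (the coefficient values are purely illustrative):

```python
# Hypothetical standardised path coefficients.
b_yx = 0.30  # direct path X -> Y
b_zx = 0.50  # path X -> Z
b_yz = 0.40  # path Z -> Y

direct = b_yx
indirect = b_zx * b_yz        # effect transmitted through Z
total = direct + indirect

print(direct, indirect, total)
```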
Interpretation of unstandardised estimates
Based on the results, if we were to change the value of EXPm from expm to expm+1, the value of TRU2 is expected to change from tru2 to tru2+0.166.
Basically, it means that the difference in the mean values of trust between inexperienced online shoppers and experienced online shoppers is 0.166.
Interpretation of standardised estimates
Based on the results, if we were to change ...
Use of estimates
Unstandardised:
To create a mathematical equation
To use as a priori parameters
To simulate a model
Stable across samples
Standardised:
To calculate an effect size
To communicate with others
To compare with other studies
Unstable across samples, since a parameter is standardised using a sample-specific standard deviation
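The sample-dependence of standardised estimates is visible in the conversion itself; all numbers below are made up for illustration:

```python
# A standardised estimate rescales the unstandardised one by the sample
# standard deviations of predictor and outcome, so it shifts whenever
# the sample SDs shift. Values are hypothetical.
b = 0.166     # unstandardised estimate (cf. the EXPm -> TRU2 example)
sd_x, sd_y = 0.48, 0.95

beta = b * sd_x / sd_y
print(round(beta, 3))

# The same b in a sample with a wider predictor SD gives a different beta:
print(round(b * 0.60 / sd_y, 3))
```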
SEM procedure
Model evaluation, then: does the model fit? If no, rectify the model and re-evaluate; if yes, proceed to path evaluation.
Moderation effect
An effect from one variable (e.g. z) on the strength of the relationship between two other variables.
Residual centering
Lance (1988) proposed the use of residual centering to address multicollinearity problems arising from creating a moderating variable. This is done by using the variables that were used to create the moderator to predict the moderator, and keeping the residuals.
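A sketch of the procedure in pure Python, on simulated data (the regression is solved via the normal equations; all variable names and values are illustrative): the product term x·z is regressed on x and z, and the residuals serve as the interaction variable.

```python
import random

def solve(a, b):
    """Solve the linear system a x = b by Gauss-Jordan elimination."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

random.seed(1)
x = [random.gauss(0, 1) for _ in range(500)]
z = [random.gauss(0, 1) for _ in range(500)]
xz = [a * b for a, b in zip(x, z)]       # raw product term

# OLS of xz on [1, x, z] via the normal equations X'X beta = X'y
X = [[1.0, a, b] for a, b in zip(x, z)]
xtx = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
xty = [sum(r[i] * y for r, y in zip(X, xz)) for i in range(3)]
beta = solve(xtx, xty)

fitted = [beta[0] + beta[1] * a + beta[2] * b for a, b in zip(x, z)]
moderator = [y - f for y, f in zip(xz, fitted)]  # residual-centered term

# By construction the residuals are (numerically) uncorrelated with x:
cov_rx = sum(r * a for r, a in zip(moderator, x)) / len(x)
print(abs(cov_rx) < 1e-8)
```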
Warning
Having only 2 variables does not produce any evidence about causality. You need at least 3 variables in a model. In fact, a third variable may affect the relationship between the 2 variables.
Cum hoc ergo propter hoc ("with this, therefore because of this") is a logical fallacy stating that correlation is causation. Correlation is a necessary but not sufficient condition for causation.
POWER ANALYSIS
Importance of statistical power
A significance test provides a p-value, which tells us the probability of the observed data given that H0 (the model is correct) is true.
If H0 is rejected, there is sufficient evidence to reject that the model is correct.
If H0 is not rejected but power is insufficient, the failure to reject may happen by chance, so it is only weak support for the model.
Close fit
H0: RMSEA = .06; Ha: RMSEA = .09
RMSEA scale: .00 (exact fit), .03, .06 (close fit), .09
Outcome | H0 is true | Ha is true
H0 is not rejected | Correct outcome [1−α] | False negative (Type II error) [β]
H0 is rejected | False positive (Type I error) [α] | Correct outcome [1−β]
Multiple-group analysis
SEM allows you to test a model
against multiple groups of samples
simultaneously. This technique is
useful when you
know/suspect/hypothesise that
samples are heterogeneous. This
allows us to control a Type I error.
Invariance analysis
SEM allows you to test a parameter
to be equal across groups of
samples. This technique is useful
when, for example, you hypothesise
that the effect of one variable on
another variable is constant across
groups of samples.
Invariance can be tested for slopes (structural parameters) and residuals.
Theory of measurement
Measurement paradigm
"A 20th century philosophy of measurement called representationalism saw numbers, not as properties inherent in an object, but as the result of relationships between measurement operations and the object" (Chrisman, 1995, p. 272).
"Measurement of magnitudes is, in its most general sense, any method by which a unique and reciprocal correspondence is established between all or some of the magnitudes of a kind and all or some of the numbers, integral, rational, or real, as the case may be... In this general sense, measurement demands some one–one relation between the numbers and magnitudes in question, a relation which may be direct or indirect, important or trivial, according to circumstances" (Russell, 1903, p. 176).
Different analyses require/support different levels of measurement.
Measurement paradigm
"The classical test theory model is the theory of psychological testing that is most often used in empirical applications. The central concept in classical test theory is the true score. The true scores are related to the observations through the use of the expectation operator: the true score is the expected value of the observed score." (Borsboom, 2005, p. 3)
The true score of any person on an item (τ) is the expected value of the observed score (X). The difference between the observed score and the true score is the error score (ε).
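The true-score definition can be illustrated with a small simulation (all numbers invented): the mean of many observed scores converges to the true score.

```python
import random
import statistics

# Classical test theory: observed score X = tau + epsilon.
random.seed(42)
tau = 7.0                                    # hypothetical true score
observed = [tau + random.gauss(0, 1.5) for _ in range(100_000)]

# E(X) approaches tau as the number of observations grows:
print(round(statistics.fmean(observed), 1))
```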
Measurement paradigm
The latent variable model has been proposed as an ...
Measurement paradigm
The representational measurement model, also known ...
In research ...
Measurement assembly
Ontological plane: construct
Theoretical plane: theories, models, scales, law statements
Empirical plane: x1, x2, x3, x4
Instrumental plane: X1, X2, X3, X4
TO REPRESENT
Content validity
1. Construct specification
Domain: what is included and what is excluded
Facets (substrata): some researchers refer to facets as dimensions
3. Assessment method
4. Item generation
Deduction: experience, theory, literature, instrument
Content expert
Population
Content validity
5. Item alignment
Use a table of the construct to map against items
Generate multiple items per facet
Adjust the number of items relative to the importance of each facet
6. Item examination
Suitability of items for a facet
Consistency, accuracy, specificity, and clarity of wording and definitions
Remove redundant items
Content validity
7. Quantitative parameters
Response formats and scales
Time-stamping parameters
8. Instrumentation
Create instructions to match the domain and function of the assessment instrument
Clarify and strive for specificity and appropriate grammatical structure
Content validity
9. Stimuli creation (e.g. social scenarios, audio and video presentations)
10. Pre-test (with experts, for steps 1–3 and 5–9)
11. Pilot test (with a sample)
12. Item screening (using the content validation process by Lindell (2001), Lindell, Brandt, & Whitney (1999), and Lindell & Brandt (1999))
Levels of measurement
Nominal (requires definitions of categories): sex
Graded membership: socio-economic status
Ordinal: rating scale
Interval: degree Celsius
Log-interval: Richter magnitude scale
Extensive ratio
Cyclic ratio: angle
Derived ratio: density, velocity
Counts: number of employees
Absolute: probability, proportion
TO EXPLAIN
Measurement analysis
Categorical vs continuous variables
Types of indicators
Reflective indicators (X1–X4, each with an error term e1–e4)
Formative indicators (X1–X4, no error terms on the indicators)
Reactive indicators (X1–X4, each with an error term e1–e4)
Types of measurement models
Parallel: equal factor loadings and equal error variances
τ-equivalent: equal factor loadings, unequal error variances
Essentially τ-equivalent: equal factor loadings, unequal error variances, intercepts may differ
Congeneric: unequal factor loadings and unequal error variances
Types of measurement models (cont.)
Variable-length model (λ = 1 for the reference indicator)
Measurement parameters
Mean: precision and difficulty.
Factor loading (slope): scale and discrimination.
Error variance: error and unique variance. Theoretically, it should be random error; however, if there is systematic error, 2 or more errors will be correlated.
EFA model diagram: factors F1 and F2 (independent), indicators X1–X4 with errors e1–e4.
EFA use
To partial the measurement error out of the observed scores
To explore underlying patterns in the data
To determine the number of latent variables
To reduce the number of variables
To assess the reliability of each item
To eliminate multi-dimensional items (i.e. cross-loaded variables)
Dimensionality
Multi-dimensional vs unidimensional construct
Multidimensional vs unidimensional factor
Multidimensional vs uni-dimensional item
EFA processes
Extraction
When you assume the data to be the population:
Principal axis factoring assumes that factors are hypothetical and that they can be estimated from variables.
Image factoring assumes that factors are real and that they can be estimated from variables.
Extraction
When you assume the data to be a sample randomly selected from the population:
ULS attempts to minimise the sum of squared differences between the estimated and observed correlation matrices, excluding the diagonal.
GLS attempts to minimise the sum of squared differences between the estimated and observed correlation matrices while accounting for the uniqueness of variables (i.e. the more unique a variable is, the less weight it has).
ML attempts to estimate parameters that are likely to produce the observed correlation matrix. The estimated correlation matrix accounts for the uniqueness of variables.
Kaiser's alpha factoring assumes that variables are randomly sampled from a universe of variables. It attempts to maximise the reliability of factors.
Rotation
Orthogonal rotation assumes factors to be uncorrelated with one another:
Quartimax maximises the sum of variances of loadings in the rows of the factor matrix. It attempts to minimise the number of factors needed to explain each variable. This method tends to make a number of variables load highly on a single factor.
Varimax maximises the sum of variances of loadings in the columns of the factor matrix. It attempts to minimise the number of variables highly loaded on each factor.
Equamax combines the Quartimax and Varimax approaches, but it is reported to behave erratically.
Rotation
Oblique rotation assumes factors to be correlated with one another:
Direct oblimin allows you to adjust the degree of correlation between factors. A delta value of 0 allows factors to be moderately correlated; a delta value of +0.8 allows factors to be more correlated, while a delta value of -0.8 allows factors to be less correlated.
Promax is quicker than direct oblimin. It is useful with a large sample.
KMO measure of sampling adequacy (Kaiser): in the .90s is marvellous, in the .80s is meritorious, in the .70s is middling, in the .60s is mediocre, in the .50s is miserable, and below .50 is unacceptable.
Reading Results
Factor Matrix shows the correlation between items and factors (i.e. factor loadings) before rotation takes place.
Pattern Matrix shows the correlation between items and factors after rotation takes place.
Structure Matrix shows the correlation between items and factors, accounting for the relationships between factors. It is the product of the pattern matrix and the factor correlation matrix.
Factor Correlation Matrix shows the correlation between factors.
Parallel analysis (file: rawpar)
To determine the maximum number of factors to extract by assessing eigenvalues of the data against those of a simulation. Factors to be extracted must have eigenvalues higher than those of the simulated data.
You may use the SPSS script provided by O'Connor (2000) or the online engine provided by Patil, Singh, Mishra, and Donovan (2008).
CFA model diagram: factors F1 and F2, indicators X1–X4 with errors e1–e4.
CFA
To test a specific measurement model (i.e. parallel, τ-equivalent, essentially τ-equivalent, congeneric, and variable-length models)
To test a higher-order factor model
To test construct validity (i.e. convergent validity, discriminant validity, and factorial validity)
To assess to what extent the measurement fits the data
To perform multiple-group or invariance analysis
To prepare the measurement model for structural equation modeling (SEM)
Measurement validation
Convergent validity
Discriminant validity
To test that 2 factors represent different things
Method: nested two-factor model (i.e. one model assumes two factors to be the same thing and the other does not), average variance extracted (AVE)
Factorial validity
To test that all factors in the measurement fit the data
Method: multi-factor model
Convergent validity
Discriminant validity
Correlation–AVE comparison
1. Calculate AVE using the formula AVE = Σλ² / (Σλ² + Σ Var(ε))
2. Compare each factor's AVE with the squared correlation between the factors; the AVE should be greater.
Ref: Fornell & Larcker (1981)
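Assuming standardised loadings (so each error variance is 1 − λ²), the comparison can be sketched as follows; the loadings and factor correlation below are invented for illustration:

```python
# AVE for one factor, assuming standardised loadings so each error
# variance is 1 - loading**2 (Fornell & Larcker, 1981).
def ave(loadings):
    shared = sum(l ** 2 for l in loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return shared / (shared + error)

f1 = [0.80, 0.75, 0.70]   # hypothetical loadings, factor 1
f2 = [0.85, 0.78, 0.72]   # hypothetical loadings, factor 2
r12 = 0.45                # hypothetical correlation between the factors

# Discriminant validity: each AVE should exceed the squared correlation.
print(ave(f1) > r12 ** 2 and ave(f2) > r12 ** 2)  # True
```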
Factorial validity
Construct validity
Convergent validity: fit? If not, fix/free a parameter, split a factor, or drop an item.
Discriminant validity: fit? Different? If not, fix/free a parameter, drop an item, or combine factors.
Factorial validity: fit? If not, fix/free a parameter or drop an item.
Correlated errors
Freeing a correlational parameter between error terms may be a post hoc practice to improve model fit. However, it should be supported by a theoretical explanation.
Gerbing and Anderson (1984) explain that one possible cause of correlated errors is multi-dimensionality.
TO DESCRIBE
Reliability
The extent to which your measurement can produce consistent results (i.e. precision).
Ref: Groves (2004, p. 10)
Error
Error consists of 2 components:
Bias is a constant error caused by the research design.
Variance is a variable error caused by obtaining data from different respondents, using different interviewers, and asking different questions.
Error in observation
Observation error consists of 4 components:
Interviewer error is caused by the ways interviewers administer the instrument.
Respondent error is caused by different individuals giving responses with different amounts of error.
Instrument error is caused by the design of instruments.
Mode error is caused by using different modes of enquiry.
Error in non-observation
Non-observation error consists of 3 components:
Coverage error is caused by failure to include samples in the sampling frame.
Non-response error is caused by respondents who cannot be located or refuse to respond.
Sampling error is caused by statistics producing results based on a subset of the population which may exhibit responses differently from other subsets.
Error in non-response
Non-response may be caused by:
Respondents lacking motivation or time
Fear of being registered
Travelling
Unlisted, wrong, or changed contact details
Answering machines
Telephone number display
Illness or impairment
Language problems
Business staff, owner, or structure changes
Too difficult or boring
Business policy
Low priority
Survey is too costly, or lack of time or staff
Sensitive or bad questions
Error in statistics
There are 2 components:
Systematic error represents something that is not captured in a model.
Random error, AKA measurement error, represents something that uniquely causes observed scores to deviate from true scores. This type of error is supported by latent variable models. It is a combination of error generated by interviewers, respondents, and instruments. When a multitrait–multimethod (MTMM) design is used, mode error can also be incorporated.
Instrument error
Unstated criteria
Wrong: How important is it for stores to carry a
large variety of different brands of this product?
Right: How important is it to you that the store you
shop at carries a large variety of different brands?
Inapplicable questions
Wrong: How long does it take you to find a parking
place after you arrive at the plant?
Right: If you drive to work, how long does it take
you to find a parking place after you arrive at the
plant?
Instrument error
Example containment
Wrong: What small appliances, such as countertop
appliances, have you purchased in the past month?
Right: Aside from major appliances, what other
smaller appliances have you bought in the past
month?
Over-demanding recall
Wrong: How many times did you go out on a date
with your spouse before you were married?
Right: How many months were you dating your
spouse before you were married?
Instrument error
Over-generalisations
Wrong: When you buy fast food, what percentage of the
time do you order each of the following type of food?
Right: Of the last 10 times you bought fast food, how
many times did you eat each type of food?
Over-specificity
Wrong: When you visited the museum, how many times
did you read the plaques that explain what the exhibit
contained?
Right: When you visited the museum, how often did you
read the plaques that explain what the exhibit contained?
Would you say always, often, sometimes, rarely, or never?
Instrument error
Over-emphasis
Wrong: Would you favour increasing taxes to
cope with the current fiscal crisis?
Right: Would you favour increasing taxes to
cope with the current fiscal problem?
Ambiguity of wording
Wrong: About what time do you ordinarily
eat dinner?
Right: About what time do you ordinarily
dine in the evening?
Instrument error
Double-barrelled questions
Wrong: Do you regularly take vitamins to avoid getting
sick?
Right: Do you regularly take vitamins? Why or why
not?
Leading questions
Wrong: Don't you see some danger in the new policy?
Right: Do you see any danger in the new policy?
Loaded questions
Wrong: Do you advocate a lower speed limit to save
human lives?
Right: Does traffic safety require a lower speed limit?
Respondent error
Social desirability
Response based on what is perceived as being socially
acceptable or respectable.
Acquiescence
Response based on respondents' perception of what would be desirable to the sponsor.
Prestige
Response intended to enhance the image of the
respondent in the eyes of others.
Respondent error
Threat
Response influenced by anxiety or fear
instilled by the nature of the question.
Hostility
Response arising from feelings of anger or
resentment engendered by the response task.
Auspices
Response dictated by the image or opinion of
the sponsor rather than the actual question.
Respondent error
Mental set
Cognitions or perceptions based on previous
items influence response to later ones.
Order
The sequence in which a series is listed
affects the responses to the items.
Extremity
Clarity of extreme options and ambiguity of mid-range options encourage extreme responses.
Error reduction
Error reduction methods address each type of error: coverage error; non-response error; sampling error; interviewer, respondent, and instrument error; and mode error.
Internal consistency
Internal consistency (e.g. coefficient alpha) assumes factor loadings to be equal (i.e. an essentially τ-equivalent model).
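For instance, coefficient alpha can be computed directly from item scores; the data below are invented Likert responses:

```python
import statistics

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of
# total scores). It estimates reliability without bias only under
# (essential) tau-equivalence.
def cronbach_alpha(items):
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]   # per-respondent totals
    item_vars = sum(statistics.variance(i) for i in items)
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# Three items, five respondents (rows are items; scores are made up):
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.86
```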
Construct reliability
When a model is not essentially τ-equivalent, construct reliability should be used instead of coefficient alpha (Hancock & Mueller, 2001).
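A sketch of construct (composite) reliability for a congeneric model, again assuming standardised loadings so each error variance is 1 − λ² (loading values are illustrative):

```python
# Construct reliability: (sum of loadings)^2 divided by itself plus the
# summed error variances.
def construct_reliability(loadings):
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return squared_sum / (squared_sum + error)

print(round(construct_reliability([0.80, 0.75, 0.70]), 2))  # 0.79
```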
Up to this point
Before conducting SEM, you must ensure that:
All factors hold construct validity, reliability, and uni-dimensionality.
All items hold uni-dimensionality.
Preferably, each factor has local independence (i.e. absence of correlation between errors).
SEM method
SEM method
1. Conceptualisation: this is done during the literature review or following a subsequent study. Theories are used to explain a phenomenon.
2. Instrumentation: concepts are linked to manifest variables to create an instrument and a procedure for data collection.
3. Identification: a model must be guaranteed to be identified. If it is not, then an additional variable must be included.
4. Mensuration: data are collected while minimising measurement error.
5. Preparation: data are collated, cleaned, and structured in a format supported by software.
SEM methodology
6. Specification: a model is specified properly in software. The difficulty level depends on which software and analysis are used.
7. Estimation: an appropriate estimator is chosen and used. Different estimators have different statistical assumptions and require different sample sizes. Estimators are sometimes limited by the type of analysis.
8. Evaluation: a model is evaluated at both the model and parameter levels to see that it fits the data. It is often done in the following sequence: EFA, CFA, and SEM.
SEM methodology
9. Rectification: in practice, a model often does not fit ...
SEM methodology
11. Selection: After the original and alternative models have been fitted, one model must be selected to represent the phenomenon.
12. Explanation: The fitted and selected model must be explained. This also includes any correlation between error terms.
191
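Step 11 is often operationalised with information criteria such as AIC and BIC, which penalise model complexity. The sketch below uses one common χ²-based form sometimes reported by SEM software; all fit values are hypothetical.

```python
import math

def aic(chi_sq: float, n_free_params: float) -> float:
    """Chi-square-based AIC for SEM model comparison: chi^2 + 2t."""
    return chi_sq + 2 * n_free_params

def bic(chi_sq: float, n_free_params: float, n: int) -> float:
    """Chi-square-based BIC: chi^2 + t * ln(n); penalises parameters
    more heavily than AIC as the sample grows."""
    return chi_sq + n_free_params * math.log(n)

# Hypothetical competing models fitted to the same data (n = 300)
m1 = aic(85.4, 13)  # original model
m2 = aic(79.8, 15)  # alternative model with two extra parameters
print(m1, m2)       # lower is better, so the alternative is preferred here
```

Whichever criterion is used, the comparison is only meaningful between models fitted to the same data.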
References
Alreck, P. L., & Settle, R. B. (2003). The survey research handbook (3rd ed.). New York, NY, USA: McGraw-Hill/Irwin.
Bentler, P. M. (1980). Multivariate analysis with latent variables: Causal modeling. Annual Review of Psychology, 31(1), 419–456.
Biemer, P. P., & Lyberg, L. E. (2003). Introduction to survey quality. Hoboken, NJ, USA: John Wiley & Sons, Inc.
Bollen, K. A. (1989). Structural equations with latent variables. New York, NY, USA: Wiley.
Borsboom, D. (2005). Measuring the mind: Conceptual issues in contemporary psychometrics. Cambridge, UK: Cambridge University Press.
Bunge, M. (2009). Causality and modern science (4th ed.). New Brunswick, NJ, USA: Transaction Publishers.
Chrisman, N. R. (1995). Beyond Stevens: A revised approach to measurement for geographic information. In 13th International Symposium on Computer-Assisted Cartography (pp. 271–280). Charlotte, NC, USA.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39–50.
Gerbing, D. W., & Anderson, J. C. (1984). On the meaning of within-factor correlated measurement errors. The Journal of Consumer Research, 11(1), 572–580.
Goodhue, D., Lewis, W., & Thompson, R. (2006). PLS, small sample size, and statistical power in MIS research. Presented at the Hawaii International Conference on System Sciences, IEEE.
Graham, J. M. (2006). Congeneric and (essentially) tau-equivalent estimates of score reliability: What they are and how to use them. Educational and Psychological Measurement, 66(6), 930–944.
Groves, R. M. (2004). Survey errors and survey costs. Hoboken, NJ, USA: John Wiley & Sons, Inc.
Hancock, G. R., & Mueller, R. O. (2001). Rethinking construct reliability within latent variable systems. In R. Cudeck, S. du Toit, & D. Sörbom (Eds.), Structural equation modeling: Present and future (pp. 195–216). Lincolnwood, IL, USA: Scientific Software International, Inc.
192
Hayduk, L. A., Robinson, H. P., Cummings, G. G., Boadu, K., Verbeek, E. L., & Perks, T. A. (2007). The weird world, and equally weird measurement models: Reactive indicators and the validity revolution. Structural Equation Modeling, 14(2), 280–310.
Haynes, S. N., Richard, D. C., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7(3), 238–247.
Jaccard, J., & Jacoby, J. (2009). Theory construction and model-building skills: A practical guide for social scientists. New York, NY, USA: The Guilford Press.
Jöreskog, K. G. (1978). Structural analysis of covariance and correlation matrices. Psychometrika, 43(4), 443–477.
Lance, C. E. (1988). Residual centering, exploratory and confirmatory moderator analysis, and decomposition of effects in path models containing interactions. Applied Psychological Measurement, 12(2), 163–175.
Lindell, M. K. (2001). Assessing and testing interrater agreement on a single target using multi-item rating scales. Applied Psychological Measurement, 25(1), 89–99.
Lindell, M. K., & Brandt, C. J. (1999). Assessing interrater agreement on the job relevance of a test: A comparison of CVI, T, rWG(J), and r*WG(J) indexes. Journal of Applied Psychology, 84(4), 640.
Lindell, M. K., Brandt, C. J., & Whitney, D. J. (1999). A revised index of interrater agreement for multi-item ratings of a single target. Applied Psychological Measurement, 23(2), 127–135.
MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130–149.
Molla, A., Cooper, V., & Pittayachawan, S. (2011). The green IT readiness (g-readiness) of organizations: An exploratory analysis of a construct and instrument. Communications of the Association for Information Systems, 29(1), 67–96.
Mulaik, S. A. (1998). Parsimony and model evaluation. The Journal of Experimental Education, 66(3), 266–273.
Munck, I. M. E. (1979). Model building in comparative education: Applications of the LISREL method to cross-national survey data. International Association for the Evaluation of Educational Achievement Monograph Series No. 10. Stockholm: Almqvist & Wiksell.
193
Novick, M. R., & Lewis, C. (1967). Coefficient alpha and the reliability of composite measurements. Psychometrika, 32(1), 1–13.
O'Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instruments, & Computers, 32(3), 396–402.
Patil, V. H., Singh, S. N., Mishra, S., & Todd Donavan, D. (2008). Efficient theory development and factor retention criteria: Abandon the 'eigenvalue greater than one' criterion. Journal of Business Research, 61(2), 162–170.
Pearl, J. (1995). Causal diagrams for empirical research. Biometrika, 82(4), 669–688.
Pearl, J. (1998). Graphs, causality, and structural equation models. Sociological Methods & Research, 27(2), 226–284.
Pitt, M. A., & Myung, I. J. (2002). When a good fit can be bad. Trends in Cognitive Sciences, 6(10), 421–425.
Russell, B. (1903). Principles of mathematics. Cambridge, UK: Cambridge University Press.
Schmidt, F. L., & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In L. L. Harlow, S. A. Mulaik, & J. H. Steiger (Eds.), What if there were no significance tests? (pp. 37–64). Mahwah, NJ, USA: Lawrence Erlbaum Associates, Inc.
Wright, S. (1923). The theory of path coefficients: A reply to Niles's criticism. Genetics, 8(3), 239–255.
Yu, C.-Y. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes (Doctoral dissertation). University of California, Los Angeles.
194