Ellen M. Hufnagel
ABSTRACT

Although the user satisfaction survey has been widely used to evaluate system effectiveness, the subjective aspect of this approach has led some researchers to question its usefulness. Drawing upon attribution theory, this study examines the effects of performance outcomes on users' judgments about the information system at the conclusion of a computer-based business game. Results indicate that those users who successfully performed the task attributed their performance outcomes to their own effort and understanding, while those who were unsuccessful tended to blame their poor performance on luck and/or the quality of the system. The relationship between user expectations and actual outcomes was also linked to performance attributions.

The patterns of causal reasoning observed here raise serious questions about the validity of employing user satisfaction ratings as measures of system effectiveness and may lead to a search for more meaningful methods of evaluating information systems.
But the subjective aspect of user satisfaction surveys has led some researchers to question our rather wholesale acceptance of these evaluation tools. As Davis and Srinivasan [20] point out, the user satisfaction approach ultimately hinges on three assumptions:
First, the perception of the user with
regard to the system being used is an
accurate indicator of system effectiveness.
Second, perceptions of several users of a
system can be aggregated to arrive at an
overall assessment of the system under
study.
Third, user satisfaction with a
system can be accurately measured (p. 91).
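The aggregation step in the second assumption can be sketched in a few lines. This is a hypothetical illustration, not a procedure from the paper; the function name, the 7-point scale, and the sample ratings are all assumptions made for the example:

```python
# Hypothetical illustration (not from the study) of the second
# assumption above: aggregating several users' satisfaction ratings
# into a single overall assessment of the system.

def overall_satisfaction(ratings):
    """Average individual users' ratings, e.g. on a 7-point scale."""
    return sum(ratings) / len(ratings)

ratings = [6, 5, 7, 4, 6]  # hypothetical ratings from five users
print(overall_satisfaction(ratings))  # → 5.6
```

Even this simple averaging presupposes that the individual ratings are comparable and accurate, which is exactly what the three assumptions above assert and what the present study calls into question.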
Most evaluation studies that have employed user satisfaction as a measure of system effectiveness have paid little or no attention to these underlying assumptions. Furthermore, very few studies have any real theoretical basis to support hypotheses regarding user attitudes [19]. In the absence of a strong theoretical foundation, it is not surprising that user attitude research to date has produced inconsistent and sometimes conflicting results.
THE SATISFACTION-PERFORMANCE RELATIONSHIP
[Figure 3. Causal factors classified by locus of causality (internal/external) and stability (stable/unstable): Ability is internal and stable, Effort internal and unstable, Task Difficulty external and stable, and Luck external and unstable. Successful performance is associated with internal attributions, failure with external attributions.]
These investigations have typically demonstrated that, where congruence exists between an individual's expectations and the actual outcome, attributions are more frequently made to stable factors such as ability or task difficulty [12,13,16]. This is not surprising given that an individual's expectations about task performance are probably based on his estimate of his own abilities as well as his perceptions of task difficulty. On the other hand, incongruence between expectations and achieved outcomes leads to more frequent attributions to unstable causes such as luck and effort, perhaps because the other factors contributing to performance (ability and task difficulty) are relatively stable, at least in the short run. The relationship between performance expectations, outcomes and causal attributions is depicted in Figure 4.
[Figure 4. Congruence between expected and actual performance leads to stable attributions; incongruence leads to unstable attributions.]
Individuals who are unsuccessful in performing a computer-based task will make stronger external attributions to the quality of the computer system than will those who are successful.
H2b: Individuals who experience congruence between their expectations and outcomes (i.e., expect to do well and do, or expect to do poorly and do) will make stronger attributions to the computer system than will those who experience incongruence between expectations and actual outcomes.
In addition to these attributional hypotheses, which are the primary focus of this study, a third hypothesis attempts to link attribution theory to user satisfaction research by examining the relationship between performance outcomes (success/failure) and the performers' subjective assessments of the computer system:

H3: Individuals who perform well on a computer-based task will view the system as more valuable than will those who perform poorly, regardless of the quality of the information system.
A positive finding with respect to Hypothesis 3
would seem to suggest that user satisfaction
ratings may in fact be quite subjective and, in
some instances, may be unrelated to any objective
measure of system quality.
A STUDY OF USER PERFORMANCE ATTRIBUTIONS
Experimental Procedures
Data Analysis
Performance Outcomes and Causal Attributions. The first research hypothesis (H1a) predicted that participants who failed would make stronger performance attributions to external factors than would those who succeeded. For purposes of hypothesis testing, respondents were divided into two groups, SUCCESS and FAILURE, based on the actual payoffs they earned. Those who received the $5 minimum wage (n=51) were considered to have failed, while those who received $5 plus a performance bonus (n=29) were considered to have succeeded.
------------------------------------------------------------
                        MEAN SCORES
                   Success   Failure      F         p
INTERNAL FACTORS
  Effort             4.90      3.70     15.74     .0002
  Understanding      4.69      3.90      3.92     .0513
EXTERNAL FACTORS
  Model Quality      3.69      4.33      5.45     .0221
  Difficulty         3.62      3.14      2.50     .1100
  Luck               4.07      4.82      3.64     .0599
------------------------------------------------------------
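The two-group comparisons reported above can be sketched as a one-way analysis of variance on the attribution ratings. The following is a minimal illustration, not the study's analysis code; the function and the sample ratings (assumed 7-point-scale values) are hypothetical:

```python
# Hypothetical sketch of a two-group comparison like those reported in
# the table above: a one-way ANOVA F statistic on attribution ratings.
# The rating values below are illustrative, not the study's raw data.

def f_two_groups(a, b):
    """One-way ANOVA F statistic for two independent groups."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    grand = (sum(a) + sum(b)) / (n1 + n2)
    # Between-groups and within-groups sums of squares
    ss_between = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
    ss_within = sum((x - m1) ** 2 for x in a) + sum((x - m2) ** 2 for x in b)
    df_between, df_within = 1, n1 + n2 - 2
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative "effort" ratings for successful vs. unsuccessful performers
success = [5, 6, 5, 4, 5, 6]
failure = [4, 3, 4, 3, 4, 4]
print(round(f_two_groups(success, failure), 2))  # → 16.2
```

With two groups the F statistic is simply the square of the pooled-variance t statistic, so this comparison is equivalent to an independent-samples t test on the group means.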
Discussion
Consistent with the first research hypothesis and with previous attribution research, the causal factors noted by both successful and unsuccessful performers tended to be hedonically biased. In general, the successful performers made stronger attributions to internal factors (their own effort and understanding), while the unsuccessful performers made stronger attributions to external causes (the quality of the computer system and luck). Task difficulty was the only causal factor that failed to produce the anticipated outcome-dependent attributions; neither of the performance groups viewed the difficulty of the task as contributing significantly to their outcomes.
The fact that users in this experiment tended to discount the contribution of the computer system when things went well and to blame the system when things went poorly suggests that user satisfaction may be a less than adequate surrogate for system effectiveness when the actual contribution of the system is ambiguous or difficult to quantify from the user's perspective. For example, it may be particularly troublesome as a surrogate for effectiveness when users are inexperienced at performing the task in question, do not have a good understanding of how the system actually works, or are otherwise unable to judge the impact system use has had on their outcomes. Expert systems and some types of decision support systems that are specifically designed to aid novice users may be especially vulnerable to the problem of self-serving causal attributions if users expect that the answers provided by the system will necessarily be "right."

REFERENCES
[10] Davis, J. and Srinivasan, A. (1988). "Incorporating user diversity into information systems assessment," in N. Bjorn-Andersen and G. Davis (eds), Information Systems Assessment, (Amsterdam: North-Holland): 83-98.

[18] Ginzberg, M. (1981). "Early diagnosis of MIS implementation failure: Promising results and unanswered questions," Management Science, 27, 4.

[19] Goodhue, D. (1986). "IS attitudes: Toward theoretical and definition clarity," Proceedings of the Seventh International Conference on Information Systems, San Diego, CA: 181-194.

[21] Harvey, J. and Weary, G. (1981). Perspectives on Attributional Processes, (Dubuque, IA: Wm. C. Brown).

[22] Heider, F. (1944). "Social perception and phenomenal causality," Psychological Review, 51: 358-384.

[23] Heider, F. (1958). The Psychology of Interpersonal Relations, (New York: Wiley).

[27] Kelley, H. and Michela, J. (1980). "Attribution theory and research," in M.R. Rosenzweig and L.M. Porter (eds), Annual Review of Psychology, (Palo Alto, CA: Annual Review, Inc.): 457-501.

[30] Miller, D. (1976). "Ego involvement and attributions for success and failure," Journal of Personality and Social Psychology, 34: 901-906.