
USER SATISFACTION

ARE WE REALLY MEASURING SYSTEM EFFECTIVENESS?

Ellen M. Hufnagel

Department of Information Systems and Decision Sciences


University of South Florida

ABSTRACT
Although the user satisfaction survey has been widely used to evaluate system effectiveness, the subjective aspect of this approach has led some researchers to question its usefulness. Drawing upon attribution theory, this study examines the effects of performance outcomes on users' judgments about the information system at the conclusion of a computer-based business game. Results indicate that those users who successfully performed the task attributed their performance outcomes to their own effort and understanding, while those who were unsuccessful tended to blame their poor performance on luck and/or the quality of the system. The relationship between user expectations and actual outcomes was also linked to performance attributions. The patterns of causal reasoning observed here raise serious questions about the validity of employing user satisfaction ratings as measures of system effectiveness and may lead to a search for more meaningful methods of evaluating information systems.

But the subjective aspect of user satisfaction surveys has led some researchers to question our rather wholesale acceptance of these evaluation tools. As Davis and Srinivasan [20] point out, the user satisfaction approach ultimately hinges on three assumptions:

First, the perception of the user with regard to the system being used is an accurate indicator of system effectiveness. Second, perceptions of several users of a system can be aggregated to arrive at an overall assessment of the system under study. Third, user satisfaction with a system can be accurately measured (p. 91).

Most evaluation studies that have employed user satisfaction as a measure of system effectiveness have paid little or no attention to these underlying assumptions. Furthermore, very few studies have any real theoretical basis to support hypotheses regarding user attitudes [19]. In the absence of a strong theoretical foundation, it is not surprising that user attitude research to date has produced inconsistent and sometimes conflicting results.

There is general agreement in the field of information systems regarding the need to evaluate computer systems that support decision making in order to determine whether or not they are functioning as intended [24,37]. To this end, reasonably sound techniques have been developed for assessing the efficiency and reliability of these systems. Still problematic, however, is the task of evaluating effectiveness, or the extent to which a given information system actually contributes to the achievement of organizational objectives.

The purpose of this paper is to begin examining the first of the three assumptions explicated by Davis and Srinivasan, namely, that user perceptions regarding an IS provide an accurate measure of system quality. (Note that the second assumption is only valid if the first assumption is valid.) The study described here draws upon an extensive body of theoretical and empirical research on causal attributions to suggest an alternative explanation for users' affective responses to an information system. Although the results of this project must be viewed as preliminary and exploratory, they suggest that, in some cases, user satisfaction ratings may be a reflection of individual performance outcomes (i.e., success or failure), rather than an objective assessment of system quality.

One of the most widely used methods of evaluating system effectiveness is the user satisfaction survey. Although a number of different user satisfaction questionnaires have been employed in IS evaluation studies, these instruments generally attempt to measure the success of a system in terms of users' feelings: are they satisfied or dissatisfied with the system as it was delivered? Proponents of the user satisfaction approach argue that the subjective assessments that result provide "a meaningful 'surrogate' for the critical but unmeasurable result of an information system, namely, changes in organizational effectiveness" [25].

This work was supported, in part, by the University of South Florida Research and Creative Scholarship Grant Program, Grant No. 1407 931 RO. Additional support was provided by the University of South Florida College of Business Administration Summer Grant Program.

0073-1129/90/0000/0437$01.00 © 1990 IEEE


THE SATISFACTION-PERFORMANCE RELATIONSHIP

The explanation is necessarily specific to the event itself and takes into account a variety of factors, such as the performer's past history in similar situations and his knowledge of how others have performed. Yet, despite the variety of factors that may affect the judgment process itself, empirical evidence indicates a tendency for individuals to attribute achievement-related results to a relatively limited set of causes, including ability, effort, task difficulty, and luck [42]. Furthermore, although the attribution process may not be readily apparent to the observer of human behavior, it is believed that most people have fairly well-established attributional patterns that they follow rather routinely [4].

Ives et al. [25] trace the concept of user satisfaction back to Cyert and March [9], who asserted that systems which provide direct benefits to their users will produce feelings of satisfaction with those systems. Although the link between satisfaction and organizational effectiveness is not explicitly defined, a satisfied user will presumably use the system more efficiently or more effectively than a dissatisfied user and, as a result, increase his level of performance on behalf of the organization. Thus, user satisfaction has traditionally been treated as an intervening variable linking system quality to performance outcomes (see Figure 1).

To facilitate prediction and testing of achievement-related attributions, the perceived causes of success and failure have typically been viewed along two dimensions proposed by Weiner et al. [43]. The first dimension, locus of causality, may be internal or external, depending on whether the cause or explanation for events emphasizes factors that relate to the performer himself versus factors that originate from external sources. Effort and ability are considered to be internal factors, while task difficulty, the acts of others, and luck or chance are obviously external factors.

Although the Satisfaction-Performance model is an intuitively appealing one, recent psychological investigations in the area of attribution theory suggest that performance may be a cause, rather than a consequence, of satisfaction. Furthermore, attribution research not only provides evidence to support a performance-driven model, but it also suggests that an individual's affective response to a given performance outcome is often highly subjective and, in some cases, highly ego-defensive.

The second dimension, stability, refers to the extent to which a causal factor fluctuates over time or across repeated performances. The stability of causes is especially important from the standpoint of future action:
If one attains success (or failure) and if
the conditions or causes are perceived as
remaining unchanged, then success (or
failure) will be anticipated with a greater
degree of certainty. But if the conditions
or causes are subject to change, then there
is some doubt that the prior outcome will
be repeated [40:9].

The Attribution Process


Attribution theory is concerned with the cognitive processes that individuals engage in to explain performance in situations where causal relations are ambiguous [42]. It has its theoretical roots in the seminal work of Heider [22,23], who argued that a person's ability to exercise control over his environment depends on his ability to recognize causal relationships, that is, to identify events or actions that result in a particular outcome or effect. The way in which an individual responds to a given outcome, both in terms of his feelings about the outcome and his future actions, depends on the particular inferences that are drawn regarding the cause(s). Thus, the causal attributions that an individual makes are important determinants of future behavior. They provide the basis for subsequent decisions about the actions required to bring about either continuance or discontinuance of those same effects [26].

Ability and task difficulty are regarded as stable factors, while luck and mood are considered unstable. Effort is generally classified as unstable, as people do vary their level of effort over time. Nevertheless, researchers recognize that there is likely a level of effort that is characteristic of or "normal" for a given individual which might in some situations be regarded as stable [4]. A third dimension, controllability, was later added to Weiner's classification scheme by Rosenbaum [34] to reflect the performer's ability to deliberately control a causal factor. Addition of this causal dimension reflects researchers' recognition that some causes, such as effort, are volitionally controlled, while others, like mood, may not be. To date, relatively little research has focused on the controllability dimension and its significance is therefore less than clear. In this study, only the two causal dimensions originally proposed by Weiner are considered.

Attributional judgments begin with an event or performance outcome which is interpreted by the individual as either a success or a failure. Having so classified the event or outcome, the individual utilizes whatever information is available to arrive at an explanation of what caused the observed result.


These two dimensions and the four most frequently cited causal factors are shown in Figure 2.

[Figure 2. Classification of Causal Factors. A two-by-two grid of locus of causality (internal, external) by stability of causes (stable, unstable): ability is internal/stable, task difficulty is external/stable, effort is internal/unstable, and luck is external/unstable.]

[Figure 3. Relationship Between Performance Outcomes and Causal Attributions. Successful performance leads to internal attributions; failure leads to external attributions.]
Expectations, Outcomes and Attributions


A number of researchers [12,13,17,28,29] have also linked expectancy-value theory with attribution theory, arguing that the way in which a person evaluates his performance on a task is a result of not only the outcome itself, but also the individual's prior expectancy of success (or failure). In these studies, the experimenter either manipulated or measured prior expectancies, giving participants false success or failure feedback or having them rate their own abilities and confidence prior to task performance [1,42]. Attribution data was gathered after the task was completed and the outcome known.

Performance Outcomes and Causal Attributions

A substantial body of research has systematically investigated the perceived causes of success and failure in achievement-related situations [41]. Numerous empirical studies (e.g., see [2,6,20,27,33,35,39,44]) provide evidence that individuals typically attribute their own successful performance to internal factors such as their ability and effort, while attributing poor performance to external factors such as unclear instructions, the difficulty of the task, or bad luck. In other words, people generally accept less responsibility for failure than for success [32].

These investigations have typically demonstrated that, where congruence exists between an individual's expectations and the actual outcome, attributions are more frequently made to stable factors such as ability or task difficulty [12,13,16]. This is not surprising given that an individual's expectations about task performance are probably based on his estimate of his own abilities as well as his perceptions of task difficulty. On the other hand, incongruence between expectations and achieved outcomes leads to more frequent attributions to unstable causes such as luck and effort, perhaps because the other factors contributing to performance (ability and task difficulty) are relatively stable, at least in the short run. The relationship between performance expectations, outcomes, and causal attributions is depicted in Figure 4.

Many researchers believe that these hedonic or self-serving attribution patterns imply a subconscious tendency to enhance or protect self-esteem [5,8,21]. Others argue that self-serving attributions may be the result of insufficient information about the true reasons for a given outcome, or logical errors in processing the information that is available [30,31]. A recent study by Riess et al. [33] provides evidence that self-serving attributions may also be a result of the individual's need to manage the impressions others have of him. Although further study is required to resolve the question of why people make self-serving attributions, the evidence is nonetheless clear that successful performances tend to produce internal, self-attributions, while poor performances produce external, other-attributions, as shown in Figure 3.

ATTRIBUTION THEORY AND USER SATISFACTION: SOME HYPOTHESES

[Figure 4. Expectations, Performance Outcomes, and Causal Attributions. Congruence between performance expectations and outcomes leads to attributions to stable causes; incongruence leads to attributions to unstable causes.]

Because causal reasoning is a nearly universal phenomenon, attribution theory has proved useful in helping to explain a wide variety of behaviors, including how consumers respond to product failure [14], how superiors evaluate subordinates' performance [35], how organizations respond to performance downturns [15], and how corporate managers explain unexpected financial results to their shareholders [3,38]. Attribution theory may also help to explain how users evaluate information systems, particularly systems which appear to have a direct and immediate impact on their performance. For example, individuals who perform poorly on a task which involves the use of an information system may be more inclined to blame the results on poor system design, while those who do well may discount the system's contribution altogether. Differences between users' expectations and outcomes may also bias the evaluation. In fact, Ginzberg [18] found that users were less satisfied with their information systems when the system as delivered was significantly different from what they expected, regardless of whether the system was better or worse than anticipated. Thus expectation-outcome incongruence can lead to unwarranted conclusions about the value of the system, making it very difficult to obtain objective assessments of system quality from individual users.
Based on the relationships between expectations, outcomes, and attributions suggested by attribution theory and illustrated in Figures 3 and 4, two pairs of (alternative) hypotheses were formulated and tested. The first hypothesis in each pair is derived directly from attribution theory and is concerned with differences in the total pattern of attributions made by users who were successful in performing a task versus those who were unsuccessful. The second hypothesis in the pair is a reformulation of the first wherein the sole focus is on differences in the attributions made to a single causal factor, quality of the computer system, which is classified as an external, stable causal factor.
Hypotheses derived from Figure 3:

H1a: Individuals who are unsuccessful in performing a computer-based task will make stronger external attributions than will those who are successful.

H1b: Individuals who are unsuccessful in performing a computer-based task will make stronger external attributions to the quality of the computer system than will those who are successful.

Hypotheses derived from Figure 4:

H2a: Individuals who experience congruence between their expectations and outcomes (i.e., expect to do well and do, or expect to do poorly and do) will make stronger attributions to stable factors than will those who experience incongruence between expectations and actual outcomes.

H2b: Individuals who experience congruence between their expectations and outcomes (i.e., expect to do well and do, or expect to do poorly and do) will make stronger attributions to the computer system than will those who experience incongruence between expectations and actual outcomes.
In addition to these attributional hypotheses, which are the primary focus of this study, a third hypothesis attempts to link attribution theory to user satisfaction research by examining the relationship between performance outcomes (success/failure) and the performers' subjective assessments of the computer system:

H3: Individuals who perform well on a computer-based task will view the system as more valuable than will those who perform poorly, regardless of the quality of the information system.

A positive finding with respect to Hypothesis 3 would seem to suggest that user satisfaction ratings may in fact be quite subjective and, in some instances, may be unrelated to any objective measure of system quality.
A STUDY OF USER PERFORMANCE ATTRIBUTIONS

Experimental Procedures

The Task and Setting. To test the research hypotheses, eighty MBA students were recruited to participate in a laboratory session for the stated purpose of evaluating eight newly-developed, mainframe-based, computer models. Ten of the volunteers were randomly assigned to each of eight experimental groups and asked to solve a series of standard accounting problems that had been covered in a core MBA course.
Participants in each group
were asked to review 20 reported variances in
manufacturing costs and to decide in each case
whether to spend additional (MIS) funds to
investigate the variance situation using their
assigned computer model.
If a decision maker
chose to investigate but found nothing wrong with
the production equipment, the investigation costs
were charged to his MIS account. However, if the
decision maker failed to investigate and the
variance was due to equipment problems,
substantial excess manufacturing costs were
incurred.
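The cost rules of the variance-investigation game can be sketched as follows. The paper does not report the actual dollar amounts, so the two cost constants below are invented for illustration:

```python
# Hypothetical cost parameters; the study does not publish the real figures.
INVESTIGATE_COST = 100  # investigation cost, charged to the MIS account
EXCESS_COST = 400       # excess manufacturing cost for an uninvestigated problem

def variance_costs(investigate: bool, equipment_problem: bool):
    """Return (mis_cost, manufacturing_cost) for one reported variance.

    Per the rules above: investigation costs hit the MIS account only when
    nothing is found wrong; excess manufacturing costs are incurred only
    when an equipment problem goes uninvestigated.
    """
    mis_cost = INVESTIGATE_COST if (investigate and not equipment_problem) else 0
    manufacturing_cost = EXCESS_COST if (equipment_problem and not investigate) else 0
    return mis_cost, manufacturing_cost
```

A participant's two running totals (MIS costs and manufacturing costs) would then be the sums of these pairs over the 20 variance problems.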
Although the participants were led to believe
that they had been randomly assigned to test one
of eight computer models, in fact, no such
computer system existed. A simple microcomputer
system was used to present the same set of choices
to each participant and to calculate the cost of
the alternative selected. Thus, all participants
played the game with exactly the same set of tools
to assist them.
Participants were informed that their
performance would be evaluated relative to their
peers, based on the extent to which they were able
to minimize both manufacturing costs and MIS costs

unlikely that many of the respondents would attribute their outcomes to "ability." To compensate for this missing factor and provide a range of choices, "understanding of the cost structure" was included in the list of causes as a surrogate for "ability."

(the costs of using the system). Since total costs were a function of the particular choices made by each of the participants on 20 problems, the range of actual performance outcomes was potentially quite large; however, no attempt was made to manipulate participants' decision choices or the payoffs they received. Experimental sessions lasted approximately one hour and were held over the course of two weeks. Each session was limited to 10 students to further reinforce the deception regarding the different models being tested. At the conclusion of each laboratory session, participants completed a System Evaluation form and were advised not to discuss the exercise with others, as revealing their decision strategies could affect everyone's payoffs.

Although the majority of attribution studies have employed structured rating scales such as the ones used in this study, there are situations in which open-ended response measures may be more desirable, as they allow participants greater latitude in responding. Nevertheless, structured scales were used here because they have been shown to have better intertest validity and reliability, in part because no recoding of responses is required [11].
Measuring Performance Expectations. At the conclusion of each laboratory session, all experimental subjects were required to complete a System Evaluation questionnaire which asked them a variety of questions about the exercise itself. The System Evaluation form also asked them to indicate their expectations about their own performance in each cost category relative to their classmates (Where do you think your Plant Costs will fall relative to other participants in this game? Top 25%, next 25%, etc.). Based on the answers given for the two cost categories, each participant's expected pay was calculated by applying the payoff scheme.
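As a concrete, purely hypothetical illustration of this step, expected pay might be looked up from the self-rated quartiles. The payoff amounts below are invented, since the paper does not reproduce the actual scheme:

```python
# Invented payoff amounts per self-rated quartile; the study's real payoff
# scheme is not given in the paper.
PAYOFF_BY_QUARTILE = {"top 25%": 8.0, "next 25%": 5.0,
                      "third 25%": 3.0, "bottom 25%": 2.5}

def expected_pay(plant_quartile: str, mis_quartile: str) -> float:
    # Expected pay combines the participant's self-placement in the two
    # cost categories (Plant Costs and MIS costs) under the assumed scheme.
    return PAYOFF_BY_QUARTILE[plant_quartile] + PAYOFF_BY_QUARTILE[mis_quartile]
```

The point of the construction is only that each participant's expectation can be converted to the same dollar units as the actual payoff, so the two can later be compared directly.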

After all 80 participants had completed the exercise, total costs were calculated and posted, and a payoff time was scheduled. Participants were paid based on their overall performance relative to others who participated in the study: those who focused on one set of costs at the expense of the other (low-performers) were paid $5 for their participation, while those who successfully minimized both sets of costs (high-performers) earned as much as $21, depending on the total costs incurred. Because performance outcomes were a result of the actual decision choices made by each participant, rather than an experimental manipulation, the Success/Failure determination required for hypothesis testing was based on their final payoffs. Twenty-nine students earned between $6 and $21 for successfully minimizing their total costs. These students were congratulated on their successful performance and asked to complete a Final Evaluation form. The remaining 51 students were told that, despite the fact that they had not done well on the task, their participation was appreciated and they would be paid $5 for trying. The poor performers were also asked to complete the Final Evaluation form. All experimental participants were fully debriefed after completing this final questionnaire.

Measuring User Perceptions of System Value. The primary intent of this study was to investigate users' causal reasoning patterns in an achievement-related setting; however, two simple measures of perceived system value were included to begin examining the relationship between attributions and affect.

In order to gauge participants' reactions to the (hypothetical) computer system prior to determination of the outcomes, participants were asked immediately after completing the task whether or not they would consider buying the computer software they tested if they were responsible for manufacturing quality control. This information was used as a baseline for identifying changes in participants' assessments of the system following publication of their performance results. At payoff time, participants were again asked to record their feelings about the value of their computer systems by indicating, on a 5-point scale, whether they thought the system they tested had performed better or worse than those of their competitors.

Assessing Causal Attributions. The Final Evaluation form that participants completed at payoff time asked them to indicate, on a series of 7-point Likert scales, the extent to which they believed their performance was affected by a variety of different factors, including the amount of effort expended, the quality of the computer system used, how well they understood the cost structure, good luck/bad luck, and the difficulty of the task itself.

Unlike the scales that were used to assess causal attributions, neither of these single-item measures of perceived system value has been validated in previous research, nor have they been included in standard instruments designed to measure user satisfaction. However, since the hypothesis being tested (H3) is concerned only with identifying differences in the affective response patterns of two groups of performers (as opposed to arriving at a "true" measure of value), this was not considered to be a serious limitation.

As noted earlier, previous attribution research has shown that achievement-oriented attributions are most frequently made to effort, ability, luck, and task difficulty. However, in order to attract a sufficient number of volunteers for this study, the experimental task was specifically chosen and described to prospective participants as one requiring no particular ability beyond an understanding of expected cost calculations (which were reviewed as part of the task introduction). As a result, it seemed


Expectations, Outcomes and Attributions. The second research hypothesis (H2a) predicted that individuals whose expectations and performance outcomes were congruent would make more attributions to stable causes than would those whose expectations and performance outcomes were incongruent. To test this prediction, the responses were divided into two groups based on the relationship between expected pay and the actual dollar payoff. Group 1 (n=27) consisted of those with congruent scores (expected pay = actual pay), while Group 2 (n=53) consisted of those with incongruent scores (expected pay <> actual pay). A MANOVA was once again run using the five causal factors as dependent variables. The results of this omnibus test were not significant at the .05 level (F(5,74) = 1.99, p = .090). Thus, the null hypothesis of no attributional differences between the two groups was not rejected.
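The grouping logic described here, including the later regrouping by direction of disconfirmation, reduces to simple comparisons of expected and actual pay. A sketch with made-up participant records (the study's raw data are not available):

```python
# Hypothetical participant records: expected pay (from self-rated
# percentile expectations) versus actual dollar payoff.
participants = [
    {"expected_pay": 21, "actual_pay": 21},  # congruent
    {"expected_pay": 15, "actual_pay": 5},   # incongruent, fell short
    {"expected_pay": 5,  "actual_pay": 12},  # incongruent, exceeded
]

# H2a grouping: congruent versus incongruent expectations and outcomes.
congruent = [p for p in participants if p["expected_pay"] == p["actual_pay"]]
incongruent = [p for p in participants if p["expected_pay"] != p["actual_pay"]]

# Regrouping by direction of disconfirmation: payoffs that met or exceeded
# expectations versus payoffs that fell short of them.
met_or_exceeded = [p for p in participants if p["actual_pay"] >= p["expected_pay"]]
fell_short = [p for p in participants if p["actual_pay"] < p["expected_pay"]]
```

Note that the first split lumps both directions of disconfirmation into one "incongruent" group, which is exactly the concern the regrouping addresses.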

Data Analysis

Performance Outcomes and Causal Attributions. The first research hypothesis (H1a) predicted that participants who failed would make stronger performance attributions to external factors than would those who succeeded. For purposes of hypothesis testing, respondents were divided into two groups, SUCCESS and FAILURE, based on the actual payoffs they earned. Those who received the $5 minimum (n=51) were considered to have failed, while those who received $5 plus a performance bonus (n=29) were considered to have succeeded.

A MANOVA was run using SUCCESS/FAILURE as the independent variable and the five causal factors (effort, system quality, understanding, luck, and task difficulty) as the dependent variables. The results of omnibus testing were significant (F(5,74) = 7.13, p < .01), indicating differences in the performance attributions of the two groups. Individual ANOVAs were then run for each of the causal factors; the results are shown in Table 1.

The second-level research hypothesis (H2b) predicted that incongruence between expectations and outcomes would lead to stronger attributions to the quality of the computer system. ANOVA was run to determine whether significant differences between the two groups existed with respect to this causal factor considered by itself. This test also yielded insignificant results (F(1,78) = .20, p = .658).

In general, the predictions of attribution theory were borne out. Individuals who were successful made stronger performance attributions to internal causes (effort and understanding) than did those who failed to perform well. On the other hand, those who failed made stronger performance attributions to external causes (system quality and luck) than did those who were successful. Only task difficulty produced results in the opposite direction from that which was predicted: the SUCCESS group made stronger attributions to the external factor, task difficulty, than did the FAILURE group.

Although the predictions derived from attribution theory focus solely on the congruence/incongruence of expectations as a determinant of attributions, grouping participants on this basis results in the "incongruent" group being comprised of both participants who made more than they anticipated and participants who made less than they anticipated. Prior research has in some cases produced different results for these two different types of incongruence [18]. Furthermore, it seems equally plausible that attributional differences may occur as a result of disappointment with one's outcomes relative to expectations (negative disconfirmation), rather than as a result of outcomes that exceeded expectations (positive disconfirmation). To test this possibility, participants were regrouped so that those whose payoffs were equal to or greater than their expectations comprised Group 1 (n=41) and those whose payoffs were less than their expectations comprised Group 2 (n=39). Again, the MANOVA that looked at overall attributional

Table 1 also provides support for the prediction (H1b) that unsuccessful performers would make stronger causal attributions to the quality of the computer system than would those who were successful. This is a particularly significant finding given that both groups solved the same 20 problems using a fictitious computer system. It would appear that the judgments made by these users regarding the quality of the information system were influenced more by the outcome that ensued than by any inherent characteristic of the system itself.

Table 1. ANOVA Table: Outcome by Causal Factors

                        Mean Scores
                     Success   Failure       F        p
INTERNAL FACTORS
  Effort               4.90      3.70      15.74    .0002
  Understanding        4.69      3.90       3.92    .0513
EXTERNAL FACTORS
  Model Quality        3.69      4.33       5.45    .0221
  Difficulty           3.62      3.14       2.50    .1100
  Luck                 4.07      4.82       3.64    .0599


Although both groups had a reasonably positive attitude toward the computer systems after performing the task, the group that performed poorly seems to have developed more negative feelings about the system after finding out the performance results.

patterns failed to produce significant results (F(5,74) = 1.68, p < .16). However, the ANOVA with system quality as the dependent variable yielded significant results (F(1,78) = 6.17, p < .02). Examination of the mean scores of the two groups indicated that participants who made less than they had expected made stronger attributions to system quality than did those whose payoffs were equal to or greater than they had expected (x = 4.44 and x = 3.78, respectively).


Discussion

Consistent with the first research hypothesis and with previous attribution research, the causal factors noted by both successful and unsuccessful performers tended to be hedonically biased. In general, the successful performers made stronger attributions to internal factors (their own effort and understanding), while the unsuccessful performers made stronger attributions to external causes (the quality of the computer system and luck). Task difficulty was the only causal factor that failed to produce the anticipated outcome-dependent attributions; neither of the performance groups viewed the difficulty of the task as contributing significantly to their outcomes.

Outcomes and Value Assessments. The third research hypothesis (H3) predicted that participants who were successful in performing the task would have more positive feelings about the value of the system, regardless of the quality of that system (i.e., despite the fact that no computer system actually existed). Two measures of affect were obtained in the course of the experiment and used in testing the third research hypothesis.
The first measure of perceived system value provided a baseline measure of participants' reactions to the hypothetical computer system immediately after completing the task, but prior to publication of the performance outcomes and, hence, the attributional process. The mean score for the entire participant pool was 3.74 (on a 7-point scale, where 1 = Would Seriously Consider), indicating a mildly positive response to the question, "If you were quality control manager, would you consider purchasing the software package you tested to perform variance investigations?"

On the one hand, these findings strongly suggest the possibility that user assessments of system value can be significantly biased by the outcomes that result from system-supported activities, regardless of the actual contribution of the system to those outcomes. For example, suppose that a financial analyst uses a DSS to assist in the identification and evaluation of several alternative decision strategies. If the analyst enters incorrect parameters which result in a poor recommendation, his evaluation of the DSS will likely be more negative (and therefore more self-serving) than it would have been had he entered the correct parameters and achieved a positive outcome. Although the system may have had little or nothing to do with the outcome that resulted, the analyst may be able to save face or shift the blame by ascribing responsibility for the negative outcome to a "faulty" system.

The responses were again split into two


groups, SUCCESS and FAILURE, based on the eventual
performance outcomes, to test for between group
differences in perceptions of system value prior
to publication of the results. Examination of the
group mean indicated that the FAILURE group had a
slightly more positive attitude toward the system
(x = 3.63) than the SUCCESS group (x = 3.93).
before they were told their scores.
The
significance of the observed difference was tested
using ANOVA and found to be non-significant
(F(1.78) = .69, p = .41), indicating that the
successful and unsuccessful participants had quite
similar feelings about the value of the
(hypothetical)
systems
immediately
after
completing the task.

On the other hand, Snyder et al. [36] argue


that the extent to which performance attributions
are hedonically-oriented is largely a function of
circumstances.
That is, more self-serving
attributions occur when noone else is likely to
dispute the performer's explanation and when one's
future performance is not likely to be subject to
the same level of scrutiny. If these researchers
are correct, the self-serving attributional
tendencies observed here may be somewhat
exaggerated, given the one-shot nature of the
experiment and the fact that the attributional
data
were
gathered
via
a
confidential
questionnaire.
Clearly, more evidence gathered
under varied circumstances is required before a
conclusion can be drawn regarding the effects of
performance outcomes on users' attributional
patterns.

After the performance outcomes and payoff


charts had been posted, participants evaluated the
contribution of the systems they had tested by
indicating, on a 5-point scale, how they thought
the system they used performed relative to the
systems used by other participants. The responses
were again divided into SUCCESS and FAILURE groups.
to test the third hypothesis.
This time, the
ANOVA results were significant (F(1.78) = 7.35, p
< .01). Inspection of the group means indicated'
that participants who were successful believed.
their systems to have been no better or worse than
On the other
those of their peers (x = 3.00).
hand, those who were unsuccessful believed that
the systems they used were in fact worse than
those of their peers (x = 2.61). Thus, although

Contrary to what was anticipated, no significant attributional differences were identified as a result of congruence/incongruence between expectations and outcomes. One reason for the lack of notable differences may have been the fact that, although they had solved somewhat similar problems manually, the participants had never performed this particular task before and therefore had little information on which to base their expectations.

Ambiguity of this kind may result in the system being blamed for things that have no other evident explanation. In addition, despite efforts to thoroughly test them prior to installation, new technologies are often less than completely reliable when first implemented. When problems occur with a new installation, it is not at all surprising to find that the cause is a "bug" in the system. If the problems persist, however, users may begin to assume that every unexpected occurrence is caused by a system problem. Thus, users' natural tendencies to attribute problems to the system may be exacerbated by the realities of new installations.

Although expectations did not have as strong an effect as was predicted, those who experienced negative disconfirmation (who expected to do better than they did) made considerably stronger causal attributions to the quality of the computer system than did those whose outcomes were equal to or better than expected. This outcome is not entirely consistent with the predictions of attribution theory, since positive disconfirmation did not lead to attributions to unstable causes. It is, however, consistent with the confirmation/disconfirmation paradigm which has been used to explain consumer satisfaction.

This investigation also raises questions about the causal arrow linking user satisfaction and performance implied by previous IS research. Although the findings reported here must be subjected to further examination in other settings, they suggest that the direction of causation may in some cases be reversed: performance outcomes may lead to feelings of satisfaction or dissatisfaction for system users when the relationship between system quality and decision performance is not well understood. These results suggest the need for a better theoretical model of user satisfaction that clearly specifies the expected relationship between satisfaction and performance as well as taking into account both the situational context and other important variables, such as usage, that may have relevance to the model. Without a strong foundation in theory, measures of user satisfaction will continue to be controversial and difficult to interpret across studies.

Studies of the process by which consumers develop feelings of either satisfaction or dissatisfaction with a product have demonstrated that product performance which exceeds a pre-established standard (e.g., prior expectations) results in positive disconfirmation and feelings of satisfaction with the product. Performance that falls short of the standard creates negative disconfirmation and feelings of dissatisfaction [7]. In this experiment, participants who experienced negative disconfirmation not only made stronger causal attributions to the computer system, but they also altered their earlier assessments of their computer systems, judging their systems to be inferior to those of their competitors after the performance results had been published.
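The confirmation/disconfirmation comparison described above amounts to a simple classification of an outcome against a prior standard. The following sketch (with hypothetical payoff values, not the study's data) makes the logic explicit:

```python
def disconfirmation(expectation, outcome):
    """Classify an outcome against a prior expectation, following the
    confirmation/disconfirmation paradigm of consumer satisfaction research."""
    if outcome > expectation:
        return "positive disconfirmation"  # outcome exceeds the standard
    if outcome < expectation:
        return "negative disconfirmation"  # outcome falls short of the standard
    return "confirmation"                  # outcome matches the standard

# A participant who expected a payoff of 500 but earned only 350
print(disconfirmation(expectation=500, outcome=350))  # negative disconfirmation
```

In the experiment, it was the negative-disconfirmation group so classified that shifted both its causal attributions and its value judgments against the system.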
CONCLUSION

The fact that users in this experiment tended to discount the contribution of the computer system when things went well and to blame the system when things went poorly seems to suggest that user satisfaction may be a less than adequate surrogate for system effectiveness when the actual contribution of the system is ambiguous or difficult to quantify from the user's perspective. For example, it may be particularly troublesome as a surrogate for effectiveness when users are inexperienced at performing the task in question, do not have a good understanding of how the system actually works, or are otherwise unable to judge the impact system use has had on their outcomes. Expert systems and some types of decision support systems that are specifically designed to aid novice users may be especially vulnerable to the problem of self-serving causal attributions if users expect that the answers provided by the system will necessarily be "right."

New technologies may also be problematic from an evaluation standpoint, given the natural tendencies of those who perform poorly to make external causal attributions. First of all, the use of new technologies may create a situation that is high in ambiguity for the users, if they are unable to clearly associate specific outcomes with specific causes. As a result, users may quickly develop attributional patterns that result in the system being blamed for problems it did not cause.

REFERENCES

[1] Ajzen, I. and Fishbein, M. (1983). "Relevance and availability in the attribution process," in J. Jaspars, F. Fincham and M. Hewstone (eds.), Attribution Theory and Research: Conceptual, Developmental and Social Dimensions, (London: Academic Press): 63-89.
[2] Arkin, R., Appelman, D. and Burger, J. (1980). "Social anxiety, self-presentation, and the self-serving bias in interpersonal inference situations," Journal of Personality and Social Psychology, 38: 23-35.
[3] Bettman, J. and Weitz, B. (1983). "Attributions in the board room: Causal reasoning in corporate annual reports," Administrative Science Quarterly, 28: 165-183.
[4] Birnberg, J., Frieze, I. and Shields, M. (1977). "The role of attribution theory in control systems," Accounting, Organizations and Society, 2, 3: 189-200.
[5] Bradley, G. (1978). "Self-serving biases in the attribution process: A re-examination of the fact or fiction question," Journal of Personality and Social Psychology, 36: 56-71.
[6] Bradley-Weary, G. (1978). "Self-serving biases in the attribution process: A re-examination of the fact or fiction problem," Journal of Personality and Social Psychology, 36: 56-71.
[7] Cadotte, E., Woodruff, R. and Jenkins, R. (1987). "Expectations and norms in models of consumer satisfaction," Journal of Marketing Research, 24, 8: 305-314.
[8] Covington, M., Spratt, M. and Omelich, C. (1980). "Is effort enough or does diligence count too? Student and teacher reactions to effort stability in failure," Journal of Educational Psychology, 72: 717-729.
[9] Cyert, R. and March, J. (1963). A Behavioral Theory of the Firm, (Englewood Cliffs, NJ: Prentice-Hall).
[10] Davis, J. and Srinivasan, A. (1988). "Incorporating user diversity into information systems assessment," in N. Bjorn-Andersen and G. Davis (eds.), Information Systems Assessment, (Amsterdam: North-Holland): 83-98.
[11] Elig, T. and Frieze, I. (1979). "Measuring causal attributions for success and failure," Journal of Personality and Social Psychology, 37, 4: 621-634.
[12] Feather, N. and Simon, J. (1971). "Causal attributions for success and failure in relation to expectations of success based upon selective or manipulative control," Journal of Personality, 39: 527-541.
[13] Feather, N. and Simon, J. (1972). "Causal attributions for success and failure in relation to initial confidence and success and failure of self and other," Journal of Personality and Social Psychology, 18: 173-188.
[14] Folkes, V. (1984). "Consumer reactions to product failure: An attributional approach," Journal of Consumer Research, 10: 398-409.
[15] Ford, J. (1985). "The effects of causal attributions on decision makers' responses to performance downturns," Academy of Management Review, 10, 4: 770-786.
[16] Frieze, I. (1976). "Causal attributions and information seeking to explain success and failure," Journal of Research in Personality, 10: 293-305.
[17] Frieze, I. and Weiner, B. (1971). "Cue utilization and attributional judgments for success and failure," Journal of Personality, 39: 591-605.
[18] Ginzberg, M. (1981). "Early diagnosis of MIS implementation failure: Promising results and unanswered questions," Management Science, 27, 4.
[19] Goodhue, D. (1986). "IS attitudes: Toward theoretical and definition clarity," Proceedings of the Seventh International Conference on Information Systems, San Diego, CA: 181-194.
[20] Hamilton, V., Blumenfeld, P. and Kushler, R. (1988). "A question of standards: Attributions of blame and credit for classroom acts," Journal of Personality and Social Psychology, 54, 1: 34-48.
[21] Harvey, J. and Weary, G. (1981). Perspectives on Attributional Processes, (Dubuque, IA: Wm. C. Brown).
[22] Heider, F. (1944). "Social perception and phenomenal causality," Psychological Review, 51: 358-384.
[23] Heider, F. (1958). The Psychology of Interpersonal Relations, (New York: Wiley).
[24] Hirschheim, R. and Smithson, S. (1988). "A critical analysis of information systems evaluation," in N. Bjorn-Andersen and G. Davis (eds.), Information Systems Assessment, (Amsterdam: North-Holland): 17-37.
[25] Ives, B., Olson, M. and Baroudi, J. (1983). "The measurement of user information satisfaction," Communications of the ACM, 26, 10: 785-793.
[26] Kelley, H. (1973). "The processes of causal attribution," American Psychologist, 28: 107-128.
[27] Kelley, H. and Michela, J. (1980). "Attribution theory and research," in M.R. Rosenzweig and L.W. Porter (eds.), Annual Review of Psychology, (Palo Alto, CA: Annual Reviews, Inc.): 457-501.
[28] Kovenklioglu, G. and Greenhaus, J. (1978). "Causal attributions, expectations and task performance," Journal of Applied Psychology, 63: 698-705.
[29] McMahon, I. (1973). "Relationships between causal attributions and expectancy of success," Journal of Personality and Social Psychology, 28, 1: 108-114.
[30] Miller, D. (1976). "Ego involvement and attributions for success and failure," Journal of Personality and Social Psychology, 34: 901-906.
[31] Miller, D. and Ross, M. (1975). "Self-serving biases in the attribution of causality," Psychological Bulletin, 82: 213-225.
[32] Pyszczynski, T. and Greenberg, J. (1985). "Social comparison after success and failure: Biased search for information consistent with a self-serving conclusion," Journal of Experimental Social Psychology, 21: 195-211.
[33] Riess, M., Rosenfeld, P., Melburg, V. and Tedeschi, J. (1981). "Self-serving attributions: Biased private perceptions and distorted public descriptions," Journal of Personality and Social Psychology, 41: 224-231.
[34] Rosenbaum, R. (1972). A Dimensional Analysis of the Perceived Causes of Success and Failure. Unpublished doctoral dissertation, University of California, Los Angeles.
[35] Shields, M., Birnberg, J. and Frieze, I. (1981). "Attributions, cognitive processes and control systems," Accounting, Organizations and Society, 6, 1: 69-93.
[36] Snyder, M., Stephan, W. and Rosenfield, D. (1978). "Attributional egotism," in J. Harvey, W. Ickes, and R. Kidd (eds.), New Directions in Attribution Research, (Hillsdale, NJ: Lawrence Erlbaum): 91-117.
[37] Srinivasan, A. (1985). "Alternative measures of system effectiveness: Associations and implications," MIS Quarterly, 9, 3: 243-253.
[38] Staw, B., McKechnie, P. and Puffer, S. (1983). "The justification of organizational performance," Administrative Science Quarterly, 28: 582-600.
[39] Streufert, S. and Streufert, S.C. (1969). "Effects of conceptual structure, failure and success on attribution of causality and interpersonal attitudes," Journal of Personality and Social Psychology, 11: 138-147.
[40] Weiner, B. (1979). "A theory of motivation for some classroom experiences," Journal of Educational Psychology, 71: 3-25.
[41] Weiner, B. (1985). "An attributional theory of achievement motivation and emotion," Psychological Review, 92, 4: 548-573.
[42] Weiner, B. (1986). An Attributional Theory of Motivation and Emotion, (New York: Springer-Verlag).
[43] Weiner, B., Frieze, I., Kukla, A., Reed, L., Rest, S. and Rosenbaum, R. (1971). Perceiving the Causes of Success and Failure, (New York: General Learning Press).
[44] Zuckerman, M. (1979). "Attribution of success and failure revisited, or: The motivational bias is alive and well in attribution theory," Journal of Personality, 47: 245-287.