EUROPEAN JOURNAL OF OPERATIONAL RESEARCH
ELSEVIER

Decision Sciences and Information Systems Department, Florida International University, Miami, FL 33199, USA
Decision and Information Science Department, Oakland University, Rochester, MI 48309, USA
Marketing Department, Fairleigh Dickinson University, Teaneck, NJ 07666, USA

Received 18 February 1997
Abstract
Several methods have been proposed for solving multi-attribute decision making problems (MADM). A major criticism
of MADM is that different techniques may yield different results when applied to the same problem. The problem
considered in this study consists of a decision matrix input of N criteria weights and ratings of L alternatives on each
criterion. The comparative performance of some methods has been investigated in a few, mostly field, studies. In this
simulation experiment we investigate the performance of eight methods: ELECTRE, TOPSIS, Multiplicative Exponential
Weighting (MEW), Simple Additive Weighting (SAW), and four versions of AHP (original vs. geometric scale and right
eigenvector vs. mean transformation
solution). Simulation parameters are the number of alternatives, criteria and their
distribution. The solutions are analyzed using twelve measures of similarity of performance. Similarities and differences in
the behavior of these methods are investigated. Dissimilarities in weights produced by these methods become stronger in
problems with few alternatives; however, the corresponding final rankings of the alternatives vary across methods more in
problems with many alternatives. Although less significant, the distribution of criterion weights affects the methods
differently. In general, all AHP versions behave similarly and closer to SAW than the other methods. ELECTRE is the least
similar to SAW (except for closer matching the top-ranked alternative), followed by MEW. TOPSIS behaves closer to AHP
and differently from ELECTRE and MEW, except for problems with few criteria. A similar rank-reversal experiment
produced the following performance order of methods: SAW and MEW (best), followed by TOPSIS, AHPs and ELECTRE.
It should be noted that the ELECTRE version used was adapted to the common MADM problem and therefore did not take advantage of the method's capabilities in handling problems with ordinal or imprecise information. © 1998 Elsevier Science B.V. All rights reserved.
Keywords: Decision ...
1. Introduction
* Corresponding author. Fax: +1-305-348-4126; e-mail: zanakis@servms.fiu.edu.

Multiple criteria decision making (MCDM) refers to making decisions in the presence of multiple, usually conflicting, criteria. MCDM problems are
commonly categorized as continuous or discrete, depending on the domain of alternatives. Hwang and
Yoon (1981) classify them as (i) Multiple Attribute
Decision Making (MADM), with discrete, usually
limited, number of prespecified alternatives, requiring inter- and intra-attribute comparisons, involving
implicit or explicit tradeoffs; and (ii) Multiple Objective Decision Making (MODM), with decision variable values to be determined in a continuous or integer domain, with an infinite or very large number of choices, to best satisfy the DM's constraints, preferences or priorities. MADM methods have also been used for
combining
good MODM solutions based on DM
preferences (Kok, 1986; Kok and Lootsma, 1985).
In this paper we focus on MADM, which is used in finite selection or choice problems. In the literature, the term MCDM is often used to indicate MADM, and sometimes MODM, methods. To avoid ambiguity we will henceforth use the term MADM when referring to a discrete MCDM problem. Methods involving only the ranking of discrete alternatives with equal criteria weights, like voting choices, will not be examined in this paper.
Churchman et al. (1957) were among the first academicians to look at the MADM problem formally, using a simple additive weighting method.
Over the years different behavioral scientists, operational researchers and decision theorists have proposed a variety of methods describing how a DM
might arrive at a preference judgment when choosing
among multiple attribute alternatives. For a survey of
MCDM methods and applications see Stewart (1992)
and Zanakis et al. (1995).
Gershon and Duckstein (1983) state that the major
criticism of MADM methods is that different techniques yield different results when applied to the
same problem, apparently under the same assumptions and by a single DM. Comparing 23 cardinal
and 9 qualitative aggregation methods, Voogd (1983)
found that, at least 40% of the time, each technique
produced a different result from any other technique.
The inconsistency in such results occurs because:
(a) the techniques use weights differently in their calculations;
(b) algorithms differ in their approach to selecting the best solution;
(c) many algorithms attempt to scale the objectives, which affects the weights already chosen;
(d) some algorithms introduce additional parameters that affect which solution will be chosen.
This is compounded by the inherent differences in experimental conditions and in human information processing between DMs, even under similar preferences. Other researchers have argued the opposite;
namely that, given a type of problem, the solutions
obtained by different MADM methods are essentially the same (Belton, 1986; Timmermans
et al.,
1989; Karni et al., 1990; Goicoechea et al., 1992;
Olson et al., 1995). Schoemaker and Waid (1982) found that different additive utility models produce generally different weights, but predict equally well on average. Practitioners seem to prefer simple
and transparent methods, which, however, are unlikely to represent weight trade-offs that users are
willing to make (Hobbs et al., 1992).
The wide variety of available techniques, of varying complexity and possibly yielding different solutions, confuses potential users. Several MADM methods may appear to
be suitable for a particular decision problem. Hence
the user faces the task of selecting the most appropriate method from among several alternative feasible
methods.
The need for comparing MCDM methods and the
importance of the selection problem were probably
first recognized by MacCrimmon (1973), who suggested a taxonomy of MCDM methods. More recently, several authors have outlined procedures for the selection of an appropriate MCDM method, such as Ozernoy (1992), Hwang and Yoon (1981), Hobbs (1986) and Ozernoy (1987). These classifications are
primarily driven by the input requirements
of the
method (type of information that the DM must provide and the form in which it must be provided).
Very often these classifications
serve more as a tool
for elimination
rather than selection of the right
method. The use of expert systems has also been
advocated for selecting MCDM methods (Jelassi and
Ozernoy, 1988).
Our literature search revealed that only a limited amount of work has been done on comparing and integrating the different methods. Denpontin et al. (1983) developed a comprehensive catalogue of the different methods, but concluded that it was difficult to fit the methods into a classification schema, since decision studies varied so much in the quantity, quality and precision of their information.
Many authors
stress the validity of the method as the key criterion
for choosing it. Validity implies that the method is
likely to yield choices that accurately reflect the
values of the user (Hobbs et al., 1992). However
Triantaphyllou and Mann (1989) simulated random AHP matrices of 3-21 criteria and alternatives. Each problem was solved using four methods: the weighted sum model (WSM), the weighted product model (WPM), right-eigenvector AHP, and AHP revised by normalizing each column by its maximum rather than the sum of its elements, following Belton and Gear's (1984) suggestion for reducing rank reversals. Solutions were compared against the WSM benchmark, along with the rate of change in the best alternative when a nonoptimal alternative is replaced by a worse one. They concluded that the revised AHP appears to
perform closest to the WSM; AHP tends to behave
like WSM as the number of alternatives increases;
and that the rate of change does not depend on the
number of criteria.
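The AHP variants compared in such studies differ along two dimensions: the judgment scale (Saaty's original 1-9 scale vs. a geometric scale) and the prioritization rule (right eigenvector vs. mean transformation). As a minimal sketch of the prioritization step only — assuming the mean transformation rule is the usual column-normalize-and-row-average approximation, and with invented comparison matrix values — the two rules can be written as:

```python
import numpy as np

def ahp_right_eigenvector(A):
    """Weights from the principal right eigenvector of a pairwise
    comparison matrix A, normalized to sum to one."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()

def ahp_mean_transformation(A):
    """Approximate weights: normalize each column by its sum,
    then average across each row (assumed reading of 'MTM')."""
    return (A / A.sum(axis=0)).mean(axis=1)

# Invented 3x3 reciprocal comparison matrix on Saaty's 1-9 scale
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_right_eigenvector(A))    # roughly [0.65, 0.23, 0.12]
print(ahp_mean_transformation(A))
```

For a perfectly consistent comparison matrix the two rules give identical weights; they diverge as inconsistency grows.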
The first two studies are limited to a single AHP
matrix; i.e. different methods for deriving weights
only for the criteria or only for the alternatives under
a single criterion - not simultaneously
for the entire
MADM problem. And all three are limited to variants of the AHP. A further limitation of the third study is that it employs only two measures of performance: the percentage contradiction between a method's rankings and those of WSM, and the rate of rank reversal of the top priority. There is clearly a need for a simulation study that also compares other types of MADM methods, using various measures of performance.
Our work in that regard is explained in the next
section. The MADM problem under consideration is depicted by the following DM matrix of preferences for $L$ alternatives rated on $N$ criteria:

$$
\begin{array}{c|cccccc}
\text{Alternative} & C_1 & C_2 & \cdots & C_j & \cdots & C_N \\
\hline
1 & r_{11} & r_{12} & \cdots & r_{1j} & \cdots & r_{1N} \\
2 & r_{21} & r_{22} & \cdots & r_{2j} & \cdots & r_{2N} \\
\vdots & \vdots & \vdots & & \vdots & & \vdots \\
i & r_{i1} & r_{i2} & \cdots & r_{ij} & \cdots & r_{iN} \\
\vdots & \vdots & \vdots & & \vdots & & \vdots \\
L & r_{L1} & r_{L2} & \cdots & r_{Lj} & \cdots & r_{LN}
\end{array}
$$
2. Methods compared
Of the many MADM methods available we have
chosen the following five for comparison
in our
research, when applied to solve the same problem
with the decision matrix information stated earlier:
1. Simple Additive Weighting (SAW): $S_i = \sum_j w_j r_{ij}$.
2. Multiplicative Exponent Weighting (MEW): $S_i = \prod_j r_{ij}^{w_j}$ (a short sketch of both follows this list).
3. Analytic Hierarchy Process (AHP) - four versions.
4. ELECTRE.
5. TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution).
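To make the first two definitions concrete, here is a minimal numpy sketch of SAW and MEW scoring; the ratings matrix and weights are illustrative assumptions, and the ratings are taken as already normalized benefit values:

```python
import numpy as np

def saw(R, w):
    """Simple Additive Weighting: S_i = sum_j w_j * r_ij.
    Assumes R (L x N) holds normalized benefit ratings."""
    return R @ w

def mew(R, w):
    """Multiplicative Exponent Weighting: S_i = prod_j r_ij ** w_j.
    Only the ratios of the alternatives' ratings matter."""
    return np.prod(R ** w, axis=1)

# Illustrative problem: L = 4 alternatives, N = 3 criteria
R = np.array([[0.7, 0.5, 0.9],
              [0.6, 0.8, 0.4],
              [0.9, 0.3, 0.6],
              [0.5, 0.9, 0.7]])
w = np.array([0.5, 0.3, 0.2])   # criterion weights summing to 1
print(saw(R, w))                # rank alternatives by descending score
print(mew(R, w))
```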
The rationale for our selection is that most of these are among the more popular and widely used methods, and each reflects a different approach to solving MADM problems. SAW's simplicity makes it very popular with practitioners (Hobbs et al., 1992; Zanakis et al., 1995). MEW is a theoretically attractive contrast to SAW; however, it has not been applied often, because practitioners find its multiplicative form unappealing, despite its scale-invariance property (it depends only on the ratios of the alternatives' ratings). TOPSIS (Hwang and Yoon, 1981) is an exception in that it is not widely used; we have included it because it is unique in the way it approaches the problem and is intuitively appealing
and easy to understand. Its fundamental premise is that the best alternative, say the $i$th, should have the shortest Euclidean distance $S_i^{+} = \bigl[\sum_j (r_{ij} - r_j^{*})^2\bigr]^{1/2}$ from the ideal solution ($r_j^{*}$, made up of the best value for each attribute regardless of alternative) and, by extension, the farthest distance from the negative-ideal solution.
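A minimal sketch of the TOPSIS computation as just described — Euclidean normalization, weighting, distances to the ideal and negative-ideal points, and the usual relative-closeness score — assuming all criteria are benefits:

```python
import numpy as np

def topsis(R, w):
    """TOPSIS relative closeness to the ideal solution."""
    Z = R / np.linalg.norm(R, axis=0)     # Euclidean column norms
    V = Z * w                             # weighted normalized matrix
    ideal = V.max(axis=0)                 # best value per criterion
    nadir = V.min(axis=0)                 # worst value per criterion
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - nadir, axis=1)
    return d_minus / (d_plus + d_minus)   # larger = preferred

R = np.array([[7., 5., 9.],
              [6., 8., 4.],
              [9., 3., 6.]])
w = np.array([0.5, 0.3, 0.2])
print(topsis(R, w))
```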
... additive aggregation to global weights. Normalization of the decision matrix is necessary to handle different types of attributes (e.g. benefits vs. costs) in all methods except ELECTRE, which can also handle ordinal or descriptive (imprecise) information and criteria importances that do not add up to one. TOPSIS uses the Euclidean norm to normalize the decision matrix, while the regular AHP normalizes weights by dividing them by their sum. ELECTRE's output differs from that of the other methods in that it does not provide a global preference ordering of the alternatives, but a partial (sometimes complete) ranking of them. In that sense, ELECTRE results can be compared to the final ranking of alternatives produced by the other methods. This common denominator approach overlooks some of ELECTRE's advantages in dealing with different or less precise situations via binary relationships (a rough outranking sketch follows).
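As a rough sketch of the outranking idea only — not the exact ELECTRE version adapted in this study — a crisp concordance/discordance rule can be written as below; the thresholds are invented:

```python
import numpy as np

def electre_outranking(R, w, c_star=0.7, d_star=0.3):
    """Crisp outranking: alternative a outranks b when the weighted
    share of criteria on which a is at least as good (concordance)
    reaches c_star, and a's worst relative disadvantage against b
    (discordance) stays below d_star."""
    L, N = R.shape
    span = R.max(axis=0) - R.min(axis=0)   # per-criterion range;
                                           # assumes no constant criterion
    S = np.zeros((L, L), dtype=bool)
    for a in range(L):
        for b in range(L):
            if a == b:
                continue
            concordance = w[R[a] >= R[b]].sum() / w.sum()
            discordance = float(np.max((R[b] - R[a]) / span))
            S[a, b] = (concordance >= c_star) and (discordance <= d_star)
    return S   # S[a, b] == True means "a outranks b"
```

The boolean matrix S defines only a binary outranking relation, which is why ELECTRE yields a partial rather than a global ranking.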
However, it is of
interest in building computerized evaluation and DSS
(Pomerol, 1993) for handling the common problem
defined by the earlier decision matrix; namely, a
decision matrix of explicitly rated alternatives and
criteria weights. Many MCDA methods have been developed over the years, but little is known about their relative merits on similar problems. Surveys of MCDM research status point to the need for more validation studies, for choosing aggregation procedures based on problem characteristics, and for simple, understandable and usable approaches to solving MCDM and MAUT problems (Dyer et al., 1992; Stewart, 1992).
The methods examined in this experiment have
been contrasted in field studies by other researchers.
Olson et al. (1995) used a single problem to examine
how a group of students used and compared software
implementing MAUT, SAW, AHP and ZAPROS - a
procedure of ordinal tradeoffs with additive value
function, whose parameters are not explicitly determined. Several other field studies (but no simulation
study) have compared ELECTRE to one or more of
the other methods. Karni et al. (1990) concluded that
ELECTRE, AHP and SAW rankings did not differ
significantly in three real life case studies. Lootsma (1990) contrasted AHP and ELECTRE as representing the American and French schools of MCDA thought, and found them to be unexpectedly close to each other. In extensive field studies, Hobbs et al. (1992) and Goicoechea et al. (1992) had graduate students
3. Simulation experiment
According to Hobbs et al. (1992), a good experiment should satisfy the following conditions:
(a) It compares methods that are widely used, represent divergent philosophies of decision making, or claim to represent important methodological improvements.
(b) It addresses the questions of appropriateness, ease of use and validity.
(c) It is well controlled, uses large samples, and is replicable.
(d) It compares methods across a variety of problems.
(e) The problems involved are realistic.
Our simulation experiment satisfies all conditions except the second one.
Computer simulation was used for the purpose of
comparing
the MADM methods. The reason for
using simulation was that it is a flexible and versatile
method which allows us to generate a range of
problems, and replicate them several times. This
provides a vast database of results from which we
can study the patterns of solutions provided by the
different methods.
The following parameters were chosen for our simulation:
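As a hedged sketch of instance generation — the factor levels below (L in {3, 5, 7, 9}, N in {5, 10, 15, 20}) are read off the figure legends later in the paper, the distribution names (uniform, beta, equal constant) from the text, while the beta parameters and seed are invented:

```python
import numpy as np

rng = np.random.default_rng(seed=1997)

def generate_problem(L, N, dist="uniform"):
    """One random MADM instance: an L x N ratings matrix plus a
    criterion weight vector normalized to sum to one."""
    if dist == "uniform":
        R = rng.uniform(size=(L, N))
        w = rng.uniform(size=N)
    elif dist == "beta":                 # skewed ratings and weights
        R = rng.beta(2.0, 5.0, size=(L, N))
        w = rng.beta(2.0, 5.0, size=N)
    else:                                # "equal": constant weights
        R = rng.uniform(size=(L, N))
        w = np.ones(N)
    return R, w / w.sum()

# Assumed factor levels, suggested by the figure legends:
for L in (3, 5, 7, 9):
    for N in (5, 10, 15, 20):
        R, w = generate_problem(L, N, dist="uniform")
```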
$$
\mathrm{WRC} = \frac{\sum_{i=1}^{L} W_i \left| R_i^{\mathrm{SAW}} - R_i^{\mathrm{METH}} \right|}{\sum_{i=1}^{L} W_i},
$$

where

$$
W_i = L + 1 - i, \quad i = 1, 2, \ldots, L \quad \text{for WRC1}, \qquad
W_i = 1/i, \quad i = 1, 2, \ldots, L \quad \text{for WRC2}.
$$
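A minimal sketch of the two weighted rank measures defined above, assuming the rank vectors are listed in SAW rank order and reading the denominator as the sum of the weights:

```python
import numpy as np

def wrc(r_saw, r_meth, scheme=1):
    """Weighted rank difference between SAW's ranking and another
    method's; WRC1 uses W_i = L + 1 - i, WRC2 uses W_i = 1/i, so
    disagreements near the top of the ranking count more."""
    L = len(r_saw)
    i = np.arange(1, L + 1)
    W = (L + 1 - i) if scheme == 1 else 1.0 / i
    diff = np.abs(np.asarray(r_saw) - np.asarray(r_meth))
    return float((W * diff).sum() / W.sum())

# Alternatives listed in SAW rank order (illustrative ranks)
print(wrc([1, 2, 3, 4], [2, 1, 3, 4], scheme=1))  # WRC1
print(wrc([1, 2, 3, 4], [2, 1, 3, 4], scheme=2))  # WRC2
```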
4. Analysis of experimental results
Table 1. Summary of ANOVA significance levels (p-values) for the main factors (L, V, METH, N) and their interactions (L*V, L*METH, N*L, V*METH, N*V, N*METH, N*L*V, N*L*METH, N*V*METH, L*V*METH, N*L*V*METH), across the measures KWC, MATCH%, WRC1, WRC2, SRC, MSER, MAER, MSEW, MAEW, UW and UR. Almost all entries are significant at the 0.0001 level; the few exceptions lie between 0.001 and 0.1, and a handful of cells are blank.
Table 2. Summary of ANOVA significance levels for the same factors and interactions, across the measures MATCH%, WRC1, WRC2, SRC, MSER and MAER (rank reversal experiment). Again, almost all entries are significant at the 0.0001 level.
Table 3. Summary of Kruskal-Wallis nonparametric ANOVA significance levels by factor (alternatives, criteria, distribution, method), across the measures SRC, MSER, MAER, UR, WRC1, WRC2, MAEW, MSEW, UW, KWC and MATCH%. All four factors are significant for virtually every measure, mostly at the 0.0001 level.
Table 4. Summary of Kruskal-Wallis nonparametric ANOVA significance levels by factor (alternatives, criteria, distribution, method) for the rank reversal experiment, across the measures SRC, MSER, MAER, WRC1, WRC2 and MATCH%. All listed entries are significant at the 0.0001 level.
Table 5. Tukey tests on differences between methods: mean values and Tukey groupings of the twelve similarity-to-SAW measures (WRC1, WRC2, SRC, KWC, MAEW, MSEW, MSER, MAER, UW, UR, TOP and MATCH%) for the seven methods compared with SAW (the four AHP versions, MEW, TOPSIS and ELECTRE). On nearly every measure the four AHP versions fall in the same Tukey group, closest to SAW; ELECTRE and MEW are the most dissimilar from SAW (with ELECTRE closer on matching the top-ranked alternative), and TOPSIS lies in between.
Effect of number of criteria (N): Most performance measures (MAER, MSER, SRC, KWC, UR, WRC1, WRC2) for most methods changed only slightly with N, but significantly according to ANOVA. This ... any other method, resulting in larger WRCs, regardless of the number of alternatives. The change in L affects each AHP version in the same way. See Figs. 1-6.
[Fig.: performance measure by method; series for L = 3, 5, 7 and 9.]
weights of alternatives, as implied by the somewhat smaller KWC. However, differences in the final weights for alternatives were larger in problems with fewer criteria, as evidenced by increased MAEW, MSEW, UW and lower KWC. TOPSIS behaved
... alternative weight differences between methods. Surprisingly, however, final weight dissimilarities between methods were higher under the uniform than under the beta
distribution. In the case of AHP, the uniform distribution differentiates its final rankings and weights from SAW slightly more when the original scale is used rather than the geometric scale. TOPSIS final rankings differ from those of SAW most (least) under the beta (equal constant) distribution. ELECTRE and MEW differentiate their final
Similar analyses were performed on the rank reversal experimental results. Here each method's results reveal that all factors (number of alternatives, number of criteria, distribution and method), and most of their interactions, are highly significant (Tables 2 and 4).
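In the spirit of the rank reversal test described in Section 1 (replacing a nonoptimal alternative by a worse one and checking whether the best alternative changes), a hedged sketch of such a probe — the perturbation scheme below is an assumption, not the paper's exact design — might be:

```python
import numpy as np

def top_rank_reversal(score_fn, R, w, rng):
    """Return True if replacing a nonoptimal alternative with a
    strictly dominated copy of itself changes the top-ranked
    alternative under score_fn (e.g. saw, mew or topsis from the
    earlier sketches)."""
    best = int(np.argmax(score_fn(R, w)))
    victim = int(np.argmin(score_fn(R, w)))   # a nonoptimal alternative
    R2 = R.copy()
    R2[victim] *= rng.uniform(0.5, 0.9, size=R.shape[1])  # worse everywhere
    return int(np.argmax(score_fn(R2, w))) != best

rng = np.random.default_rng(0)
R = rng.uniform(0.1, 1.0, size=(7, 5))
w = rng.dirichlet(np.ones(5))
# Under the SAW and MEW definitions sketched earlier, the perturbation
# cannot change the leader, so the probe returns False for them.
```

That SAW and MEW are immune to this perturbation is consistent with the reported ordering: SAW and MEW (best), followed by TOPSIS, the AHPs and ELECTRE.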
Table 6. Rank reversal experiment: Tukey tests on differences between methods, with mean values and Tukey groupings for the measures WRC1, MSER, MAER, TOP and MATCH%. The TOP and MATCH% columns are:

Method                          TOP      MATCH%
SAW                             1.0      1.0
AHP, original, eigenvector      0.9258   0.8584
AHP, geometric, eigenvector     0.9235   0.8544
AHP, original, MTM              0.9258   0.8590
AHP, geometric, MTM             0.9165   0.8551
MEW                             1.0      1.0
TOPSIS                          0.9531   0.9005
ELECTRE                         0.4402   0.7501

SAW and MEW exhibit no rank reversals, followed by TOPSIS, the AHP versions and, last, ELECTRE.
[Fig.: rank reversal performance by method, by number of alternatives.]
reversal performance measures (larger TOP, MATCH% and SRC, and smaller RMSER, RMAER, WRC1 and WRC2). The rank reversal performance of each AHP version was statistically not different
[Fig.: rank reversal performance by method; series for N = 5, 10, 15 and 20.]
This simulation experiment evaluated eight MADM methods (including four versions of AHP) under different numbers of alternatives (L), criteria (N) and distributions. The final results are affected by these three factors in that order. In general, as the number of alternatives increases, the methods tend to produce similar final weights but dissimilar rankings, and more rank reversals (fewer top rank reversals for ELECTRE). The number of criteria had little effect on the AHPs, MEW and ELECTRE. TOPSIS rankings differ from those of SAW more when N is
References
Belton, V., 1986. A comparison of the analytic hierarchy process and a simple multi-attribute value function. European Journal of Operational Research 26, 7-21.
Belton, V., Gear, T., 1984. The legitimacy of rank reversal - A comment. Omega 13, 143-144.
Buchanan, J.T., Daellenbach, H.G., 1987. A comparative evaluation of interactive solution methods for multiple objective decision models. European Journal of Operational Research 29, 353-359.
Churchman, C.W., Ackoff, R.L., Arnoff, E.L., 1957. Introduction to Operations Research. Wiley, New York.
Currim, I.S., Sarin, R.K., 1984. A comparative evaluation of multiattribute consumer preference models. Management Science 30, 543-561.
Denpontin, M., Mascarola, H., Spronk, J., 1983. A user oriented listing of MCDM. Revue Belge de Recherche Operationnelle 23, 3-11.
Dyer, J., 1990. Remarks on the analytic hierarchy process. Management Science 36, 249-258.
Dyer, J., Fishburn, P., Steuer, R., Wallenius, J., Zionts, S., 1992. Multiple criteria decision making, multiattribute utility theory: The next ten years. Management Science 38, 645-654.
Gemunden, H.G., Hauschildt, J., 1985. Number of alternatives and efficiency in different types of top-management decisions. European Journal of Operational Research 22, 178-190.
Gershon, M.E., Duckstein, L., 1983. Multiobjective approaches to river basin planning. Journal of Water Resource Planning 109, 13-28.
Goicoechea, A., Stakhiv, E.Z., Li, F., 1992. Experimental evaluation of multiple criteria decision making models for application to water resources planning. Water Resources Bulletin 28, 89-102.
Gomes, L.F.A.M., 1989. Comparing two methods for multicriteria ranking of urban transportation system alternatives. Journal of Advanced Transportation 23, 217-219.
Harker, P.T., Vargas, L.G., 1990. Reply to "Remarks on the analytic hierarchy process" by J.S. Dyer. Management Science 36, 269-273.
Hobbs, B.F., 1986. What can we learn from experiments in multiobjective decision analysis. IEEE Transactions on Systems, Man, and Cybernetics 16, 384-394.
Hobbs, B.F., Chankong, V., Hamadeh, W., Stakhiv, E., 1992. Does choice of multicriteria method matter? An experiment in water resource planning. Water Resources Research 28, 1767-1779.
Hwang, C.L., Yoon, K.L., 1981. Multiple Attribute Decision Making: Methods and Applications. Springer-Verlag, New York.
Jelassi, M.T.J., Ozernoy, V.M., 1988. A framework for building an expert system for MCDM models selection. In: Lockett, A.G., Islei, G. (Eds.), Improving Decision Making in Organizations. Springer-Verlag, New York, pp. 553-562.
Karni, R., Sanchez, P., Tummala, V., 1990. A comparative study of multiattribute decision making methodologies. Theory and Decision 29, 203-222.
Kok, M., 1986. The interface with decision makers and some experimental results in interactive multiple objective programming methods. European Journal of Operational Research 26, 96-107.
Kok, M., Lootsma, F.A., 1985. Pairwise-comparison methods in multiple objective programming, with applications in a long-term energy-planning model. European Journal of Operational Research 22, 44-55.
Lockett, G., Stratford, M., 1987. Ranking of research projects: Experiments with two methods. Omega 15, 395-400.
Legrady, K., Lootsma, F.A., Meisner, J., Schellemans, F., 1984. Multicriteria decision analysis to aid budget allocation. In: Grauer, M., Wierzbicki, A.P. (Eds.), Interactive Decision Analysis. Springer-Verlag, pp. 164-174.
Lootsma, F.A., 1990. The French and American school in multicriteria decision analysis. Recherche Operationnelle 24, 263-285.
MacCrimmon, K.R., 1973. An overview of multiple objective decision making. In: Cochrane, J.L., Zeleny, M. (Eds.), Multiple Criteria Decision Making. University of South Carolina Press, Columbia.