
Applied Soft Computing 10 (2010) 975–989

Analytic hierarchy prioritization process in the AHP application development:
A prioritization operator selection approach
Kevin Kam Fung Yuen

Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
Article info
Article history:
Received 1 May 2008
Received in revised form 20 August 2009
Accepted 30 August 2009
Available online 25 October 2009
Keywords:
AHP
Prioritization methods
Prioritization operators
Prioritization method measurement
Multicriteria decision making
Abstract
In the analytic hierarchy process (AHP), prioritization of the reciprocal matrix is a core issue that influences the final decision choice. Various prioritization methods have been proposed, but none of them performs better than the others in every inconsistent case. To address the prioritization operator selection problem, this paper proposes the analytic hierarchy prioritization process (AHPP), an objective hierarchy model (without using subjective pairwise comparisons) that approximates the real priority vectors by selecting the most appropriate prioritization operator, among various candidates, for a reciprocal matrix on the basis of a list of measurement criteria. Nine important prioritization operators and seven measurement criteria are illustrated in the AHPP. Two previous applications are revised to illustrate the validity and usability of the proposed model. The results show that the most appropriate prioritization operator depends on the content of the reciprocal matrix, and that the AHPP is an appropriate method for addressing the prioritization problem to make better decisions.
© 2009 Elsevier B.V. All rights reserved.
1. Introduction
The pairwise comparison method originated from psychological research [1,2]. Saaty developed the concept mathematically and applied it in the analytic hierarchy/network process [3-7]. The AHP has been widely studied and applied in multicriteria decision making (MCDM) domains [8,9]. However, there are criticisms of this method [10-15].
In the AHP, verbal judgments are given for pairwise comparisons by decision makers. The reciprocal matrices are formed by transforming the linguistic labels into numerical values. Next, the priority vectors are generated from the reciprocal matrices by a prioritization operator. Finally, these priority vectors are aggregated into a global priority vector in the synthesis stage. This induces three fundamental problems: selection of numerical scales in stage one, selection of prioritization operators (or methods) in stage two, and selection of aggregation operators in stage three.
This research is particularly interested in the selection of prioritization operators. A prioritization operator (PO) refers to an algorithm that derives a priority vector from the reciprocal matrix. Various prioritization methods in AHP models have been studied in the literature, and each method is claimed to outperform some of the existing methods. Usually, the authors of POs claim that their proposed prioritization operator is superior by finding better results (e.g. a smaller average mean absolute deviation) than a limited set of opponents (other prioritization operators) in their proposed test scenarios, on the basis of criteria such as Euclidean distance, root mean squared error, mean absolute deviation, and/or worst absolute deviation [e.g. 16-19]. Such tests are similar to statistical methods. Even if the statistically summarized performance of an operator A is better than that of an operator B, it does not follow that A is better than B when the comparison is made on a case-by-case basis. In addition, the fact that only a limited set of opponents is chosen for comparison makes such studies less objective.

Tel.: +852 60112169. E-mail address: kevinkf.yuen@gmail.com.
In fact, the best prioritization operator depends on the content of a pairwise matrix, and no prioritization method performs better than the others in every inconsistent case. The two applications in this study and in [20,21] also verify this issue. Thus, it is most appropriate to propose a framework that selects the most appropriate prioritization operator for each reciprocal matrix from among sufficient candidates, using more objective measurement criteria and methods.
The related research is [21], which proposed a Multicriteria Prioritization Synthesis (MPS). Seven prioritization operators and two evaluation criteria (minimum violations and Euclidean distance) were used in [21]. Srdjevic [21] also showed that the prioritization method depends on the reciprocal matrix and that no prioritization method is always the best. The limitations of that research are that the measurement criteria are insufficient and that the aggregation of the results of the measurement criteria is linear and subjective, as the weights of the measurement criteria are not justified. Consequently, the resulting choice of the appropriate prioritization operator may be biased. In this study, two more prioritization operators are chosen as candidates, five more measurement criteria are chosen, and the algorithms of the AHPP are hierarchical and objective. The details of the algorithms of the AHPP can be found in Section 2. The details of the comparisons can be found in Section 5.

doi:10.1016/j.asoc.2009.08.041

Fig. 1. Compound analytic hierarchy process model.
As an accurate method of deriving the priorities of the criteria is critical to enable decision makers to make the correct choice, this paper proposes the AHPP for evaluating prioritization methods and selecting the most appropriate one in the development of AHP applications. This study selects nine important prioritization operators: the eigenvector method [3], normalization of the row sum method [3], normalization of reciprocals of the column sum method [3], arithmetic mean of normalized columns method [3] (which is the same as additive normalization [21]), normalization of geometric means of rows [3] (or logarithmic least squares [e.g. 16, 19, 22-28]), direct least squares [30], weighted least squares [30], fuzzy programming [17], and enhanced goal programming [19,29]. The details can be found in Section 3.
The structure of this paper is organized as follows. The ideas of the Compound AHP, which comprises the AHPP, are presented in the next section. In Section 3, several important prioritization methods are reviewed. The measurement criteria and methods are presented in Section 4. In Section 5, two applications are selected for discussion to illustrate the validity and usability of the proposed model. Section 6 is the conclusion. The notation summary is in Appendix C.
2. Compound AHP
The Compound AHP can be summarized as five processes $(D, E, \Pi, S, M)$, where $D$ is the definition process, $E$ is the evaluation and assessment process, $\Pi$ is the analytic hierarchy prioritization process, $S$ is the synthesis process, and $M$ is the measurement process. If the AHP applies the AHPP for its prioritization method, it is named the CAHP (Fig. 1).
2.1. Definition process

The definition process $D$ consists of two parts: the AHP problem definition process and the AHPP definition process, i.e. $D = (D_1, D_2)$. In the AHP problem definition process $D_1 = (O, C, \tilde{T}, \Theta)$, a hierarchy model is defined by an objective $O$, a set of criteria $C = \{c_1, c_2, \ldots, c_i, \ldots, c_n\}$, a set of alternatives $\tilde{T} = \{t_1, t_2, \ldots, t_j, \ldots, t_m\}$, and a rating scale schema $\Theta = \{\theta_1, \theta_2, \ldots, \theta_i, \ldots, \theta_p\}$. If the data type of the rating scales is the fuzzy number, then the AHP model is extended to a fuzzy AHP problem. If the data type is the crisp number, then the AHP model is the crisp AHP model (or simply the AHP model). The crisp AHP problem is a special case of the fuzzy AHP problem, as the crisp AHP model does not need interval computing but relies on the modal value of the fuzzy number. This paper only discusses the crisp AHP model. For the typical AHP model, the nine-point rating scale is applied, $\Theta = \{1/9, \ldots, 1/2, 1, 2, \ldots, 9\}$, and the definition is shown in Table 2.
In the AHPP definition process $D_2 = (S, \tilde{S}, \tilde{P})$, $S = \{s_1, s_2, \ldots, s_q, \ldots, s_Q\}$ is a set of measurement criteria, $\tilde{P} = \{p_1, p_2, \ldots, p_k, \ldots, p_K\}$ is a set of prioritization operators, which is discussed in Section 3, and $\tilde{S}_q \in \tilde{S} = \{\tilde{S}_1, \ldots, \tilde{S}_Q\}$ is a measurement function that measures $\tilde{P}$ under a measurement criterion $s_q$, which is derived by $\tilde{S}_q(\tilde{P})$. The definitions of the measurement functions $\tilde{S}_q$ (or the set $\tilde{S}$) can be found in Section 4.
2.2. Evaluation and assessment process

In the evaluation and assessment process $E = (\varepsilon(C), \{\varepsilon(c_i, \tilde{T})\}_{i=1}^{n})$, the decision makers or raters assess a pairwise matrix of all criteria $A_C$ by the assessment function $\varepsilon(C)$, and the set $\{A_{\tilde{T},i}\}$ of the pairwise comparison matrices of all alternatives for each criterion $c_i$ by the set of assessment functions $\varepsilon(C, \tilde{T}) = \{\varepsilon(c_i, \tilde{T})\}_{i=1}^{n} = \{\varepsilon(c_1, \tilde{T}), \ldots, \varepsilon(c_n, \tilde{T})\}$. $A_C$ is of the form:

$$A_C = \varepsilon(C) = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \quad (1)$$

(rows and columns are indexed by the criteria $c_1, c_2, \ldots, c_n$)
A pairwise matrix $A_{\tilde{T},i}$ of all alternatives $\tilde{T}$ for a criterion $c_i$ is given by the pairwise assessment function, and $A_{\tilde{T},i} \in A_{\tilde{T}} = \{A_{\tilde{T},1}, \ldots, A_{\tilde{T},n}\}$. $A_{\tilde{T},i}$ is of the form:

$$A_{\tilde{T},i} = \varepsilon(c_i, \tilde{T}) = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mm} \end{bmatrix}, \quad \text{for all } i \quad (2)$$

(rows and columns are indexed by the alternatives $t_1, t_2, \ldots, t_m$)


To interpret the pairwise matrix, let a set of the real (ideal) relative weights (a priority vector) be $W = \{w_1, \ldots, w_n\}$, with comparison score $\bar{a}_{ij} = w_i / w_j$. The ideal pairwise matrix $\bar{A} = [w_i/w_j]$ can be represented by a subjective judgmental pairwise
Table 1
Random consistency index (RI).

N    1  2  3     4     5     6     7     8     9     10    11    12    13    14    15
RI   0  0  0.58  0.90  1.12  1.24  1.32  1.41  1.45  1.49  1.51  1.48  1.56  1.57  1.59
matrix $A = [a_{ij}]$ formed as follows:

$$\bar{A} = [\bar{a}_{ij}] = \begin{bmatrix} w_1/w_1 & w_1/w_2 & \cdots & w_1/w_n \\ w_2/w_1 & w_2/w_2 & \cdots & w_2/w_n \\ \vdots & \vdots & \ddots & \vdots \\ w_n/w_1 & w_n/w_2 & \cdots & w_n/w_n \end{bmatrix} \approx \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} = [a_{ij}] = A \quad (3)$$

where $a_{ij} = a_{ji}^{-1}$ and $a_{ii} = 1$, $i, j = 1, \ldots, n$. $A$ is the pairwise matrix from expert judgment; $A$ can be $A_C$ or $A_{\tilde{T},i}$. $\bar{A}$ is an ideal consistent judgment matrix generated from $W$, which in turn is generated from $A$. $W$ can be $W_C$ or $W_{\tilde{T},i}$, which is generated from $A_C$ or $A_{\tilde{T},i}$.
2.3. Measurement process

Before the calculation of the priority vectors of the pairwise matrices, it is necessary to evaluate the validity of the input data. The measurement process $M = (CR)$ determines a consistency ratio CR, which is obtained from a consistency index CI and a random index RI, an average random consistency index derived from a sample of randomly generated reciprocal matrices using the nine-point scale (Table 2). CR has the form:

$$CR(CI, RI) = \frac{CI}{RI} \quad (4)$$

where RI can be found in Table 1, and

$$CI = \frac{\lambda_{\max} - n}{n - 1} \quad (5)$$

$\lambda_{\max}$ is the principal eigenvalue of a pairwise matrix. Saaty [3] also proved that $\lambda_{\max} \geq n$. $\lambda_{\max}$ can also be derived by the Perron-Frobenius theorem [7]:

$$\lambda_{\max} = \frac{1}{n} \sum_{i=1}^{n} \frac{w'_i}{w_i}, \quad \text{where } [w'_i] = A \cdot W \text{ and } w_i \in W \quad (6)$$

As the random index associated with the eigenvector method is supported by many studies, this study applies the eigenvector method to evaluate the consistency index of a reciprocal matrix. To determine validity, if CR > 0.1, the pairwise matrix is not consistent and the comparisons should be revised. Otherwise, the pairwise matrix is accepted.
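The measurement process above can be sketched in a few lines of Python. This is a minimal illustration of Eqs. (4)-(6), not the paper's implementation; the 3x3 matrix and its priority vector are illustrative examples only (a perfectly consistent matrix, so CR comes out 0).

```python
# Sketch of the measurement process M: CR = CI / RI (Eqs. (4)-(6)).
# The example matrix A and priority vector w are illustrative only.

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # first entries of Table 1

def lambda_max(A, w):
    """Eq. (6): lambda_max = (1/n) * sum_i (A.w)_i / w_i."""
    n = len(A)
    aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    return sum(aw[i] / w[i] for i in range(n)) / n

def consistency_ratio(A, w):
    n = len(A)
    ci = (lambda_max(A, w) - n) / (n - 1)      # Eq. (5)
    return ci / RI[n] if RI[n] > 0 else 0.0    # Eq. (4)

A = [[1, 2, 6],
     [1/2, 1, 3],
     [1/6, 1/3, 1]]          # perfectly consistent example
w = [0.6, 0.3, 0.1]          # its exact priority vector
print(consistency_ratio(A, w))   # ~0.0 for a consistent matrix
```

For an inconsistent matrix, `consistency_ratio` would exceed 0, and values above 0.1 would signal that the judgments should be revised.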
2.4. Analytic hierarchy prioritization process

The AHPP returns the most appropriate priority vector by selecting the best prioritization method. An analytic hierarchy prioritization process is of the form $\Pi = (A, (D_2, MP, Agg, EP), W)$. An inconsistent pairwise matrix $A$ is prioritized to the priority vector $W$, i.e. $\Pi: A \to W$. $\Pi$ consists of four core processes $(D_2, MP, Agg, EP)$, illustrated as follows.

2.4.1. AHPP definition process ($D_2$)

$D_2 = (S, \tilde{S}, \tilde{P})$ is introduced in Section 2.1.
2.4.2. Measurement and prioritization process ($MP = (\tau, V_{\tilde{P}})$)

$V_{\tilde{P}} = (v_{s_1}, v_{s_2}, \ldots, v_{s_q}, \ldots, v_{s_Q})^T = (v_{qk})_{Q \times K}$ is a measurement priority matrix, which is written explicitly as:

$$V_{\tilde{P}} = \begin{bmatrix} v_{11} & v_{12} & \cdots & v_{1k} & \cdots & v_{1K} \\ v_{21} & v_{22} & \cdots & v_{2k} & \cdots & v_{2K} \\ \vdots & \vdots & & \vdots & & \vdots \\ v_{q1} & v_{q2} & \cdots & v_{qk} & \cdots & v_{qK} \\ \vdots & \vdots & & \vdots & & \vdots \\ v_{Q1} & v_{Q2} & \cdots & v_{Qk} & \cdots & v_{QK} \end{bmatrix} \quad (7)$$

(rows are indexed by the measurement criteria $s_1, \ldots, s_Q$; columns by the prioritization operators $p_1, \ldots, p_K$)

In the above matrix, $V_{\tilde{P}} = [\tau(\tilde{S}_q(\tilde{P}))]^T = [\tau(\tilde{S}_{\{1,\ldots,Q\}}(\tilde{P}))]^T = [\tau(\tilde{S}_1(\tilde{P})), \ldots, \tau(\tilde{S}_Q(\tilde{P}))]^T$, and $\tau(\tilde{S}_q(\tilde{P})) \in V_{\tilde{P}}$. A measurement priority vector $v_{s_q} = \tau(\tilde{S}_q(\tilde{P})) = \{v_{q1}, \ldots, v_{qk}, \ldots, v_{qK}\}$, where $\sum_{k=1}^{K} v_{qk} = 1$, $q = 1, \ldots, Q$, is produced by the transformation function $\tau(\tilde{S}_q(\tilde{P}))$, which converts a set of measurement values given by the measurement function $\tilde{S}_q(\tilde{P})$ into a measurement priority vector $v_{s_q}$ of the measurement criterion $s_q$. In Section 4, a normalization of inversion function $NI(\cdot)$ (Eq. (35)) is defined for $\tau(\cdot)$, i.e. $\tau(\tilde{S}_q(\tilde{P})) = NI(\tilde{S}_q(\tilde{P}))$.
2.4.3. Aggregation process ($Agg = \Phi(\cdot)$)

In this process, a Maximum Individual Interest Aggregation Operator $\Phi(\cdot)$ is proposed. It consists of two parts, column normalization and an ordering function, which are shown as follows.

Column normalization of $V_{\tilde{P}}$ generates the relative weights of the measurement criteria for each prioritization operator, and is of the form:

$$CN(V_{\tilde{P}}) = \left\{ v'_{qk} = \frac{v_{qk}}{\sum_{q=1}^{Q} v_{qk}} \;\middle|\; q = 1, 2, \ldots, Q, \; k = 1, 2, \ldots, K \right\} \quad (8)$$

which is written explicitly as:

$$V' = CN(V_{\tilde{P}}) = (v'_{qk})_{Q \times K} = \begin{bmatrix} v'_{11} & v'_{12} & \cdots & v'_{1k} & \cdots & v'_{1K} \\ v'_{21} & v'_{22} & \cdots & v'_{2k} & \cdots & v'_{2K} \\ \vdots & \vdots & & \vdots & & \vdots \\ v'_{Q1} & v'_{Q2} & \cdots & v'_{Qk} & \cdots & v'_{QK} \end{bmatrix} \quad (9)$$

where $v'_{qk} = v_{qk} / \sum_{q=1}^{Q} v_{qk}$, $k = 1, 2, \ldots, K$.

From the matrix $V_{\tilde{P}}$ (Eq. (7)), a row vector is $v_{s_q} = \{v_{q1}, \ldots, v_{qk}, \ldots, v_{qK}\}$, $q = 1, \ldots, Q$. The ordering function of $v_{s_q}$ returns a set of rank positions of $v_{s_q}$ in ascending order, and has the form $\mathrm{ordering}(v_{s_q}) = \{I_q(j) \mid j = 1, \ldots, K\}$, $q = 1, \ldots, Q$, where $I_q(j) = \sum_{k=1}^{K} r_j(v_{qk})$, and

$$r_j(v_{qk}) = \begin{cases} 1, & v_{qj} > v_{qk} \text{ and } j \neq k \\ 0, & \text{otherwise} \end{cases} \quad (10)$$
Table 2
The fundamental scale of absolute numbers.

Intensity of importance | Definition | Explanation
1 | Equal importance | Two activities contribute equally to the objective
2 | Weak or slight |
3 | Moderate importance | Experience and judgment slightly favor one activity over another
4 | Moderate plus |
5 | Strong importance | Experience and judgment strongly favor one activity over another
6 | Strong plus |
7 | Very strong or demonstrated importance | An activity is favored very strongly over another; its dominance is demonstrated in practice
8 | Very, very strong |
9 | Extreme importance | The evidence favoring one activity over another is of the highest possible order of affirmation
Reciprocals of above | If activity i has one of the above nonzero numbers assigned to it when compared with activity j, then j has the reciprocal value when compared with i | A reasonable assumption
Rationals | Ratios arising from the scale | If consistency were to be forced by obtaining n numerical values to span the matrix
If the matrix $V_{\tilde{P}}$ is taken as the parameter, the matrix $I = \mathrm{ordering}(V_{\tilde{P}})$ is written explicitly as:

$$I = \mathrm{ordering}(V_{\tilde{P}}) = (i'_{qk})_{Q \times K} = \begin{bmatrix} i'_{11} & i'_{12} & \cdots & i'_{1k} & \cdots & i'_{1K} \\ i'_{21} & i'_{22} & \cdots & i'_{2k} & \cdots & i'_{2K} \\ \vdots & \vdots & & \vdots & & \vdots \\ i'_{Q1} & i'_{Q2} & \cdots & i'_{Qk} & \cdots & i'_{QK} \end{bmatrix}, \quad i'_{qk} \in \{1, \ldots, K\} \quad (11)$$
Finally, the aggregation $agg(V', I)$ of $V'$ and $I$ takes the column mean of the element-wise (non-scalar) product of $V'$ and $I$, which is of the form:

$$agg(V', I) = \left\{ u_k \;\middle|\; u_k = \frac{1}{Q} \sum_{q=1}^{Q} i'_{qk} v'_{qk}, \; k = 1, \ldots, K \right\} = \{u_1, \ldots, u_k, \ldots, u_K\} = \Phi \quad (12)$$

Thus, the Mean Individual Interest Aggregation Operator is of the form:

$$\Phi(V_{\tilde{P}}) = agg(V', I) = agg(CN(V_{\tilde{P}}), \mathrm{ordering}(V_{\tilde{P}})) = \{u_1, \ldots, u_k, \ldots, u_K\} = \Phi \quad (13)$$

$\Phi$ is the set of preference values of the prioritization operators given by the Mean Individual Interest Aggregation Operator taking $V_{\tilde{P}}$ as the parameter. The advantage of $\Phi$ is that it helps to address the problem of setting a weight for each measurement criterion: no universal weights are given to the measurement criteria, as the relative weights are based on $CN(V_{\tilde{P}})$. A higher relative weight is likely to obtain a higher rank score in $\mathrm{ordering}(v_{s_q})$. The aggregation of the weights and their corresponding rank scores is based on each prioritization operator's optimum interest while considering the other operators. Such an allocation by $\Phi$ is more reasonable than the allocation of a simple mean, max, or min operator, as $\Phi$ is tailor-made for the prioritization operators.
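The aggregation process can be sketched as follows. This is a minimal Python illustration of Eqs. (8)-(13) under stated assumptions: the 2x3 matrix `V` (Q = 2 criteria, K = 3 operators) is an invented example, and the ordering function follows Eq. (10) literally (counting strictly smaller entries, which yields rank scores 0 to K-1).

```python
# Sketch of the aggregation process Agg (Eqs. (8)-(13)): column
# normalization CN, the ordering function, and the aggregated
# preference values u_k. The matrix V below is illustrative only.

def cn(V):
    """Eq. (8): normalize each entry by its column sum."""
    Q, K = len(V), len(V[0])
    col = [sum(V[q][k] for q in range(Q)) for k in range(K)]
    return [[V[q][k] / col[k] for k in range(K)] for q in range(Q)]

def ordering(V):
    """Eq. (10): I_q(j) counts the entries of row q smaller than v_qj."""
    return [[sum(1 for k, v in enumerate(row) if row[j] > v and j != k)
             for j in range(len(row))] for row in V]

def phi(V):
    """Eq. (12): column mean of the element-wise product of CN(V) and I."""
    Vn, I = cn(V), ordering(V)
    Q, K = len(V), len(V[0])
    return [sum(I[q][k] * Vn[q][k] for q in range(Q)) / Q for k in range(K)]

V = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3]]
u = phi(V)
best = u.index(max(u))   # index of the selected prioritization operator
```

With this example, `u` is approximately [0.714, 0.8125, 0.3], so the second operator would be selected in the exploitation process.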
2.4.4. Exploitation process ($EP = (p^*, f(\tilde{P}, \Phi), W)$)

The prioritization selection function $f(\tilde{P}, \Phi)$ returns the best prioritization method $p^*$ in $\tilde{P}$ with respect to $\Phi = \Phi(V_{\tilde{P}})$. The best prioritization method is determined by the highest score $u^* = \mathrm{Max}(\Phi)$, and its position is returned by the argument of the maximum function, $\arg\max$, shown as follows:

$$p^* = f(\tilde{P}, \Phi) = p_{\beta}, \quad \text{where } \beta = \arg\max_{k \in \{1, 2, \ldots, K\}} (\{u_1, \ldots, u_k, \ldots, u_K\}), \; p_{\beta} \in \tilde{P} \quad (14)$$

An ideal priority vector $W$ of a reciprocal matrix $A$ derived by the best prioritization method $p^*$ in the AHPP model is of the form:

$$W = \Pi(A) = p^*(A) \quad (15)$$

$W$ can be a priority vector $W_C$ of the criteria, or a priority vector $W_{\tilde{T},i}$ of all alternatives $\tilde{T}$ under a criterion $c_i$, derived from the reciprocal pairwise matrices ($A_C$ or $A_{\tilde{T},i}$) by the AHPP function $\Pi: A_C \to W_C$ or $\Pi: A_{\tilde{T},i} \to W_{\tilde{T},i}$. In the CAHP model, the AHPP returns $W_C^T$ and $W_{\tilde{T}}$, which are as follows:

$$W_C^T = [\Pi(A_C)]^T = p^*(A_C) = (\omega_1, \omega_2, \ldots, \omega_i, \ldots, \omega_n)^T, \quad \sum_{i=1}^{n} \omega_i = 1 \quad (16)$$
The superscript $T$ denotes transposition.

$$W_{\tilde{T}} = (W_{\tilde{T},i})^T = (W_{\tilde{T},1}, W_{\tilde{T},2}, \ldots, W_{\tilde{T},i}, \ldots, W_{\tilde{T},n})^T = [w_{ij}]_{n \times m},$$
$$\text{where } W_{\tilde{T},i} = \Pi(A_{\tilde{T},i}) = \{w_{i1}, \ldots, w_{im}\} \text{ and } \sum_{j=1}^{m} w_{ij} = 1, \; i = 1, \ldots, n \quad (17)$$
$W_{\tilde{T}}$ is written explicitly as:

$$W_{\tilde{T}} = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1j} & \cdots & w_{1m} \\ w_{21} & w_{22} & \cdots & w_{2j} & \cdots & w_{2m} \\ \vdots & \vdots & & \vdots & & \vdots \\ w_{i1} & w_{i2} & \cdots & w_{ij} & \cdots & w_{im} \\ \vdots & \vdots & & \vdots & & \vdots \\ w_{n1} & w_{n2} & \cdots & w_{nj} & \cdots & w_{nm} \end{bmatrix} \quad (18)$$

(rows are indexed by the criteria $c_1, \ldots, c_n$; columns by the alternatives $t_1, \ldots, t_m$)

$W_C^T$ and $W_{\tilde{T}}$ are used in the synthesis process discussed in Section 2.5.
2.5. Synthesis process

The synthesis process is a 3-tuple, i.e. $S = ((W_C, W_{\tilde{T}}), t^*, f)$. The synthesis process selects the best alternative $t^*$ from a matrix $W_{\tilde{T}} = \{W_{\tilde{T},i}\}$ of the priority vectors of all alternatives for all criteria, together with a criteria priority vector $W_C$, by a selection function $f(W_C, W_{\tilde{T}})$, which is illustrated as follows.

The results of $W_C$ and $W_{\tilde{T}}$ are aggregated to obtain the global priority of each alternative, $\tilde{W}$, by the synthesis aggregation function $Sagg$, which is defined as the scalar dot product $sdp$ of $W_C^T$ and $W_{\tilde{T}}$ shown as follows:

$$\tilde{W} = Sagg(W_C, W_{\tilde{T}}) = sdp(W_C, W_{\tilde{T}}) = W_C^T \cdot W_{\tilde{T}} = \{\tilde{w}_1, \tilde{w}_2, \ldots, \tilde{w}_j, \ldots, \tilde{w}_m\} \quad (19)$$

Usually, the best alternative is determined by the highest score $\tilde{w}^* = \mathrm{Max}(\tilde{W})$, and its position is returned by the argument of the maximum function, $\arg\max$, as follows:

$$t^* = \varphi(\tilde{W}) = t_{\beta}, \quad \text{where } \beta = \arg\max_{j \in \{1, 2, \ldots, m\}} (\{\tilde{w}_1, \tilde{w}_2, \ldots, \tilde{w}_j, \ldots, \tilde{w}_m\}) \quad (20)$$

(In rare cases, if the lowest score is applied, the argument of the minimum function, $\arg\min$, is used.)

Finally, the selection function is of the form:

$$t^* = f(W_C, W_{\tilde{T}}) = \varphi(\tilde{W}) = \varphi(sdp(W_C, W_{\tilde{T}})) \quad (21)$$
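The synthesis step reduces to a small matrix-vector product. The following is a minimal Python sketch of Eqs. (19)-(21); the criteria weights and alternative priorities are invented example numbers, not data from the paper.

```python
# Sketch of the synthesis process (Eqs. (19)-(21)): the global priority
# vector is the dot product of the criteria weights W_C with the matrix
# W_T of alternative priorities (one row per criterion).

def sdp(wc, wt):
    """Eq. (19): global priority of alternative j = sum_i wc_i * wt[i][j]."""
    m = len(wt[0])
    return [sum(wc[i] * wt[i][j] for i in range(len(wc))) for j in range(m)]

wc = [0.5, 0.5]            # criteria priority vector W_C (illustrative)
wt = [[0.7, 0.3],          # alternative priorities under criterion c_1
      [0.2, 0.8]]          # alternative priorities under criterion c_2
gw = sdp(wc, wt)           # global priorities, here [0.45, 0.55]
best = gw.index(max(gw))   # Eq. (20): argmax selects the best alternative
```

In this example the second alternative wins (0.55 against 0.45), so the selection function of Eq. (21) would return it as $t^*$.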
3. Prioritization operators

3.1. Eigenvector (EV)

The eigenvector operator, with its intuitive justification, was proposed in [3]. EV derives the principal eigenvector of $A$ as the priority vector $w$ by solving the following eigensystem:

$$A w = \lambda_{\max} w, \quad \text{and } \sum_{i=1}^{n} w_i = 1 \quad (22)$$

$A$ is consistent if and only if $\lambda_{\max} = n$, and is not consistent if and only if $\lambda_{\max} > n$, where $\lambda_{\max} \geq n$.
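The principal eigenvector of Eq. (22) can be approximated by power iteration. This is a generic numerical sketch (not the paper's implementation); the example matrix is illustrative and consistent, so the iteration recovers the exact weights.

```python
# Power-iteration sketch of the EV operator (Eq. (22)): repeatedly apply
# A to a start vector and renormalize; the result converges to the
# principal eigenvector. The matrix below is illustrative only.

def ev_priority(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]    # keep sum(w) = 1, as Eq. (22) requires
    return w

A = [[1, 2, 6],
     [1/2, 1, 3],
     [1/6, 1/3, 1]]   # consistent, so EV gives the exact weights
w = ev_priority(A)
print(w)              # ~[0.6, 0.3, 0.1]
```

For an inconsistent matrix, the same loop still converges to the principal eigenvector, whose associated eigenvalue $\lambda_{\max}$ then exceeds $n$.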
3.2. Normalization operators

The normalization operators were introduced in [3]. The following methods are named according to their calculation steps.

3.2.1. Normalization of the row sum (NRS)

NRS sums the elements in each row and normalizes by dividing each sum by the total of all the sums, so that the results add up to unity. NRS has the form:

$$a'_i = \sum_{j=1}^{n} a_{ij}, \quad w_i = \frac{a'_i}{\sum_{i=1}^{n} a'_i}, \quad i = 1, 2, \ldots, n \quad (23)$$
3.2.2. Normalization of reciprocals of column sum (NRCS)

NRCS takes the sum of the elements in each column, forms the reciprocals of these sums, and then normalizes so that these numbers add to unity, i.e. divides each reciprocal by the sum of the reciprocals. It is of the form:

$$a'_j = \frac{1}{\sum_{i=1}^{n} a_{ij}}, \quad w_j = \frac{a'_j}{\sum_{j=1}^{n} a'_j}, \quad j = 1, 2, \ldots, n \quad (24)$$
3.2.3. Arithmetic mean of normalized columns (AMNC)

AMNC is also named the additive normalization method in [20]; the name used here is relatively clear about its calculation process. Each element in $A$ is divided by the sum of its column in $A$, and then the mean of each row is taken as the priority $w_i$. It has the form:

$$a'_{ij} = \frac{a_{ij}}{\sum_{i=1}^{n} a_{ij}}, \; i, j = 1, 2, \ldots, n, \quad \text{and} \quad w_i = \frac{1}{n} \sum_{j=1}^{n} a'_{ij}, \; i = 1, 2, \ldots, n \quad (25)$$

Many AHP applications apply this method due to the simplicity of its calculation process.
3.2.4. Normalization of geometric means of rows (NGMR)

NGMR multiplies the $n$ elements in each row, takes the $n$th root, and then normalizes so that these numbers add to unity. It is of the form:

$$w'_i = \prod_{j=1}^{n} a_{ij}^{1/n}, \quad w_i = \frac{w'_i}{\sum_{i=1}^{n} w'_i}, \quad i = 1, 2, \ldots, n \quad (26)$$

Although it is more complex than the other three normalization methods, it is recommended by some authors [e.g. 16, 18, 31] as this method produces the same result as LLS.
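The four normalization operators can be sketched directly from Eqs. (23)-(26). This is a minimal pure-Python illustration; the example matrix is an invented, perfectly consistent case, for which all four operators recover the same priority vector.

```python
# Sketch of the four normalization operators (Eqs. (23)-(26)).
# On a perfectly consistent matrix all four yield the same priorities.
from math import prod

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def nrs(A):     # Eq. (23): normalized row sums
    return normalize([sum(row) for row in A])

def nrcs(A):    # Eq. (24): normalized reciprocals of column sums
    n = len(A)
    return normalize([1 / sum(A[i][j] for i in range(n)) for j in range(n)])

def amnc(A):    # Eq. (25): arithmetic mean of normalized columns
    n = len(A)
    col = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return [sum(A[i][j] / col[j] for j in range(n)) / n for i in range(n)]

def ngmr(A):    # Eq. (26): normalized geometric means of rows
    n = len(A)
    return normalize([prod(row) ** (1 / n) for row in A])

A = [[1, 2, 6], [1/2, 1, 3], [1/6, 1/3, 1]]   # consistent example
# nrs(A), nrcs(A), amnc(A) and ngmr(A) all give ~[0.6, 0.3, 0.1]
```

On an inconsistent matrix, the four functions generally disagree, which is precisely the situation the AHPP's measurement criteria are meant to arbitrate.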
3.3. Direct least squares / weighted least squares (DLS/WLS)

This method minimizes the sum of the squared differences between the judgments and their derived values. The direct least squares method, proposed in [30], is of the form:

$$\mathrm{Min} \sum_{i=1}^{n} \sum_{j=1}^{n} \left( a_{ij} - \frac{w_i}{w_j} \right)^2 \quad \text{subject to} \quad \sum_{i=1}^{n} w_i = 1, \; w_i > 0, \; i = 1, 2, \ldots, n \quad (27)$$

The above non-linear optimization problem has no tractable special form or closed form and is very difficult to solve [30]. However, there is no clear evidence (e.g. no formal mathematical proof can be found in the literature) that it has multiple solutions, although [26] claimed that it may have multiple solutions, and [17,20] indicated that it generally has multiple solutions based on incorrect citations of [26,30]. At least, this research produces the same LLS results for the two examples (Section 5) as [21] does.

For efficient computation with a closed form, [30] modified the objective function and proposed the weighted least squares (WLS) method in the form:

$$\mathrm{Min} \sum_{i=1}^{n} \sum_{j=1}^{n} (w_i - a_{ij} w_j)^2 \quad \text{subject to} \quad \sum_{i=1}^{n} w_i = 1, \; w_i > 0, \; i = 1, 2, \ldots, n \quad (28)$$

Although this method provides a closed form for the answer, which is shown in [20], its reliability is likely lower than that of DLS (refer to Example 2).
3.4. Logarithmic least squares (LLS)

LLS has a long history and has been intensively studied by many authors [e.g. 16, 19, 22-28]. LLS is of the form:

$$\mathrm{Min} \sum_{i=1}^{n} \sum_{j>i}^{n} \left( \ln a_{ij} - (\ln w'_i - \ln w'_j) \right)^2 \quad \text{subject to} \quad \sum_{i=1}^{n} w'_i = 1, \; w'_i > 0, \; i = 1, 2, \ldots, n \quad (29)$$

The final result $(w_i)$ is derived from normalization of $(w'_i)$. Crawford and Williams [16] indicated that the solution is unique and is equivalent to NGMR, which is preferable due to its simplicity.
3.5. Fuzzy programming (FP)

FP was proposed by Mikhailov [17] and has the form:

$$\max \lambda$$
$$\text{subject to} \quad \lambda d^+_j + R_j W^T \leq d^+_j, \quad \lambda d^-_j - R_j W^T \leq d^-_j, \quad j = 1, 2, \ldots, m, \; 1 \geq \lambda \geq 0,$$
$$\sum_{i=1}^{n} w_i = 1, \; w_i > 0, \; i = 1, 2, \ldots, n \quad (30)$$

$R_j \in R^{m \times n} = \{a_{ij}\}$ is the row vector. The values of the left and right tolerance parameters $d^-_j$ and $d^+_j$ represent the admissible interval of approximate satisfaction of the crisp equality $R_j W^T = 0$. The measure of intersection $\lambda$ is a natural consistency index of the FP. Its value, however, depends on the tolerance parameters. For the practical implementation of the FP, it is reasonable to set all these parameters equal. A limitation of this method is that the parameters $d^-_j$ and $d^+_j$ are undetermined in [17], which leads to infinitely many candidate values for the parameters. Mikhailov [17] set $d^-_j = d^+_j = 1$ in his example.
3.6. Enhanced goal programming (EGP)

Bryson [29] proposed the goal programming operator (GP), which uses relative deviations $\delta^+_{ij}/\delta^-_{ij}$ to measure the relationship between $w_i/w_j$ and $a_{ij}$. The relationship has the form:

$$\left( \frac{\delta^+_{ij}}{\delta^-_{ij}} \right) \left( \frac{w_i}{w_j} \right) = a_{ij}, \quad \text{where } (\delta^+_{ij} \geq 1 \;\&\; \delta^-_{ij} = 1) \text{ or } (\delta^-_{ij} \geq 1 \;\&\; \delta^+_{ij} = 1) \quad (31)$$

In Bryson's method, the aim is to minimize $\prod_{j>i} (\delta^+_{ij} \delta^-_{ij})$. To solve Eq. (31), the non-linear programming problem is translated into the linear goal programming problem of the form:

$$\mathrm{Min} \; \ln \gamma = \sum_{i=1}^{n} \sum_{j>i}^{n} (\ln \delta^+_{ij} + \ln \delta^-_{ij})$$
$$\text{subject to} \quad \ln w_i - \ln w_j + \ln \delta^+_{ij} - \ln \delta^-_{ij} = \ln a_{ij}, \; (i, j) \in IJ \quad (32)$$

where $IJ = \{(i, j) : 1 \leq i < j \leq n\}$; $\ln \delta^+_{ij}$ and $\ln \delta^-_{ij}$ are non-negative. Ideally, the objective value should be 0 when $\ln \delta^+_{ij} = \ln \delta^-_{ij} = 0$, i.e. $\delta^+_{ij} = \delta^-_{ij} = 1$. The solver iteratively drives the objective value as low as possible. In the simulation of this research, GP does not provide unique priority results, even when the same objective value is achieved by the functions FindMinimum[] and NMinimize[] in Mathematica. Also, Lingo and Mathematica give different results when the objective values are the same. Lin [19] illustrated an example that adds $w_i > 0$ as a constraint; the priorities differ but the objective values are the same. From these facts it can be concluded that GP leads to various priority vectors for the same objective value. (The mathematical proof is left to readers, as it is beyond the purpose of this research.)

To address this issue, one approach is to modify GP into LLS. If the objective function is modified as $\sum_{i=1}^{n} \sum_{j>i}^{n} (\ln \delta^+_{ij} + \ln \delta^-_{ij})^2$, it becomes another form of the LLS model, which is equivalent to Eq. (29). Thus, there are two forms of LLS which provide the same result as NGMR:

$$\mathrm{Min} \; \ln \gamma = \sum_{i=1}^{n} \sum_{j>i}^{n} (\ln \delta^+_{ij} + \ln \delta^-_{ij})^2$$
$$\text{subject to} \quad \ln w_i - \ln w_j + \ln \delta^+_{ij} - \ln \delta^-_{ij} = \ln a_{ij}, \; (i, j) \in IJ \quad (33)$$

where $IJ = \{(i, j) : 1 \leq i < j \leq n\}$; $\ln \delta^+_{ij} \geq 0$ and $\ln \delta^-_{ij} \geq 0$. The objective function is also equivalent to $\sum_{i=1}^{n} \sum_{j>i}^{n} ((\ln \delta^+_{ij})^2 + (\ln \delta^-_{ij})^2)$. As NGMR is relatively easy to calculate, it is preferable. However, there is a tradeoff: GP, which gives various solutions, performs better than LLS when outliers exist but suffers from the problem of alternative optimal solutions, while LLS gives a unique solution but is sensitive to outliers [18,19].

Another approach is to use the enhanced goal programming model [19], which is a combination of GP and LLS, and has the form:

$$\mathrm{Min} \; \ln \gamma + \varepsilon \eta$$
$$\text{subject to} \quad \ln w_i - \ln w_j + \ln \delta^+_{ij} - \ln \delta^-_{ij} = \ln a_{ij}, \; (i, j) \in IJ,$$
$$\ln \gamma = \sum_{i=1}^{n} \sum_{j>i}^{n} (\ln \delta^+_{ij} + \ln \delta^-_{ij}),$$
$$\eta = \sum_{i=1}^{n} \sum_{j>i}^{n} ((\ln \delta^+_{ij})^2 + (\ln \delta^-_{ij})^2) \quad (34)$$

where $IJ = \{(i, j) : 1 \leq i < j \leq n\}$, $\ln \delta^+_{ij} \geq 0$, $\ln \delta^-_{ij} \geq 0$, and $\varepsilon$ is a sufficiently small positive number. The term "sufficiently small" means that any increase of $\varepsilon$ will cause $\ln \gamma$ to lose its optimality. In his paper, $\varepsilon$ is set to $10^{-10}$ as an example, which is approximately 0. When $\ln \gamma$ reaches its optimum, a tradeoff rate exists between $\ln \gamma$ and $\eta$: the decrease of $\eta$ leads to the increase of $\ln \gamma$, and the effect of $\varepsilon$ is to suppress the increase of $\ln \gamma$. When $\varepsilon$ is sufficiently small, any sacrifice of $\ln \gamma$ for reducing $\eta$ would be fruitless. Thus, the model forces the solution to minimize $\ln \gamma$ before $\eta$ is minimized. However, this optimization method requires more computational effort than LLS and GP.
4. Measurement priority vectors

If the matrix is consistent, any of the above prioritization methods can produce the real priority vectors. This usually happens for matrices of size 3x3 and 4x4. A matrix of size 5x5 or above is likely to be inconsistent. For an inconsistent matrix, different prioritization operators produce different results, which may lead to different priority orders. Criteria to measure the fitness of the prioritization operators are therefore needed. Fitness measures how well the priority vector of a prioritization operator can represent a reciprocal matrix. In Section 2, a measurement priority vector of a measurement criterion $s_q$ of the prioritization operators $\tilde{P}$ is of the form $v_{s_q} = \tau(\tilde{S}_q(\tilde{P}))$. $\tilde{S}_q(\tilde{P})$ is a measurement method (or function) of $\tilde{P}$ for $s_q$. This study defines seven measurement methods on the basis of variances: less variance means more fitness of a prioritization operator. The normalization of inversion $NI(\cdot)$ is defined for the transformation function $\tau(\cdot)$, which converts a set of measurement values given by $\tilde{S}_q(\tilde{P})$ into a vector of measurement priorities $v_{s_q}$. Let a set of variances (or measurement values) be $\sigma = \{\sigma_1, \ldots, \sigma_k, \ldots, \sigma_K\}$, where $K$ is the cardinality of the set of prioritization operators; the inversion of $\sigma$ is $\sigma^{-1} = \{\sigma_1^{-1}, \ldots, \sigma_K^{-1}\}$. The normalization of inversion $NI(\sigma)$ is of the form:

$$NI(\sigma) = \left\{ \frac{\sigma_1^{-1}}{\sum_{k=1}^{K} \sigma_k^{-1}}, \ldots, \frac{\sigma_k^{-1}}{\sum_{k=1}^{K} \sigma_k^{-1}}, \ldots, \frac{\sigma_K^{-1}}{\sum_{k=1}^{K} \sigma_k^{-1}} \right\} \quad (35)$$

$\sigma$ can be determined by $\tilde{S}_q(\tilde{P})$, i.e. $\sigma = \tilde{S}_q(\tilde{P})$. The seven measurement methods $\tilde{S}_{1,\ldots,7}(\tilde{P})$ propagating to measurement priority vectors are introduced as follows.
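The normalization-of-inversion function of Eq. (35) can be sketched directly; the input values in the usage line are an invented example.

```python
# Sketch of the normalization-of-inversion function NI (Eq. (35)):
# invert each measurement value (variance), then normalize so the
# inverses sum to one. A smaller variance yields a larger priority.

def ni(sigma):
    inv = [1.0 / s for s in sigma]   # all values must be nonzero
    total = sum(inv)
    return [x / total for x in inv]

v = ni([2.0, 4.0, 4.0])
print(v)   # [0.5, 0.25, 0.25]
```

Note that NI is undefined when any measurement value is zero, which is why the MMV criterion below adds 1 to its summation before inversion.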
4.1. Normalization of inversion of mean absolute variance (NI(MAV))

The mean absolute variance is of the form:

$$MAV(A, W) = \frac{1}{n \cdot n} \sum_{i=1}^{n} \sum_{j=1}^{n} \left| a_{ij} - \frac{w_i}{w_j} \right| \quad (36)$$

where $A$ is a pairwise matrix $(a_{ij})$, $W$ is a priority vector of a prioritization operator, and $w_i, w_j \in W \in \{W_1, \ldots, W_K\} = \{W\}$. Then the vector of measurement values over the set of prioritization candidates is of the form:

$$MAV(A, \{W\}) = \{MAV_1, \ldots, MAV_K\}$$

Less variance reflects better fitness, so a higher inversion of variance means higher fitness of a prioritization operator. The inversion of MAV is of the form:

$$MAV^{-1}(A, \{W\}) = \{MAV_1^{-1}, \ldots, MAV_K^{-1}\}$$

Finally, the above result is normalized, so NI(MAV) is of the form:

$$NI(MAV(A, \{W\})) = \left\{ \frac{MAV_1^{-1}}{\sum MAV^{-1}(A, \{W\})}, \ldots, \frac{MAV_K^{-1}}{\sum MAV^{-1}(A, \{W\})} \right\} \quad (37)$$
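The MAV criterion of Eq. (36) is straightforward to sketch; the matrix and vector below are an invented, consistent example, for which MAV is (numerically) zero.

```python
# Sketch of the mean absolute variance (Eq. (36)): the average of
# |a_ij - w_i/w_j| over all n*n entries. For a perfectly consistent
# matrix and its exact priority vector, MAV vanishes.

def mav(A, w):
    n = len(A)
    return sum(abs(A[i][j] - w[i] / w[j])
               for i in range(n) for j in range(n)) / (n * n)

A = [[1, 2, 6], [1/2, 1, 3], [1/6, 1/3, 1]]
m = mav(A, [0.6, 0.3, 0.1])
print(m)   # ~0.0 for the consistent case
```

In the AHPP, each candidate operator's MAV would be computed this way and the resulting list passed through NI (Eq. (37)) to obtain a measurement priority vector; since NI inverts its inputs, MAV values of exactly zero (the fully consistent case) need no arbitration anyway.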
4.2. Normalization of inversion of root mean square variance (NI(RMSV))

The root mean square variance is of the form:

$$RMSV(A, W) = \sqrt{ \frac{1}{n \cdot n} \sum_{i=1}^{n} \sum_{j=1}^{n} \left( a_{ij} - \frac{w_i}{w_j} \right)^2 } \quad (38)$$

The Euclidean distance is of the form:

$$ED(A, W) = \sqrt{ \sum_{i=1}^{n} \sum_{j=1}^{n} \left( a_{ij} - \frac{w_i}{w_j} \right)^2 } \quad (39)$$

Similar to the calculation approach of NI(MAV), NI(RMSV) (or NI(ED)) is of the form:

$$NI(RMSV(A, \{W\})) = \left\{ \frac{RMSV_1^{-1}}{\sum RMSV^{-1}(A, \{W\})}, \ldots, \frac{RMSV_K^{-1}}{\sum RMSV^{-1}(A, \{W\})} \right\} \quad (40)$$

Although the form of RMSV is different from that of ED, the final results are the same after NI is taken.
4.3. Normalization of inversion of worst absolute variance (NI(WAV))

The worst absolute variance is of the form:

$$WAV(A, W) = \max_{i,j} \left\{ \left| a_{ij} - \frac{w_i}{w_j} \right| \right\} \quad (41)$$

And NI(WAV) is of the form:

$$NI(WAV(A, \{W\})) = \left\{ \frac{WAV_1^{-1}}{\sum WAV^{-1}(A, \{W\})}, \ldots, \frac{WAV_K^{-1}}{\sum WAV^{-1}(A, \{W\})} \right\} \quad (42)$$
4.4. Normalization of inversion of consistency variance (NI(CV))

The consistency index has two purposes in this paper. One is testing the validity of user inputs for the pairwise matrices; this belongs to the topic of the consistency ratio. The second is that CI serves as a measurement criterion to measure the relative fitness among candidates. To distinguish these two purposes, consistency variance is the name used for the second purpose.

The CI (Eq. (5)) proposed by Saaty for the EV method can be expressed as the average of the differences between the errors and unity, and is of the form [31]:

$$CV = CI = \frac{1}{n(n-1)} \sum_{i \neq j}^{n} \left( a_{ij} \frac{w_j}{w_i} - 1 \right) \quad (43)$$

And NI(CV(A, {W})) is of the form:

$$NI(CV(A, \{W\})) = \left\{ \frac{CV_1^{-1}}{\sum CV^{-1}(A, \{W\})}, \ldots, \frac{CV_K^{-1}}{\sum CV^{-1}(A, \{W\})} \right\} \quad (44)$$
4.5. Normalization of inversion of geometric consistency variance (NI(GCV))

Analogous to Eq. (43), [31] proposed the geometric consistency index, which can be seen as an average of the squared difference between the logarithm of the errors and the logarithm of unity, i.e.

GCV = GCI = \frac{1}{n(n-1)} \sum_{i<j} \left[ \mathrm{Log}\left( a_{ij} \frac{w_j}{w_i} \right) - \mathrm{Log}(1) \right]^2 = \frac{1}{n(n-1)} \sum_{i<j} \mathrm{Log}^2\left( a_{ij} \frac{w_j}{w_i} \right) \qquad (45)
And NI(GCV(A, \{W\})) is of the form:

NI(GCV(A, \{W\})) = \left\{ \frac{GCV_1^{-1}}{\sum GCV^{-1}(A,\{W\})}, \ldots, \frac{GCV_K^{-1}}{\sum GCV^{-1}(A,\{W\})} \right\} \qquad (46)
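Eq. (45) is straightforward to evaluate; a small sketch (natural logarithms are used here, which only rescales GCV by a constant factor and therefore does not change the NI(·) ranking; the function name is ours):

```python
import math

def gcv(A, w):
    """Geometric consistency variance (Eq. (45)): mean squared log-error
    over the upper triangle of the reciprocal matrix."""
    n = len(w)
    s = sum(math.log(A[i][j] * w[j] / w[i]) ** 2
            for i in range(n) for j in range(n) if i < j)
    return s / (n * (n - 1))

# A perfectly consistent matrix built from w gives GCV ~ 0
w = [0.5, 0.3, 0.2]
A = [[w[i] / w[j] for j in range(3)] for i in range(3)]
print(gcv(A, w))  # ~0 for a consistent matrix
```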
4.6. Normalization of inversion of mean minimum violation (NI(MMV))

[26] proposed the minimum violation criterion, which was also used by [20,21]. The form is shown as follows:

MV(A, W) = \sum_{i,j} I_{ij}, \quad I_{ij} = \begin{cases} 1, & w_i > w_j \ \&\ a_{ji} > 1 \\ 0.5, & w_i = w_j \ \&\ a_{ji} \neq 1 \\ 0.5, & w_i \neq w_j \ \&\ a_{ji} = 1 \\ 0, & \text{otherwise} \end{cases}
There is a critical error in the above form: I_{ij} must also be equal to 1 under the condition (w_i < w_j \& a_{ji} < 1). Also, as the value of MV depends on the size (n^2) of the matrix (usually a larger matrix leads to a higher value of MV), the mean value of MV (MMV) is a more appropriate way to measure a prioritization method on the reciprocal matrix. Thus, the revised MMV has the following form:
MMV(A, W) = \frac{1}{n^2} \left( 1 + \sum_{i,j} I_{ij} \right), \quad I_{ij} = \begin{cases} 1, & (w_i > w_j \ \&\ a_{ji} > 1) \text{ or } (w_i < w_j \ \&\ a_{ji} < 1) \\ 0.5, & (w_i = w_j \ \&\ a_{ji} \neq 1) \text{ or } (w_i \neq w_j \ \&\ a_{ji} = 1) \\ 0, & \text{otherwise} \end{cases} \qquad (47)
An ideal prioritization method means MV = 0. The zero value cannot be used by NI(·); to use NI(·) properly, 1 is added to the summation of I_{ij} taken in Eq. (47). NI(MMV(A, \{W\})) is of the form:

NI(MMV(A, \{W\})) = \left\{ \frac{(MMV_1)^{-1}}{\sum_{i=1}^{K} (MMV_i)^{-1}}, \ldots, \frac{(MMV_K)^{-1}}{\sum_{i=1}^{K} (MMV_i)^{-1}} \right\} \qquad (48)
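A sketch of the revised MMV, with the repaired rank-reversal condition (w_i < w_j and a_ji < 1) included; the function name and the example matrices are ours:

```python
def mmv(A, w):
    """Mean minimum violation (Eq. (47)), with 1 added so that NI()
    never divides by zero for an ideal (violation-free) operator."""
    n = len(w)
    total = 0.0
    for i in range(n):
        for j in range(n):
            aji = A[j][i]
            if (w[i] > w[j] and aji > 1) or (w[i] < w[j] and aji < 1):
                total += 1.0   # rank reversal between w and the judgments
            elif (w[i] == w[j] and aji != 1) or (w[i] != w[j] and aji == 1):
                total += 0.5   # tie on one side only
    return (1.0 + total) / (n * n)

# a consistent 2x2 matrix with its exact priority vector: no violations
A = [[1, 3], [1/3, 1]]
print(mmv(A, [0.75, 0.25]))  # (1 + 0) / 4 = 0.25
# reversing the weights triggers the repaired condition in both cells
print(mmv(A, [0.25, 0.75]))  # (1 + 2) / 4 = 0.75
```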
4.7. Normalization of inversion of weighted distance variance (NI(WDV))

This study proposes the weighted distance variance method, which is expressed as

WDV(A, W) = \frac{1}{n} \sum_{i,j} Y_{ij}

where

Y_{ij} = \begin{cases} \lambda_1 \left| a_{ij} - \frac{w_i}{w_j} \right|, & (w_i > w_j \ \&\ a_{ij} > 1) \text{ or } (w_i < w_j \ \&\ a_{ij} < 1) \\ \lambda_2 \left| a_{ij} - \frac{w_i}{w_j} \right|, & (w_i = w_j \ \&\ a_{ij} \neq 1) \text{ or } (w_i \neq w_j \ \&\ a_{ij} = 1) \\ \lambda_3 \left| a_{ij} - \frac{w_i}{w_j} \right|, & \text{otherwise} \end{cases} \qquad \lambda_1 = 1 \leq \lambda_2 \leq \lambda_3 \qquad (49)
WAV is the special case of WDV if \lambda_1 = \lambda_2 = \lambda_3 = 1. By default, \lambda_1 = 1, \lambda_2 = 1.5 and \lambda_3 = 12 are defined. And NI(WDV(A, \{W\})) is of the form:

NI(WDV(A, \{W\})) = \left\{ \frac{WDV_1^{-1}}{\sum WDV^{-1}(A,\{W\})}, \ldots, \frac{WDV_K^{-1}}{\sum WDV^{-1}(A,\{W\})} \right\} \qquad (50)
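A sketch of Eq. (49) under the stated defaults (λ1 = 1, λ2 = 1.5, λ3 = 12); the branch structure mirrors the corrected MMV indicator, and the function name is ours:

```python
def wdv(A, w, lam=(1.0, 1.5, 12.0)):
    """Weighted distance variance (Eq. (49)); lam holds (lambda1, lambda2,
    lambda3) with lambda1 = 1 <= lambda2 <= lambda3 (defaults follow the text)."""
    l1, l2, l3 = lam
    n = len(w)
    total = 0.0
    for i in range(n):
        for j in range(n):
            d = abs(A[i][j] - w[i] / w[j])
            if (w[i] > w[j] and A[i][j] > 1) or (w[i] < w[j] and A[i][j] < 1):
                total += l1 * d   # judgment direction agrees with the weights
            elif (w[i] == w[j] and A[i][j] != 1) or (w[i] != w[j] and A[i][j] == 1):
                total += l2 * d   # equality on one side only
            else:
                total += l3 * d   # outright rank reversal: heaviest penalty
    return total / n

# a consistent matrix with its exact priority vector gives WDV ~ 0,
# while flipping the off-diagonal judgments triggers the lambda3 branch
print(wdv([[1, 2], [0.5, 1]], [2/3, 1/3]))   # ~0
print(wdv([[1, 0.5], [2, 1]], [2/3, 1/3]))   # large, reversal penalized by 12
```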
How the most appropriate prioritization operator is selected on the basis of the above measurement priority vectors is discussed in the next section.
5. Applications

Two decision problems, which are also discussed in [21], are selected to illustrate the AHPP concept: (1) allocating the reservoir storage to multiple purposes, taken from [32], and (2) choosing a high school, taken from [3, pp. 26-28]. These two cases shown in [21] are worth reusing since this study also compares the results of the Multicriteria Prioritization Synthesis (MPS) [21] with those of the proposed AHPP. Example 1 illustrates the details of the calculation approach, and Example 2 illustrates the essentials of the approach. Various prevailing software packages such as Excel, Matlab, and Lindo can conveniently compute the models. This study uses Mathematica to perform the calculations.

For comparison, six prioritization operators were chosen in [21], and only the Euclidean distance and minimum violation criteria, with relative weights of 0.8 and 0.2 respectively, were considered there. That calculation of the measurement method is rather rough and subjective. In this article, seven measurement criteria and nine prioritization operators are used to form an objective AHPP model. The calculation of AHPP is relatively comprehensive and objective as it is based on the measurement priority matrix V_P̄. The details are discussed as follows.
5.1. Example 1

5.1.1. Definition process

The AHP problem is to allocate the surface water reservoir storage to multiple uses. A global economic goal is defined as the most profitable use of the reservoir. Six purposes, T̄ = {t1, t2, ..., t6}, are considered as decision alternatives. Five economic criteria of different metrics, C = {c1, c2, ..., c5}, are used. Details are shown in Appendix A.
In the AHPP definition process, a set of seven measurement criteria S = {s1, s2, ..., s7} is measured by a set of seven measurement functions, S̄ = {S̄1, ..., S̄7}, which are discussed in Section 4, and a set of nine prioritization operators P̄ = {p1, p2, ..., p9} is used to form an AHPP model. The function AMNC used in this paper is the same as AN used in [21]. The AHPP structure is shown in Fig. 2.
5.1.2. Evaluation and assessment, and measurement process

The details of the pairwise matrices and consistency ratios are shown in Appendix A. There is one 5×5 matrix A_C at the second level, and a set of five 6×6 matrices A_T̄ at the third level. They are denoted as A = {A_C, A_T̄} = {A1, A2, ..., A6}.

Fig. 2. The structure of AHPP.
5.1.3. Analytic hierarchy prioritization process

From Table 3, it can be concluded that different prioritization operators produce different values of priorities. Whilst many studies argue over which one is the best, AHPP addresses this problem and is used for selecting the best prioritization operator among several candidates by some measurement criteria for an individual reciprocal matrix. The detailed steps are shown as follows.

In the definition process, the AHPP structure is shown in Fig. 2.
Table 3
Priority vectors and synthesis results with nine prioritization operators (Example 1).
(Columns per panel: C1, C2, C3, C4, C5, W̄, rank.)

P1: EV                                              P2: NRS
C    0.358 0.306 0.041 0.171 0.123                  0.288 0.321 0.040 0.192 0.159
t1   0.440 0.365 0.409 0.416 0.046 0.363 6          0.389 0.344 0.364 0.359 0.048 0.314 6
t2   0.077 0.110 0.180 0.150 0.209 0.120 2          0.084 0.103 0.179 0.168 0.227 0.133 2
t3   0.286 0.078 0.032 0.036 0.187 0.157 5          0.322 0.076 0.031 0.032 0.168 0.151 4
t4   0.065 0.040 0.106 0.040 0.352 0.090 1          0.069 0.039 0.124 0.048 0.335 0.100 1
t5   0.078 0.211 0.153 0.125 0.162 0.140 4          0.086 0.227 0.192 0.163 0.176 0.164 5
t6   0.054 0.197 0.120 0.233 0.043 0.130 3          0.050 0.211 0.110 0.230 0.047 0.138 3

P3: NRCS                                            P4: AMNC
C    0.418 0.278 0.048 0.140 0.115                  0.352 0.300 0.043 0.172 0.133
t1   0.514 0.406 0.479 0.478 0.061 0.425 6          0.432 0.364 0.403 0.413 0.048 0.356 6
t2   0.066 0.104 0.152 0.130 0.155 0.100 2          0.082 0.108 0.178 0.151 0.223 0.125 2
t3   0.216 0.063 0.039 0.043 0.223 0.141 5          0.281 0.079 0.034 0.036 0.192 0.156 5
t4   0.060 0.043 0.091 0.039 0.329 0.085 1          0.066 0.040 0.107 0.043 0.325 0.091 1
t5   0.075 0.194 0.131 0.110 0.177 0.127 4          0.081 0.206 0.159 0.129 0.166 0.142 4
t6   0.070 0.190 0.108 0.200 0.055 0.122 3          0.057 0.203 0.119 0.228 0.045 0.131 3

P5: NGMR/LLS                                        P6: DLS
C    0.356 0.313 0.042 0.166 0.122                  0.276 0.345 0.047 0.166 0.166
t1   0.448 0.384 0.432 0.423 0.053 0.375 6          0.384 0.335 0.353 0.346 0.053 0.305 6
t2   0.073 0.108 0.167 0.147 0.212 0.117 2          0.063 0.098 0.182 0.170 0.204 0.122 2
t3   0.282 0.071 0.034 0.036 0.193 0.154 5          0.348 0.060 0.041 0.042 0.169 0.154 4
t4   0.063 0.041 0.106 0.039 0.326 0.086 1          0.057 0.043 0.087 0.040 0.334 0.097 1
t5   0.079 0.204 0.154 0.125 0.167 0.140 4          0.073 0.239 0.230 0.173 0.192 0.174 5
t6   0.056 0.192 0.108 0.231 0.049 0.129 3          0.075 0.224 0.107 0.229 0.047 0.149 3

P7: WLS                                             P8: FP
C    0.411 0.291 0.047 0.140 0.111                  0.391 0.283 0.065 0.152 0.109
t1   0.498 0.394 0.469 0.464 0.057 0.412 6          0.453 0.389 0.442 0.423 0.170 0.399 6
t2   0.060 0.099 0.160 0.131 0.126 0.093 2          0.121 0.117 0.184 0.165 0.188 0.138 5
t3   0.233 0.056 0.040 0.046 0.220 0.145 5          0.201 0.078 0.063 0.059 0.160 0.131 3
t4   0.057 0.043 0.087 0.039 0.369 0.087 1          0.050 0.067 0.095 0.045 0.279 0.082 1
t5   0.075 0.205 0.132 0.117 0.177 0.133 4          0.086 0.194 0.104 0.132 0.146 0.132 4
t6   0.077 0.203 0.112 0.202 0.051 0.130 3          0.088 0.156 0.112 0.176 0.056 0.119 2

P9: EGP                                             AHPP
C    0.356 0.313 0.042 0.166 0.122                  0.352 0.300 0.043 0.172 0.133
t1   0.426 0.397 0.465 0.436 0.053 0.375 6          0.432 0.384 0.432 0.413 0.053 0.364 6
t2   0.085 0.099 0.155 0.145 0.263 0.124 2          0.082 0.108 0.167 0.151 0.212 0.123 2
t3   0.273 0.066 0.031 0.045 0.158 0.146 5          0.281 0.071 0.034 0.036 0.193 0.154 5
t4   0.063 0.040 0.085 0.037 0.316 0.083 1          0.066 0.041 0.106 0.043 0.326 0.091 1
t5   0.091 0.199 0.171 0.112 0.158 0.140 4          0.081 0.204 0.154 0.129 0.167 0.141 4
t6   0.063 0.199 0.093 0.224 0.053 0.132 3          0.057 0.192 0.108 0.228 0.049 0.128 3
Table 4
Measurement variances (S̄(P̄)'s) of nine prioritization operators for six pairwise matrices (Example 1).
(Columns per panel: P1, P2, P3, P4, P5, P6, P7, P8, P9.)

     A1                                                        A2
s1   0.551 0.566 0.628 0.517 0.536 0.482 0.605 0.652 0.536    0.624 0.654 0.698 0.595 0.624 0.547 0.698 0.750 0.554
s2   0.992 0.854 1.113 0.917 0.963 0.733 1.102 1.089 0.962    1.043 1.112 1.216 1.035 1.020 0.789 1.156 1.338 1.026
s3   3.644 2.251 3.684 3.125 3.407 1.997 3.759 3.333 3.407    3.272 3.178 3.714 3.588 3.117 1.896 3.305 5.333 3.794
s4   0.104 0.141 0.128 0.107 0.104 0.153 0.124 0.164 0.104    0.112 0.130 0.143 0.115 0.111 0.151 0.153 0.257 0.121
s5   0.101 0.135 0.123 0.103 0.101 0.143 0.119 0.155 0.101    0.108 0.124 0.137 0.110 0.107 0.143 0.146 0.225 0.116
s6   0.040 0.120 0.040 0.040 0.040 0.120 0.040 0.040 0.040    0.167 0.167 0.222 0.111 0.167 0.278 0.278 0.167 0.139
s7   0.551 0.635 0.628 0.517 0.536 0.560 0.605 0.652 0.536    0.718 0.750 0.839 0.649 0.720 0.738 0.896 0.889 0.659

     A3                                                        A4
s1   0.491 0.500 0.464 0.477 0.455 0.445 0.451 0.604 0.425    0.638 0.599 0.645 0.611 0.578 0.574 0.610 0.686 0.568
s2   0.893 0.846 0.899 0.884 0.881 0.764 0.867 1.151 0.904    1.230 1.110 1.231 1.133 1.221 0.879 1.186 1.220 1.482
s3   3.295 3.000 2.925 3.385 3.146 2.711 2.653 4.667 3.000    5.786 4.600 5.128 4.963 5.823 2.210 4.768 5.333 8.000
s4   0.095 0.102 0.099 0.096 0.093 0.121 0.112 0.150 0.097    0.142 0.163 0.157 0.141 0.138 0.203 0.158 0.263 0.159
s5   0.091 0.097 0.093 0.092 0.089 0.110 0.102 0.142 0.091    0.134 0.150 0.148 0.132 0.129 0.184 0.149 0.239 0.145
s6   0.111 0.111 0.111 0.111 0.111 0.111 0.111 0.111 0.111    0.139 0.139 0.139 0.139 0.139 0.194 0.139 0.194 0.194
s7   0.491 0.500 0.464 0.477 0.455 0.445 0.451 0.604 0.425    0.752 0.747 0.757 0.723 0.688 0.787 0.728 0.853 0.765

     A5                                                        A6
s1   0.605 0.693 0.643 0.599 0.601 0.537 0.608 0.694 0.519    0.525 0.442 0.555 0.464 0.435 0.424 0.572 0.926 0.389
s2   1.075 1.088 1.264 1.018 1.104 0.817 1.176 1.106 1.104    0.843 0.775 0.885 0.798 0.774 0.747 0.888 1.457 0.831
s3   3.432 2.637 5.208 2.997 3.940 1.994 4.829 3.075 4.662    3.315 3.521 2.877 3.541 3.464 3.367 2.763 4.361 3.800
s4   0.083 0.117 0.103 0.084 0.082 0.115 0.104 0.146 0.095    0.122 0.126 0.149 0.121 0.117 0.126 0.186 0.439 0.144
s5   0.081 0.114 0.099 0.082 0.080 0.111 0.100 0.139 0.092    0.116 0.117 0.141 0.113 0.110 0.118 0.172 0.390 0.129
s6   0.028 0.028 0.083 0.028 0.028 0.139 0.083 0.083 0.083    0.111 0.167 0.222 0.111 0.111 0.167 0.222 0.222 0.111
s7   0.605 0.693 0.690 0.599 0.601 0.623 0.659 0.751 0.571    0.525 0.486 0.666 0.464 0.435 0.472 0.707 1.086 0.389
Table 5
The measurement priority matrices (V_P̄ = NI(S̄(P̄))) for six pairwise matrices (Example 1).
(Columns per panel: P1, P2, P3, P4, P5, P6, P7, P8, P9.)

     A1                                                        A2
s1   0.113 0.110 0.099 0.120 0.116 0.129 0.103 0.095 0.116    0.112 0.107 0.101 0.118 0.112 0.128 0.101 0.094 0.127
s2   0.107 0.124 0.095 0.116 0.110 0.145 0.096 0.097 0.110    0.113 0.106 0.097 0.114 0.116 0.149 0.102 0.088 0.115
s3   0.093 0.150 0.092 0.108 0.099 0.169 0.090 0.101 0.099    0.110 0.114 0.097 0.101 0.116 0.190 0.109 0.068 0.095
s4   0.130 0.096 0.106 0.127 0.130 0.089 0.110 0.083 0.130    0.134 0.116 0.105 0.131 0.135 0.100 0.098 0.058 0.124
s5   0.128 0.096 0.106 0.126 0.129 0.091 0.110 0.084 0.129    0.132 0.115 0.104 0.130 0.134 0.100 0.098 0.064 0.124
s6   0.130 0.043 0.130 0.130 0.130 0.043 0.130 0.130 0.130    0.116 0.116 0.087 0.173 0.116 0.069 0.069 0.116 0.139
s7   0.116 0.101 0.102 0.124 0.119 0.114 0.106 0.098 0.119    0.116 0.112 0.100 0.129 0.116 0.113 0.093 0.094 0.127

     A3                                                        A4
s1   0.107 0.106 0.114 0.111 0.116 0.119 0.117 0.087 0.124    0.106 0.113 0.105 0.111 0.117 0.118 0.111 0.099 0.119
s2   0.111 0.117 0.110 0.112 0.112 0.129 0.114 0.086 0.109    0.106 0.117 0.105 0.115 0.106 0.148 0.109 0.106 0.088
s3   0.105 0.116 0.118 0.102 0.110 0.128 0.131 0.074 0.116    0.089 0.112 0.101 0.104 0.089 0.234 0.108 0.097 0.065
s4   0.122 0.114 0.118 0.121 0.125 0.097 0.104 0.078 0.120    0.128 0.111 0.116 0.128 0.131 0.089 0.114 0.069 0.114
s5   0.120 0.113 0.118 0.120 0.124 0.100 0.107 0.077 0.120    0.126 0.112 0.114 0.127 0.130 0.091 0.113 0.070 0.116
s6   0.111 0.111 0.111 0.111 0.111 0.111 0.111 0.111 0.111    0.123 0.123 0.123 0.123 0.123 0.088 0.123 0.088 0.088
s7   0.107 0.106 0.114 0.111 0.116 0.119 0.117 0.087 0.124    0.111 0.112 0.111 0.116 0.122 0.106 0.115 0.098 0.109

     A5                                                        A6
s1   0.111 0.097 0.105 0.112 0.112 0.125 0.111 0.097 0.130    0.105 0.124 0.099 0.118 0.126 0.130 0.096 0.059 0.141
s2   0.111 0.109 0.094 0.117 0.108 0.145 0.101 0.107 0.108    0.113 0.123 0.108 0.119 0.123 0.127 0.107 0.065 0.115
s3   0.108 0.141 0.071 0.124 0.094 0.186 0.077 0.121 0.079    0.114 0.107 0.131 0.106 0.109 0.112 0.136 0.086 0.099
s4   0.135 0.095 0.108 0.131 0.135 0.096 0.107 0.076 0.117    0.132 0.128 0.108 0.133 0.137 0.128 0.087 0.037 0.112
s5   0.134 0.095 0.108 0.131 0.134 0.097 0.107 0.077 0.117    0.129 0.128 0.106 0.133 0.136 0.127 0.087 0.038 0.116
s6   0.181 0.181 0.060 0.181 0.181 0.036 0.060 0.060 0.060    0.146 0.098 0.073 0.146 0.146 0.098 0.073 0.073 0.146
s7   0.117 0.102 0.103 0.119 0.118 0.114 0.108 0.095 0.124    0.112 0.121 0.089 0.127 0.136 0.125 0.084 0.054 0.152
Table 6
Preference values of nine prioritization operators for A1 (Example 1).
(Left panel: V̄ = CN(V_P̄); right panel: I = ordering(V_P̄); columns P1-P9.)

s1   0.138 0.152 0.136 0.141 0.139 0.165 0.138 0.138 0.139    5 4 2 8 7 9 3 1 6
s2   0.131 0.172 0.130 0.136 0.132 0.185 0.129 0.141 0.132    4 8 1 7 5 9 2 3 6
s3   0.113 0.208 0.126 0.127 0.119 0.217 0.121 0.147 0.119    3 8 2 7 4 9 1 6 5
s4   0.159 0.133 0.145 0.149 0.156 0.114 0.147 0.120 0.156    7 3 4 6 9 2 5 1 8
s5   0.157 0.134 0.145 0.148 0.155 0.117 0.147 0.122 0.155    7 3 4 6 9 2 5 1 8
s6   0.160 0.060 0.179 0.153 0.156 0.056 0.175 0.189 0.156    3 1 3 3 3 1 3 3 3
s7   0.142 0.140 0.140 0.145 0.143 0.147 0.142 0.142 0.143    6 2 3 9 8 5 4 1 7
Φ    0.728 0.685 0.395 0.932 0.931 0.908 0.480 0.342 0.883    λ = 4, p*(A1) = p4(A1)
Table 7
Preference values of nine prioritization operators for A2 (Example 1).
(Left panel: V̄ = CN(V_P̄); right panel: I = ordering(V_P̄); columns P1-P9.)

s1   0.135 0.137 0.146 0.132 0.133 0.151 0.150 0.161 0.149    6 4 2 7 5 9 3 1 8
s2   0.136 0.135 0.140 0.127 0.137 0.176 0.152 0.152 0.135    5 4 2 6 8 9 3 1 7
s3   0.132 0.145 0.141 0.112 0.137 0.224 0.163 0.116 0.112    6 7 3 4 8 9 5 1 2
s4   0.160 0.147 0.152 0.146 0.160 0.117 0.146 0.100 0.145    8 5 4 7 9 3 2 1 6
s5   0.158 0.147 0.151 0.145 0.158 0.118 0.146 0.110 0.146    8 5 4 7 9 3 2 1 6
s6   0.139 0.147 0.126 0.194 0.137 0.082 0.103 0.199 0.163    4 4 3 9 4 1 1 4 8
s7   0.140 0.142 0.144 0.144 0.137 0.133 0.139 0.162 0.149    7 4 3 9 6 5 1 2 8
Φ    0.909 0.675 0.431 1.030 1.014 0.915 0.364 0.251 0.944    λ = 4, p*(A2) = p4(A2)
Table 8
Preference values of nine prioritization operators for A3 (Example 1).
(Left panel: V̄ = CN(V_P̄); right panel: I = ordering(V_P̄); columns P1-P9.)

s1   0.137 0.135 0.142 0.140 0.142 0.148 0.146 0.145 0.150    3 2 5 4 6 8 7 1 9
s2   0.141 0.149 0.137 0.142 0.138 0.161 0.142 0.143 0.133    4 8 3 5 6 9 7 1 2
s3   0.134 0.148 0.148 0.130 0.135 0.159 0.163 0.124 0.140    3 5 7 2 4 8 9 1 6
s4   0.156 0.146 0.147 0.154 0.154 0.120 0.130 0.129 0.146    8 4 5 7 9 2 3 1 6
s5   0.153 0.145 0.147 0.152 0.152 0.125 0.134 0.129 0.146    8 4 5 6 9 2 3 1 7
s6   0.142 0.142 0.138 0.141 0.136 0.139 0.139 0.185 0.135    1 1 1 1 1 1 1 1 1
s7   0.137 0.135 0.142 0.140 0.142 0.148 0.146 0.145 0.150    3 2 5 4 6 8 7 1 9
Φ    0.629 0.540 0.638 0.604 0.852 0.817 0.777 0.143 0.835    λ = 5, p*(A3) = p5(A3)
In the measurement and prioritization processes, the variances of P̄, which are measured by S̄(P̄) with respect to each pairwise matrix, are shown in Table 4. The relative weights of the seven measurement variances (or measurement priorities) of the nine prioritization operators for the six pairwise matrices are determined by NI(S̄(P̄)), i.e. by taking the function NI(·) of the measurement variances shown in Table 4; the resulting measurement priority matrices V_P̄'s of the six pairwise matrices are shown in Table 5.

In the aggregation and exploitation processes, a Mean Individual Interest Aggregation Operator (Eq. (13)) is used to aggregate the measurement priority matrices V_P̄'s of the six pairwise matrices (in Table 5); it returns the set of mean values of V̄ and I, where V̄ = CN(V_P̄) and I = ordering(V_P̄). The set of mean values is the set of preference values of the prioritization operators. The most appropriate operator is located at the argument of the maximum of the preference values, i.e. p*(·) = p_λ(·), λ = arg max_{k ∈ {1,2,...,K}} ({u1, ..., uk, ..., uK}). These results are shown in Tables 6-11. The highest preference values of the selected POs are bolded in Tables 6-11, and the values of the selected POs are bolded and underlined in Table 3.
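The column normalization and the final arg max step can be sketched in Python; the full Mean Individual Interest aggregation also uses the ordering matrix I, which is omitted here, so this is only a partial illustration with function names of our own:

```python
def cn(V):
    """Column normalization: rescale each operator's column of measurement
    priorities so that it sums to 1 (the V-bar = CN(V_P) step)."""
    Q, K = len(V), len(V[0])
    sums = [sum(V[q][k] for q in range(Q)) for k in range(K)]
    return [[V[q][k] / sums[k] for k in range(K)] for q in range(Q)]

def select(preferences):
    """Exploitation step: p* = p_lambda, lambda = arg max over the
    preference values u_1..u_K (0-based index returned here)."""
    return max(range(len(preferences)), key=lambda k: preferences[k])

# preference values of the nine operators for A1, as listed in Table 6
phi = [0.728, 0.685, 0.395, 0.932, 0.931, 0.908, 0.48, 0.342, 0.883]
print("selected operator: p%d" % (select(phi) + 1))  # p4 is selected for A1
```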
5.1.4. Synthesis process

The results of the synthesis process with the various prioritization operators are shown in Table 3. Unlike a single fixed prioritization operator, the AHPP method is used to select the best prioritization
Table 9
Preference values of nine prioritization operators for A4 (Example 1).
(Left panel: V̄ = CN(V_P̄); right panel: I = ordering(V_P̄); columns P1-P9.)

s1   0.135 0.141 0.136 0.135 0.143 0.135 0.140 0.157 0.171    3 6 2 4 7 8 5 1 9
s2   0.134 0.146 0.136 0.139 0.130 0.169 0.138 0.170 0.125    3 8 2 7 4 9 6 5 1
s3   0.113 0.140 0.130 0.127 0.109 0.268 0.137 0.155 0.093    3 8 5 6 2 9 7 4 1
s4   0.162 0.139 0.149 0.156 0.160 0.102 0.144 0.110 0.163    7 3 6 8 9 2 5 1 4
s5   0.160 0.140 0.147 0.154 0.159 0.105 0.142 0.112 0.166    7 3 5 8 9 2 4 1 6
s6   0.156 0.153 0.159 0.149 0.150 0.100 0.155 0.140 0.126    4 4 4 4 4 1 4 1 1
s7   0.141 0.140 0.143 0.140 0.149 0.122 0.145 0.156 0.157    5 6 4 8 9 2 7 1 3
Φ    0.675 0.776 0.576 0.925 0.936 0.824 0.772 0.306 0.571    λ = 5, p*(A4) = p5(A4)
Table 10
Preference values of prioritization operators for A5 (Example 1).
(Left panel: V̄ = CN(V_P̄); right panel: I = ordering(V_P̄); columns P1-P9.)

s1   0.124 0.118 0.161 0.123 0.127 0.157 0.165 0.153 0.176    5 2 3 7 6 8 4 1 9
s2   0.123 0.133 0.145 0.128 0.122 0.182 0.151 0.170 0.146    7 6 1 8 4 9 2 3 5
s3   0.121 0.171 0.110 0.135 0.107 0.232 0.114 0.190 0.108    5 8 1 7 4 9 2 6 3
s4   0.150 0.116 0.166 0.144 0.153 0.120 0.159 0.120 0.159    8 2 5 7 9 3 4 1 6
s5   0.149 0.115 0.167 0.143 0.152 0.121 0.160 0.122 0.160    8 2 5 7 9 3 4 1 6
s6   0.202 0.221 0.093 0.198 0.205 0.045 0.090 0.095 0.082    6 6 2 6 6 1 2 2 2
s7   0.131 0.125 0.158 0.130 0.134 0.142 0.161 0.149 0.169    6 2 3 8 7 5 4 1 9
Φ    0.925 0.635 0.438 1.009 0.942 0.923 0.470 0.341 0.891    λ = 4, p*(A5) = p4(A5)
Table 11
Preference values of nine prioritization operators for A6 (Example 1).
(Left panel: V̄ = CN(V_P̄); right panel: I = ordering(V_P̄); columns P1-P9.)

s1   0.123 0.150 0.139 0.134 0.138 0.153 0.144 0.144 0.161    4 6 3 5 7 8 2 1 9
s2   0.133 0.148 0.151 0.135 0.135 0.151 0.160 0.158 0.130    4 7 3 6 8 9 2 1 5
s3   0.133 0.129 0.183 0.120 0.119 0.132 0.203 0.209 0.113    7 4 8 3 5 6 9 1 2
s4   0.155 0.154 0.151 0.150 0.150 0.151 0.129 0.088 0.127    7 5 3 8 9 6 2 1 4
s5   0.152 0.154 0.149 0.150 0.149 0.150 0.130 0.093 0.132    7 6 3 8 9 5 2 1 4
s6   0.172 0.118 0.103 0.166 0.160 0.115 0.109 0.177 0.166    6 4 1 6 6 4 1 1 6
s7   0.132 0.146 0.124 0.144 0.149 0.148 0.125 0.132 0.172    4 5 3 7 8 6 2 1 9
Φ    0.809 0.765 0.530 0.893 1.069 0.911 0.474 0.143 0.843    λ = 5, p*(A6) = p5(A6)
Table 12
Comparison of AHPP and MPS [21] (Example 1).

              C1             C2             C3             C4             C5             Global priorities
              AHPP    MPS    AHPP    MPS    AHPP    MPS    AHPP    MPS    AHPP    MPS    AHPP        MPS
(AMNC,AMNC)   0.352   0.352  0.300   0.300  0.043   0.043  0.172   0.172  0.133   0.133
p*            AMNC    AMNC   LLS     LGP    LLS     AMNC   AMNC    AMNC   LLS     LLS
t1            0.432   0.432  0.384   0.391  0.432   0.403  0.413   0.413  0.053   0.053  0.3639(6)   0.3648(6)
t2            0.082   0.082  0.108   0.098  0.167   0.178  0.151   0.151  0.212   0.212  0.1228(2)   0.1201(2)
t3            0.281   0.281  0.071   0.065  0.034   0.034  0.036   0.036  0.193   0.193  0.1538(5)   0.1517(5)
t4            0.066   0.066  0.041   0.056  0.106   0.107  0.043   0.043  0.326   0.326  0.0909(1)   0.0954(1)
t5            0.081   0.081  0.204   0.195  0.154   0.159  0.129   0.129  0.167   0.167  0.1407(4)   0.1382(4)
t6            0.057   0.057  0.192   0.195  0.108   0.119  0.228   0.228  0.049   0.049  0.1279(3)   0.1294(3)

Italic form indicates the notation representing functions.
method among the candidates to prioritize the pairwise (reciprocal) matrix, since the best prioritization method is case dependent; here, the case means the content of the pairwise matrix.

A comparison between AHPP and MPS [21] for Example 1 is shown in Table 12. Although the best prioritization operators with respect to C2 and C3 are not the same, their ranks are the same, but the final utilities are different. Regarding the methodology, AHPP is regarded as superior to MPS for the following reasons:

1. AHPP uses more measurement criteria to evaluate the performance of each individual prioritization operator. Applying more related measurement criteria is more objective, or less biased, in reflecting the actual performance of the prioritization operator.
Table 13
Priority vectors and synthesis results with various prioritization operators (Example 2).
(Columns per panel: C1, C2, C3, C4, C5, C6, W̄, rank.)

P1: EV                                                    P2: NRS
C    0.321 0.140 0.035 0.128 0.237 0.139                  0.242 0.188 0.031 0.131 0.232 0.175
t1   0.157 0.333 0.455 0.772 0.250 0.691 0.367 2          0.151 0.333 0.455 0.695 0.250 0.657 0.378 3
t2   0.594 0.333 0.091 0.055 0.500 0.091 0.378 3          0.575 0.333 0.091 0.054 0.500 0.090 0.344 2
t3   0.249 0.333 0.455 0.173 0.250 0.218 0.254 1          0.274 0.333 0.455 0.251 0.250 0.254 0.279 1

P3: NRCS                                                  P4: AMNC
C    0.381 0.105 0.045 0.131 0.211 0.127                  0.305 0.149 0.038 0.141 0.221 0.146
t1   0.169 0.333 0.455 0.809 0.250 0.711 0.369 2          0.159 0.333 0.455 0.750 0.250 0.685 0.377 3
t2   0.607 0.333 0.091 0.068 0.500 0.101 0.397 3          0.589 0.333 0.091 0.060 0.500 0.093 0.365 2
t3   0.225 0.333 0.455 0.124 0.250 0.189 0.234 1          0.252 0.333 0.455 0.190 0.250 0.221 0.258 1

P5: NGMR/LLS                                              P6: DLS
C    0.316 0.139 0.036 0.125 0.236 0.148                  0.184 0.220 0.037 0.150 0.210 0.197
t1   0.157 0.333 0.455 0.772 0.250 0.691 0.370 2          0.178 0.333 0.455 0.788 0.250 0.687 0.430 3
t2   0.594 0.333 0.091 0.055 0.500 0.091 0.376 3          0.592 0.333 0.091 0.082 0.500 0.108 0.325 2
t3   0.249 0.333 0.455 0.173 0.250 0.218 0.254 1          0.230 0.333 0.455 0.130 0.250 0.205 0.245 1

P7: WLS                                                   P8: FP
C    0.415 0.094 0.035 0.112 0.219 0.125                  0.349 0.144 0.053 0.123 0.192 0.139
t1   0.174 0.333 0.455 0.804 0.250 0.707 0.353 2          0.159 0.333 0.455 0.796 0.250 0.703 0.371 2
t2   0.606 0.333 0.091 0.074 0.500 0.107 0.417 3          0.619 0.333 0.091 0.082 0.500 0.109 0.390 3
t3   0.221 0.333 0.455 0.122 0.250 0.187 0.230 1          0.222 0.333 0.455 0.122 0.250 0.188 0.239 1

P9: EGP                                                   AHPP
C    0.431 0.131 0.027 0.137 0.144 0.131                  0.305 0.149 0.038 0.141 0.221 0.146
t1   0.157 0.333 0.455 0.772 0.250 0.691 0.355 2          0.178 0.333 0.455 0.788 0.250 0.687 0.388 3
t2   0.594 0.333 0.091 0.055 0.500 0.091 0.393 3          0.592 0.333 0.091 0.082 0.500 0.108 0.371 2
t3   0.249 0.333 0.455 0.173 0.250 0.218 0.251 1          0.230 0.333 0.455 0.130 0.250 0.205 0.241 1
Table 14
Comparison of AHPP and MPS [21] (Example 2).
(For C2, C3 and C5 the pairwise matrices are consistent, so AHPP and MPS share the same values.)

          C1             C2       C3       C4             C5       C6             Global priorities
          AHPP    MPS    (both)   (both)   AHPP    MPS    (both)   AHPP    MPS    AHPP       MPS
(AN,AN)   0.305          0.149    0.038    0.141          0.221    0.146
p*        DLS     WLSM   Any      Any      DLS     FPM    Any      DLS     FPM
t1        0.178   0.174  0.333    0.455    0.788   0.796  0.250    0.687   0.703  0.388(3)   0.390(3)
t2        0.592   0.606  0.333    0.091    0.082   0.082  0.500    0.108   0.109  0.371(2)   0.376(2)
t3        0.230   0.221  0.333    0.455    0.130   0.122  0.250    0.205   0.188  0.241(1)   0.234(1)

Italic form indicates the notation representing functions.
2. AHPP uses objective relative weights for the measurement criteria, rather than the subjective weights used by MPS.
3. In AHPP, the prioritization operators are selected by the Mean Individual Interest based Aggregation Operator. This method cares about the individual interest of each prioritization operator, whereas MPS just applies a universal value with subjective judgment. If more criteria were considered in MPS, it would be difficult to determine the weights as coefficients of the linear equation.
4. The measurement criteria of MPS are very limited and less flexible. The minimum violations (MV) criterion [26] used by MPS [21] is ill-defined, and the correct form is discussed in Section 4.6; MPS used the ill-defined MV and likely produced incorrect results. The measurement criteria of AHPP are determined by analysts; this research introduces seven criteria for illustration.
5.2. Example 2

Example 2 is to select a high school [3, pp. 26-28]. The definitions of the six criteria, three alternatives and seven pairwise matrices are given in Appendix B. The reciprocal matrices include three consistent matrices, i.e. A3, A4, A6, and four inconsistent matrices, i.e. A1, A2, A5, A7. For the consistent matrices, any prioritization operator can be used, as the value of the priority vector is the same. For the inconsistent matrices, AHPP is applied.

The local priorities and global priorities with the nine prioritization operators are summarized in Table 13. It can be observed that different prioritization operators produce different priorities, which likely lead to different preference orders. This research defines that a higher score means higher preference. To address this problem, similarly to Example 1, AHPP is applied to select the best prioritization operator, tailor-made for each pairwise matrix. For the inconsistent matrices in Table 13, the values bolded and underlined are selected as the priority vectors in the columns W̄_{T,i} or in the row W_C.
The results of using AHPP are illustrated in Table 14. The AHPP solution is the combination of prioritization operators AMNC/AN (p4), DLS (p6), any, any, DLS (p6), any, DLS (p6), corresponding to the pairwise matrices A1, A2, ..., A7. The comparison of AHPP and MPS [21] is also shown in Table 14. Although the global priorities of the two methods are not the same, interestingly, both AHPP and MPS [21] produce the same preference orders, which are however different from the ones obtained by the EVM in [3].
6. Conclusions

A core issue in AHP is how to derive the priority vector from a reciprocal matrix filled by verbal judgments represented by numerical values from decision makers. Various studies handle this problem; however, different prioritization operators produce different values for a priority vector, as is also shown in the two applications. To address this issue, this paper proposes the AHPP model, which selects the most appropriate prioritization operator among a list of operator candidates according to a list of measurement criteria. In this study, nine prioritization operators are selected and seven measurement criteria are proposed for building the AHPP model. The Compound AHP (CAHP) is the AHP problem applying AHPP. In general, CAHP applies a combination of prioritization operators to prioritize all reciprocal matrices in the problem domain, rather than one single universal prioritization operator for all cases.

In the classical validation approach, many studies propose a large number of reciprocal matrices in an attempt to conclude that their proposed prioritization methods (or prioritization operators) are superior to others in general. However, the most appropriate prioritization operator is in fact determined on a case-by-case basis. That a prioritization operator is superior to others in general does not imply that every case should apply that single operator. In other words, it is not necessary to attempt to conclude which prioritization operator is the best in general (in the sense of a statistical conclusion) unless there is one prioritization operator that is best for all inconsistent reciprocal matrices.

Rather than testing many reciprocal matrices, the two applications in this study are sufficient to illustrate the usability and validity of AHPP. The illustrated applications show that the AHPP model does not favour any particular prioritization operator. For each reciprocal matrix, only the vector of the fittest prioritization operator selected by AHPP is regarded as the best priority vector for that matrix. (In other words, the best prioritization operator is tailor-made for an individual reciprocal matrix.) The best vectors of all matrices are propagated in the synthesis process.

If all pairwise reciprocal matrices of the AHP are perfectly consistent, the mixed prioritization operators approach such as AHPP or MPS is not recommended, since prioritization of a consistent matrix has a single solution regardless of the prioritization operator. If a significant number of matrices of the AHP are inconsistent such that 0 < CR ≤ 0.1, the AHPP approach is highly recommended, since both examples show that a pure aggregation operator method cannot guarantee the optimal result. The contribution of the AHPP is to provide guidelines for decision makers to compare the strengths and weaknesses of prioritization operators and to choose the most appropriate operator, especially under the inconsistent matrices of the AHP.
Acknowledgments

The author wishes to thank the Research Office of The Hong Kong Polytechnic University for support of this research project. Thanks are also extended to the editors and anonymous referees for their constructive feedback to improve this work.
Appendix A. Example 1

The data of this example are taken from [32]. CI and CR are added in this research.
Criteria                        Alternatives
c1: national income             t1: electric power generation
c2: foreign exchange            t2: irrigation
c3: balance of payment          t3: flood protection
c4: import substitution         t4: water supply
c5: regional gains              t5: tourism and recreation
                                t6: river traffic

Criteria (A1):
      C1    C2    C3    C4    C5
C1    1     2     5     3     2
C2          1     7     3     3
C3                1     1/4   1/5
C4                      1     3
C5                            1
CI = 0.1043, CR = 0.093

National income (A2):
      t1    t2    t3    t4    t5    t6
t1    1     5     3     6     7     5
t2          1     1/7   1/2   2     2
t3                1     7     3     4
t4                      1     1/2   1
t5                            1     2
t6                                  1
CI = 0.1122, CR = 0.091

Foreign exchange (A3):
      t1    t2    t3    t4    t5    t6
t1    1     4     6     7     2     2
t2          1     2     2     1     1/3
t3                1     2     1/6   1
t4                      1     1/5   1/7
t5                            1     1
t6                                  1
CI = 0.0954, CR = 0.0769

Balance of payment (A4):
      t1    t2    t3    t4    t5    t6
t1    1     3     7     6     3     4
t2          1     5     2     3     1/2
t3                1     1/4   1/7   1/3
t4                      1     1/2   2
t5                            1     2
t6                                  1
CI = 0.0499, CR = 0.040

Import substitution (A5):
      t1    t2    t3    t4    t5    t6
t1    1     3     9     7     4     3
t2          1     3     6     2     1/3
t3                1     1/2   1/4   1/5
t4                      1     1/6   1/6
t5                            1     1/2
t6                                  1
CI = 0.0213, CR = 0.017

Regional gains (A6):
      t1    t2    t3    t4    t5    t6
t1    1     1/5   1/3   1/6   1/3   1
t2          1     2     1/5   2     4
t3                1     1     2     3
t4                      1     1     7
t5                            1     5
t6                                  1
CI = 0.1769, CR = 0.143
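The CI and CR figures above can be reproduced from the principal eigenvalue of each matrix (the lower triangle is filled with reciprocals, as the matrices are reciprocal); a sketch using power iteration, with Saaty's random indices assumed:

```python
def principal_eigenvalue(A, iters=1000):
    """Approximate lambda_max of a positive reciprocal matrix by power iteration."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    return sum(Aw[i] / w[i] for i in range(n)) / n

# Saaty's random indices for n = 3..6 (assumed standard values)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def ci_cr(A):
    n = len(A)
    ci = (principal_eigenvalue(A) - n) / (n - 1)  # CI = (lambda_max - n)/(n - 1)
    return ci, ci / RI[n]

# Criteria matrix A1 of Example 1: upper triangle from the table above,
# lower triangle filled with reciprocals
A1 = [[1,   2,   5,   3,   2],
      [1/2, 1,   7,   3,   3],
      [1/5, 1/7, 1,   1/4, 1/5],
      [1/3, 1/3, 4,   1,   3],
      [1/2, 1/3, 5,   1/3, 1]]
ci, cr = ci_cr(A1)
# ci and cr come out close to the reported CI = 0.1043 and CR = 0.093
```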
Appendix B. Example 2

The data of this example are taken from [3, pp. 26-28].

Criteria                        Alternatives
C1: learning                    t1: school A
C2: friends                     t2: school B
C3: school life                 t3: school C
C4: vocational training
C5: college preparation
C6: music classes

Criteria (A1):
      C1    C2    C3    C4    C5    C6
C1    1     4     3     1     3     4
C2    1/4   1     7     3     1/5   1
C3    1/3   1/7   1     1     1     1/6
C4    1     1/3   5     1     1     1/3
C5    1/3   5     5     1     1     3
C6    1/4   1     6     3     1/3   1
CI = 0.3, CR = 0.24

Learning (A2):             Friends (A3):            School life (A4):
      t1    t2    t3             t1    t2    t3           t1    t2    t3
t1    1     1/3   1/2            1     1     1            1     5     1
t2    3     1     3              1     1     1            1/5   1     1/5
t3    2     1/3   1              1     1     1            1     5     1
CI = 0.025, CR = 0.04      CI = CR = 0              CI = CR = 0

Vocational training (A5):  College preparation (A6):  Music classes (A7):
      t1    t2    t3             t1    t2    t3           t1    t2    t3
t1    1     9     7              1     1/2   1            1     6     4
t2    1/9   1     1/5            2     1     2            1/6   1     1/3
t3    1/7   5     1              1     1/2   1            1/4   3     1
CI = 0.105, CR = 0.18      CI = CR = 0              CI = CR = 0.04
Appendix C. Notation summary
(D, E, Π, S, M)  Definition of Compound AHP
D  Definition process D = (D1, D2)
D1  The AHP problem definition process D1 = (O, C, T̃, Ω)
O  An objective
C  A set of criteria C = {c1, c2, ..., ci, ..., cn}
T̃  A set of alternatives T̃ = {t1, t2, ..., tj, ..., tm}
Ω  A rating scale schema Ω = {ω1, ω2, ..., ωi, ..., ωp}
D2  The AHPP definition process D2 = (S, S̃, P̃)
S  A set of measurement criteria S = {s1, s2, ..., sq, ..., sQ}
P̃  A set of prioritization operators P̃ = {p1, p2, ..., pk, ..., pK}
S̃q  A measurement function to measure P̃ under a measurement criterion sq, such that S̃q ∈ S̃ = {S̃1, ..., S̃Q}
E  The evaluation and assessment process E = (Φ(C), (Φ(c_i, T̃))_{i=1..n})
Φ()  An assessment function
A  A pairwise matrix which basically has two forms depending on the candidates, e.g. A_C = Φ(C), A_T,i = Φ(c_i, T̃)
Ā = [ā_ij]  An ideal consistent judgment matrix generated from W
W  The priority set (or normalized weight set), which generally has two forms, W_C or W_T,i, generated from A_C or A_T,i. In the AHPP, the ideal W is determined by W = p*(A)
M  Measurement process
CI  Consistency index
RI  Random index
CR  Consistency ratio
λ_max  A principal eigenvalue
Π  Analytic hierarchy prioritization process Π = (A, (D2, MP, Agg, EP), W)
MP  Measurement and prioritization process MP = (τ, V_P)
V_P  A measurement priority matrix V_P = (v_s1, v_s2, ..., v_sq, ..., v_sQ)^T = (v_qk)_{Q×K}
τ()  A transformation function
NI()  A normalization of inversion function
Agg  Aggregation process: a Maximum Individual Interest Aggregation Operator of the form Agg(V_P) = (V′, I) = (CN(V_P), ordering(V_P)) = {u1, ..., uk, ..., uK} = U
CN()  Column normalization function
ordering()  An ordering function which returns a set of rank positions in ascending order
I  The matrix from I = ordering(V_P) = (i′_qk)_{Q×K}
U  A set of preference values of prioritization operators U = {u1, ..., uk, ..., uK}
u*  The highest score of {u1, ..., uk, ..., uK} such that u* = Max(U)
EP  The exploitation process EP = (p*, f(P̃, U), W)
f(P̃, U)  The prioritization selection function which returns the best prioritization method p* in P̃ with respect to U = Agg(V_P)
argmax()  The argument of the maximum function
λ  A position returned by arg max_{j ∈ {1,2,...,m}}({u1, ..., uq, ..., uQ})
p*  The best PO determined by p* = f(P̃, U)
S  The synthesis process S = ((W_C, W_T), t*, f), where W_T = {W_T,i}
t*  The best alternative determined by t* = f(W_C, W_T) = β(W̃) = β(sdp(W_C, W_T))
f()  A PO selection function
β(W̃)  β(W̃) = t_λ, where λ = arg max_{j ∈ {1,2,...,m}}({w1, w2, ..., wj, ..., wm})
Sagg A synthesis aggregation function
sdp A Scalar Dot Product function
EV Eigenvector function
NRS Normalization of the row sum function
NRCS Normalization of reciprocals of column sum function
AMNC Arithmetic mean of normalized columns function
NGMR Normalization of geometric means of rows function
DLS/WLS Direct least squares/weighted least squares
LLS Logarithm least squares
FP Fuzzy programming
EGP Enhanced goal programming
MAV Mean absolute variance
RMSV Root mean square variance
WAV Worst absolute variance
CV Consistency variance
GCV Geometric consistency variance
MMV Mean minimum violation
MV Minimum violation
WDV Weighted distance variance
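To illustrate how two of the prioritization operators abbreviated above can be compared on a single reciprocal matrix, the sketch below derives priority vectors with EV and NGMR for the Vocational training (A5) matrix of Appendix B and scores each with a simple root-mean-square deviation between a_ij and w_i/w_j. The deviation formula is an illustrative stand-in for an RMSV-style criterion; the paper's exact measurement-criteria formulas are defined in the main text, not in this appendix.

```python
import numpy as np

# Vocational training (A5) matrix from Appendix B (CR = 0.18, inconsistent).
A = np.array([[1,   9, 7  ],
              [1/9, 1, 1/5],
              [1/7, 5, 1  ]])

def ev(A):
    # EV operator: principal right eigenvector, normalized to sum to 1.
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

def ngmr(A):
    # NGMR operator: normalized geometric means of rows.
    g = A.prod(axis=1) ** (1 / A.shape[0])
    return g / g.sum()

def rms_deviation(A, w):
    # Illustrative root-mean-square deviation between the judgments a_ij
    # and the ratios w_i/w_j implied by a candidate priority vector.
    W = np.outer(w, 1 / w)
    return np.sqrt(((A - W) ** 2).mean())

for name, op in [("EV", ev), ("NGMR", ngmr)]:
    w = op(A)
    print(name, np.round(w, 3), round(rms_deviation(A, w), 3))
```

On a consistent matrix the two operators coincide; on an inconsistent one such as A5 their vectors (and deviation scores) differ slightly, which is exactly the situation the AHPP's operator-selection step is designed to arbitrate.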
References
[1] L.L. Thurstone, A law of comparative judgment, Psychological Review 34 (1927) 273–286.
[2] Y. Dong, Y. Xu, H. Li, M. Dai, A comparative study of the numerical scales and the prioritization methods in AHP, European Journal of Operational Research 186 (1) (2008) 229–242.
[3] T.L. Saaty, The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation, McGraw-Hill, New York, 1980.
[4] T.L. Saaty, How to make a decision: the analytic hierarchy process, European Journal of Operational Research 48 (1) (1990) 9–26.
[5] T.L. Saaty, Fundamentals of Decision Making, RWS Publications, 1994.
[6] T.L. Saaty, Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process, RWS Publications, Pittsburgh, 2000.
[7] T.L. Saaty, Theory and Applications of the Analytic Network Process: Decision Making with Benefits, Opportunities, Costs and Risks, RWS Publications, Pittsburgh, 2005.
[8] F. Zahedi, The analytic hierarchy process: a survey of the method and its applications, Interfaces 16 (4) (1986) 96–108.
[9] O.S. Vaidya, S. Kumar, Analytic hierarchy process: an overview of applications, European Journal of Operational Research 169 (1) (2006) 1–29.
[10] V. Belton, T. Gear, On a shortcoming of Saaty's method of analytic hierarchies, Omega 11 (3) (1983) 228–230.
[11] J.S. Dyer, Remarks on the analytic hierarchy process, Management Science 36 (3) (1990) 249–258.
[12] J.S. Dyer, A clarification of remarks on the analytic hierarchy process, Management Science 36 (3) (1990) 274–275.
[13] C.K. Murphy, Limits on the analytic hierarchy process from its consistency index, European Journal of Operational Research 65 (1) (1993) 138–139.
[14] J. Perez, Some comments on Saaty's AHP, Management Science 41 (6) (1995) 1091–1095.
[15] E.H. Forman, S.I. Gass, The analytic hierarchy process: an exposition, Operations Research 49 (4) (2001) 469–486.
[16] G. Crawford, C. Williams, A note on the analysis of subjective judgement matrices, Journal of Mathematical Psychology 29 (4) (1985) 387–405.
[17] L. Mikhailov, A fuzzy programming method for deriving priorities in the analytic hierarchy process, Journal of the Operational Research Society 51 (3) (2000) 341–349.
[18] E.U. Choo, W.C. Wedley, A common framework for deriving preference values from pairwise comparison matrices, Computers and Operations Research 31 (6) (2004) 893–908.
[19] C.-C. Lin, An enhanced goal programming method for generating priority vectors, Journal of the Operational Research Society 57 (12) (2006) 1491–1496.
[20] L. Mikhailov, M.G. Singh, Comparison analysis of methods for deriving priorities in the analytic hierarchy process, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 1999, pp. 1037–1042.
[21] B. Srdjevic, Combining different prioritization methods in the analytic hierarchy process synthesis, Computers and Operations Research 32 (7) (2005) 1897–1919.
[22] T.L. Saaty, L.G. Vargas, Comparison of eigenvalue, logarithmic least squares and least squares methods in estimating ratios, Mathematical Modelling 5 (5) (1984) 309–324.
[23] D.V. Budescu, R. Zwick, A. Rapoport, A comparison of the eigenvalue method and the geometric mean procedure for ratio scaling, Applied Psychological Measurement 10 (1) (1986) 69–78.
[24] F. Zahedi, A simulation study of estimation methods in analytic hierarchy process, Socio-Economic Planning Sciences 20 (6) (1986) 347–354.
[25] E. Blankmeyer, Approaches to consistency adjustment, Journal of Optimization Theory and Applications 54 (3) (1987) 479–488.
[26] B. Golany, M. Kress, A multicriteria evaluation of methods for obtaining weights from ratio-scale matrices, European Journal of Operational Research 69 (2) (1993) 210–220.
[27] F.A. Lootsma, A model for the relative importance of the criteria in the multiplicative AHP and SMART, European Journal of Operational Research 94 (3) (1996) 467–476.
[28] J. Barzilai, Deriving weights from pairwise comparison matrices, Journal of the Operational Research Society 48 (12) (1997) 1226–1232.
[29] N. Bryson, A goal programming method for generating priority vectors, Journal of the Operational Research Society 46 (5) (1995) 641–648.
[30] A.T.W. Chu, R.E. Kalaba, K. Spingarn, A comparison of two methods for determining the weights of belonging to fuzzy sets, Journal of Optimization Theory and Applications 27 (4) (1979) 531–541.
[31] J. Aguaron, J.M. Moreno-Jimenez, The geometric consistency index: approximated thresholds, European Journal of Operational Research 147 (1) (2003) 137–145.
[32] B. Srdjevic, Evaluation of storage reservoir purposes by means of multicriteria optimization models, Journal on Water Resources Vodoprivreda 195–200 (2002) 35–45.