where x_1 = inf S, x_2 = sup S, S = ∪_{i=1}^{m} F_i, and F_i = { x | μ_{F_i}(x) > 0 }, for i = 1, 2, …, m.
Define the optimistic utility U_M(F_i) and pessimistic utility U_G(F_i) of each fuzzy number as

U_M(F_i) = sup_x { μ_{F_i}(x) ∧ μ_M(x) },

and

U_G(F_i) = 1 − sup_x { μ_{F_i}(x) ∧ μ_G(x) },

for i = 1, 2, …, m, where ∧ denotes min.
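The two sup-min expressions above can be evaluated numerically by discretizing the support. The sketch below assumes a triangular fuzzy number and linear membership functions for the maximizing set M and minimizing set G on [x_1, x_2]; the specific parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical support [x1, x2] and a triangular fuzzy number F_i = (2, 4, 7).
x1, x2 = 0.0, 10.0
xs = np.linspace(x1, x2, 10001)

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

mu_F = tri(xs, 2.0, 4.0, 7.0)       # membership of the fuzzy number F_i
mu_M = (xs - x1) / (x2 - x1)        # linear maximizing set M
mu_G = (x2 - xs) / (x2 - x1)        # linear minimizing set G

# Optimistic utility: sup over x of min(mu_F, mu_M).
U_M = np.max(np.minimum(mu_F, mu_M))
# Pessimistic utility: 1 minus sup over x of min(mu_F, mu_G).
U_G = 1.0 - np.max(np.minimum(mu_F, mu_G))
```

For this example the discretized values agree with the closed-form expressions used in Eq. (3): U_M = 7/13 and U_G = 1/3.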
Define the ranking value U_T(F_i) of each fuzzy number as

U_T(F_i) = b·U_M(F_i) + (1 − b)·U_G(F_i), 0 ≤ b ≤ 1.
The value b is an index of rating attitude and reflects the decision maker's risk-bearing attitude. If b > 0.5, the decision maker is a risk lover; if b < 0.5, the decision maker is risk-averse. In this paper, b = 0.5 is applied, i.e., the decision maker's attitude toward risk is neutral. The ranking values U_T(F_i) can be approximately obtained by
U_T(F_i) = [ (Z_i − x_1) / ((x_2 − x_1) − (Q_i − Z_i)) + 1 − (x_2 − Y_i) / ((x_2 − x_1) + (Q_i − Y_i)) ] / 2    (3)

for i = 1, 2, …, m, where x_1 = min{Y_1, Y_2, …, Y_m}, x_2 = max{Z_1, Z_2, …, Z_m}, and F_i = (Y_i, Q_i, Z_i).
2.3 Markov chain
The Markovian property means that, for a random dynamic system, if the present state is known, then the past states have no influence on the future state. Dynamic programming determines the optimum solution to an n-variable problem by decomposing it into n stages, each stage comprising a single-variable subproblem. Markovian dynamic programming is a probabilistic process: it applies dynamic programming to a stochastic decision process that can be described by a finite number of states. The transition probabilities between the states are described by a Markov chain. The reward structure of the process is also described by a matrix whose individual elements represent the revenue (or cost) resulting from moving from one state to another. Both the transition and revenue matrices depend on the decision alternatives available to the decision maker. The objective of the problem is to determine the optimal policy that maximizes the expected revenue over a finite or infinite number of stages [14].
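The scheme described above can be sketched as backward induction over a small finite-stage problem: each decision alternative carries its own transition matrix and revenue matrix, and at every stage the alternative maximizing expected revenue-to-go is selected. All numbers below are hypothetical, for illustration only.

```python
import numpy as np

# P[k] and R[k]: transition and revenue matrices under alternative k
# (2 states, 2 alternatives; entries are hypothetical).
P = np.array([[[0.7, 0.3], [0.4, 0.6]],    # transitions, alternative 0
              [[0.9, 0.1], [0.5, 0.5]]])   # transitions, alternative 1
R = np.array([[[6.0, 4.0], [2.0, -1.0]],   # revenues, alternative 0
              [[4.0, 3.0], [1.0, 0.0]]])   # revenues, alternative 1

def optimal_policy(P, R, n_stages):
    """Maximize expected revenue over n_stages by backward induction."""
    n_states = P.shape[1]
    v = np.zeros(n_states)                 # terminal values
    policy = []
    for _ in range(n_stages):
        # q[k, i]: expected one-step revenue plus future value, alternative k
        q = np.einsum('kij,kij->ki', P, R) + P @ v
        policy.append(q.argmax(axis=0))    # best alternative in each state
        v = q.max(axis=0)                  # value function for this stage
    policy.reverse()                       # earliest stage first
    return v, policy
```

With one stage remaining, the expected one-step revenues here are 5.4 vs. 3.9 in state 0 and 0.2 vs. 0.5 in state 1, so the sketch picks alternative 0 in state 0 and alternative 1 in state 1.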
In this paper, the transition matrix of the Markov chain is used to predict lessees' credit state after the lease is granted. Some notations are defined as follows. Let

P^1: the transition-probability matrix before leasing
P^2: the transition-probability matrix after leasing
p^1_ij: before leasing, the probability that a corporation in the ith state on the first stage moves to the jth state on the second stage
p^2_ij: after leasing, the probability that a corporation in the ith state on the first stage moves to the jth state on the second stage
P^1 = [ p^1_11  p^1_12  p^1_13
        p^1_21  p^1_22  p^1_23
        p^1_31  p^1_32  p^1_33 ]

P^2 = [ p^2_11  p^2_12  p^2_13
        p^2_21  p^2_22  p^2_23
        p^2_31  p^2_32  p^2_33 ]
i = j = 1: the breakeven state is "good"
i = j = 2: the breakeven state is "fair"
i = j = 3: the breakeven state is "poor"
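Given the three credit states above, predicting a lessee's state distribution after the lease amounts to left-multiplying the current distribution by the after-leasing transition matrix P^2. The entries below are hypothetical; the paper estimates them from data.

```python
import numpy as np

# Hypothetical after-leasing transition matrix P2; row i gives the
# probabilities of moving from state i to states good/fair/poor.
P2 = np.array([[0.6, 0.3, 0.1],    # from "good"
               [0.2, 0.5, 0.3],    # from "fair"
               [0.1, 0.3, 0.6]])   # from "poor"

d0 = np.array([1.0, 0.0, 0.0])     # lessee currently in the "good" state
d1 = d0 @ P2                       # distribution on the next stage
d2 = d1 @ P2                       # distribution two stages ahead
```

Each row of P2 sums to 1, so every predicted distribution d1, d2, … remains a valid probability vector.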
R^1 = r^1_ij S_b r^2_ij S_a