
Uncertainty Theory

Third Edition

Baoding Liu
Uncertainty Theory Laboratory
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
liu@tsinghua.edu.cn
http://orsc.edu.cn/liu

3rd Edition © 2008 by UTLAB
Japanese Translation Version © 2008 by WAP
2nd Edition © 2007 by Springer-Verlag Berlin
1st Edition © 2004 by Springer-Verlag Berlin

Reference to this book should be made as follows:


Liu B, Uncertainty Theory, 3rd ed., http://orsc.edu.cn/liu/ut.pdf

Contents

Preface

1 Probability Theory
1.1 Probability Space
1.2 Random Variables
1.3 Probability Distribution
1.4 Independence
1.5 Identical Distribution
1.6 Expected Value
1.7 Variance
1.8 Moments
1.9 Critical Values
1.10 Entropy
1.11 Distance
1.12 Inequalities
1.13 Convergence Concepts
1.14 Conditional Probability
1.15 Stochastic Process
1.16 Stochastic Calculus
1.17 Stochastic Differential Equation

2 Credibility Theory
2.1 Credibility Space
2.2 Fuzzy Variables
2.3 Membership Function
2.4 Credibility Distribution
2.5 Independence
2.6 Identical Distribution
2.7 Expected Value
2.8 Variance
2.9 Moments
2.10 Critical Values
2.11 Entropy
2.12 Distance
2.13 Inequalities
2.14 Convergence Concepts
2.15 Conditional Credibility
2.16 Fuzzy Process
2.17 Fuzzy Calculus
2.18 Fuzzy Differential Equation

3 Chance Theory
3.1 Chance Space
3.2 Hybrid Variables
3.3 Chance Distribution
3.4 Expected Value
3.5 Variance
3.6 Moments
3.7 Independence
3.8 Identical Distribution
3.9 Critical Values
3.10 Entropy
3.11 Distance
3.12 Inequalities
3.13 Convergence Concepts
3.14 Conditional Chance
3.15 Hybrid Process
3.16 Hybrid Calculus
3.17 Hybrid Differential Equation

4 Uncertainty Theory
4.1 Uncertainty Space
4.2 Uncertain Variables
4.3 Identification Function
4.4 Uncertainty Distribution
4.5 Expected Value
4.6 Variance
4.7 Moments
4.8 Independence
4.9 Identical Distribution
4.10 Critical Values
4.11 Entropy
4.12 Distance
4.13 Inequalities
4.14 Convergence Concepts
4.15 Conditional Uncertainty
4.16 Uncertain Process
4.17 Uncertain Calculus
4.18 Uncertain Differential Equation

A Measurable Sets
B Classical Measures
C Measurable Functions
D Lebesgue Integral
E Euler-Lagrange Equation
F Maximum Uncertainty Principle
G Uncertainty Relations

Bibliography
List of Frequently Used Symbols
Index


Preface
There are various types of uncertainty in the real world. Randomness is a basic type of objective uncertainty, and probability theory is a branch of mathematics for studying the behavior of random phenomena. The study of probability theory was started by Pascal and Fermat (1654), and an axiomatic foundation of probability theory was given by Kolmogoroff (1933) in his Foundations of Probability Theory. Probability theory has been widely applied in science and engineering. Chapter 1 will present probability theory.
Fuzziness is a basic type of subjective uncertainty initiated by Zadeh (1965). Credibility theory is a branch of mathematics for studying the behavior of fuzzy phenomena. The study of credibility theory was started by Liu and Liu (2002), and an axiomatic foundation of credibility theory was given by Liu (2004) in his Uncertainty Theory. Chapter 2 will introduce credibility theory.
Sometimes fuzziness and randomness simultaneously appear in a system. A hybrid variable was proposed by Liu (2006) as a tool to describe quantities with both fuzziness and randomness. Fuzzy random variables and random fuzzy variables are both instances of hybrid variables. In addition, Li and Liu (2007) introduced the concept of chance measure for hybrid events. After that, chance theory was developed steadily. Essentially, chance theory is a hybrid of probability theory and credibility theory. Chapter 3 will present chance theory.
In order to deal with general uncertainty, Liu (2007) founded an uncertainty theory in his Uncertainty Theory as a branch of mathematics based on the normality, monotonicity, self-duality, and countable subadditivity axioms. Probability theory, credibility theory and chance theory are three special cases of uncertainty theory. Chapter 4 is devoted to uncertainty theory.
For this new edition the entire text has been totally rewritten. More importantly, uncertain processes and uncertain calculus as well as uncertain differential equations have been added.
The book is suitable for mathematicians, researchers, engineers, designers, and students in the fields of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, and management science. The readers will learn the axiomatic approach of uncertainty theory, and find this work a stimulating and useful reference.


Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
March 5, 2008

Chapter 1

Probability Theory
Probability measure is essentially a set function (i.e., a function whose argument is a set) satisfying normality, nonnegativity and countable additivity
axioms. Probability theory is a branch of mathematics for studying the behavior of random phenomena. The emphasis in this chapter is mainly on
probability space, random variable, probability distribution, independence,
identical distribution, expected value, variance, moments, critical values, entropy, distance, convergence almost surely, convergence in probability, convergence in mean, convergence in distribution, conditional probability, stochastic
process, renewal process, Brownian motion, stochastic calculus, and stochastic differential equation. The main results in this chapter are well-known.
For this reason the credit references are not provided.

1.1 Probability Space

Let Ω be a nonempty set, and A a σ-algebra over Ω. If Ω is countable, usually A is the power set of Ω. If Ω is uncountable, for example Ω = [0, 1], usually A is the Borel algebra of Ω. Each element in A is called an event.
In order to present an axiomatic definition of probability, it is necessary to assign to each event A a number Pr{A} which indicates the probability that A will occur. In order to ensure that the number Pr{A} has certain mathematical properties which we intuitively expect a probability to have, the following three axioms must be satisfied:

Axiom 1. (Normality) Pr{Ω} = 1.

Axiom 2. (Nonnegativity) Pr{A} ≥ 0 for any event A.
Axiom 3. (Countable Additivity) For every countable sequence of mutually disjoint events {A_i}, we have

    Pr{ ⋃_{i=1}^∞ A_i } = Σ_{i=1}^∞ Pr{A_i}.    (1.1)

Definition 1.1 The set function Pr is called a probability measure if it satisfies the normality, nonnegativity, and countable additivity axioms.
Example 1.1: Let Ω = {ω_1, ω_2, ⋯}, and let A be the power set of Ω. Assume that p_1, p_2, ⋯ are nonnegative numbers such that p_1 + p_2 + ⋯ = 1. Define a set function on A as

    Pr{A} = Σ_{ω_i ∈ A} p_i,    A ∈ A.    (1.2)

Then Pr is a probability measure.


Example 1.2: Let = [0, 1] and let A be the Borel algebra over . If Pr is
the Lebesgue measure, then Pr is a probability measure.
Theorem 1.1 Let Ω be a nonempty set, A a σ-algebra over Ω, and Pr a probability measure. Then we have
(a) Pr{∅} = 0;
(b) Pr is self-dual, i.e., Pr{A} + Pr{Aᶜ} = 1 for any A ∈ A;
(c) Pr is increasing, i.e., Pr{A} ≤ Pr{B} whenever A ⊂ B.

Proof: (a) Since ∅ and Ω are disjoint events and ∅ ∪ Ω = Ω, we have Pr{∅} + Pr{Ω} = Pr{Ω}, which makes Pr{∅} = 0.
(b) Since A and Aᶜ are disjoint events and A ∪ Aᶜ = Ω, we have Pr{A} + Pr{Aᶜ} = Pr{Ω} = 1.
(c) Since A ⊂ B, we have B = A ∪ (B ∩ Aᶜ), where A and B ∩ Aᶜ are disjoint events. Therefore Pr{B} = Pr{A} + Pr{B ∩ Aᶜ} ≥ Pr{A}.
Probability Continuity Theorem
Theorem 1.2 (Probability Continuity Theorem) Let Ω be a nonempty set, A a σ-algebra over Ω, and Pr a probability measure. If A_1, A_2, ⋯ ∈ A and lim_{i→∞} A_i exists, then

    lim_{i→∞} Pr{A_i} = Pr{ lim_{i→∞} A_i }.    (1.3)

Proof: Step 1: Suppose {A_i} is an increasing sequence. Write A_i ↑ A and A_0 = ∅. Then {A_i \ A_{i−1}} is a sequence of disjoint events and

    ⋃_{i=1}^∞ (A_i \ A_{i−1}) = A,    ⋃_{i=1}^k (A_i \ A_{i−1}) = A_k

for k = 1, 2, ⋯. Thus we have

    Pr{A} = Pr{ ⋃_{i=1}^∞ (A_i \ A_{i−1}) } = Σ_{i=1}^∞ Pr{A_i \ A_{i−1}}
          = lim_{k→∞} Σ_{i=1}^k Pr{A_i \ A_{i−1}} = lim_{k→∞} Pr{ ⋃_{i=1}^k (A_i \ A_{i−1}) }
          = lim_{k→∞} Pr{A_k}.

Step 2: If {A_i} is a decreasing sequence, then the sequence {A_1 \ A_i} is clearly increasing. It follows that

    Pr{A_1} − Pr{A} = Pr{ lim_{i→∞} (A_1 \ A_i) } = lim_{i→∞} Pr{A_1 \ A_i}
                    = Pr{A_1} − lim_{i→∞} Pr{A_i}

which implies that Pr{A_i} → Pr{A}.

Step 3: If {A_i} is a sequence of events such that A_i → A, then for each k, we have

    ⋂_{i=k}^∞ A_i ⊂ A_k ⊂ ⋃_{i=k}^∞ A_i.

Since Pr is increasing, we have

    Pr{ ⋂_{i=k}^∞ A_i } ≤ Pr{A_k} ≤ Pr{ ⋃_{i=k}^∞ A_i }.

Note that

    ⋂_{i=k}^∞ A_i ↑ A,    ⋃_{i=k}^∞ A_i ↓ A.

It follows from Steps 1 and 2 that Pr{A_i} → Pr{A}.


Probability Space
Definition 1.2 Let Ω be a nonempty set, A a σ-algebra over Ω, and Pr a probability measure. Then the triplet (Ω, A, Pr) is called a probability space.

Example 1.3: Let Ω = {ω_1, ω_2, ⋯}, A the power set of Ω, and Pr a probability measure defined by (1.2). Then (Ω, A, Pr) is a probability space.

Example 1.4: Let Ω = [0, 1], A the Borel algebra over Ω, and Pr the Lebesgue measure. Then ([0, 1], A, Pr) is a probability space, sometimes called the Lebesgue unit interval. For many purposes it is sufficient to use it as the basic probability space.


Product Probability Space


Let (Ω_i, A_i, Pr_i), i = 1, 2, ⋯, n be probability spaces, and Ω = Ω_1 × Ω_2 × ⋯ × Ω_n, A = A_1 × A_2 × ⋯ × A_n. Note that the probability measures Pr_i, i = 1, 2, ⋯, n are finite. It follows from the product measure theorem that there is a unique measure Pr on A such that

    Pr{A_1 × A_2 × ⋯ × A_n} = Pr_1{A_1} × Pr_2{A_2} × ⋯ × Pr_n{A_n}

for any A_i ∈ A_i, i = 1, 2, ⋯, n. This conclusion is called the product probability theorem. The measure Pr is also a probability measure since

    Pr{Ω} = Pr_1{Ω_1} × Pr_2{Ω_2} × ⋯ × Pr_n{Ω_n} = 1.

Such a probability measure is called the product probability measure, denoted by Pr = Pr_1 × Pr_2 × ⋯ × Pr_n.

Definition 1.3 Let (Ω_i, A_i, Pr_i), i = 1, 2, ⋯, n be probability spaces, and Ω = Ω_1 × Ω_2 × ⋯ × Ω_n, A = A_1 × A_2 × ⋯ × A_n, Pr = Pr_1 × Pr_2 × ⋯ × Pr_n. Then the triplet (Ω, A, Pr) is called the product probability space.
Infinite Product Probability Space
Let (Ω_i, A_i, Pr_i), i = 1, 2, ⋯ be an arbitrary sequence of probability spaces, and

    Ω = Ω_1 × Ω_2 × ⋯,    A = A_1 × A_2 × ⋯    (1.4)

It follows from the infinite product measure theorem that there is a unique probability measure Pr on A such that

    Pr{A_1 × ⋯ × A_n × Ω_{n+1} × Ω_{n+2} × ⋯} = Pr_1{A_1} × ⋯ × Pr_n{A_n}

for any measurable rectangle A_1 × ⋯ × A_n × Ω_{n+1} × Ω_{n+2} × ⋯ and all n = 1, 2, ⋯. The probability measure Pr is called the infinite product of Pr_i, i = 1, 2, ⋯ and is denoted by

    Pr = Pr_1 × Pr_2 × ⋯    (1.5)

Definition 1.4 Let (Ω_i, A_i, Pr_i), i = 1, 2, ⋯ be probability spaces, and Ω = Ω_1 × Ω_2 × ⋯, A = A_1 × A_2 × ⋯, Pr = Pr_1 × Pr_2 × ⋯. Then the triplet (Ω, A, Pr) is called the infinite product probability space.

1.2 Random Variables

Definition 1.5 A random variable is a measurable function ξ from a probability space (Ω, A, Pr) to the set of real numbers, i.e., for any Borel set B of real numbers, the set

    {ξ ∈ B} = {ω ∈ Ω | ξ(ω) ∈ B}    (1.6)

is an event.


Example 1.5: Take (Ω, A, Pr) to be {ω_1, ω_2} with Pr{ω_1} = Pr{ω_2} = 0.5. Then the function

    ξ(ω) = 0, if ω = ω_1;  1, if ω = ω_2

is a random variable.

Example 1.6: Take (Ω, A, Pr) to be the interval [0, 1] with Borel algebra and Lebesgue measure. We define ξ as an identity function from Ω to [0, 1]. Since ξ is a measurable function, it is a random variable.

Example 1.7: A deterministic number c may be regarded as a special random variable. In fact, it is the constant function ξ(ω) ≡ c on the probability space (Ω, A, Pr).
Definition 1.6 A random variable ξ is said to be
(a) nonnegative if Pr{ξ < 0} = 0;
(b) positive if Pr{ξ ≤ 0} = 0;
(c) continuous if Pr{ξ = x} = 0 for each x ∈ ℜ;
(d) simple if there exists a finite sequence {x_1, x_2, ⋯, x_m} such that

    Pr{ξ ≠ x_1, ξ ≠ x_2, ⋯, ξ ≠ x_m} = 0;    (1.7)

(e) discrete if there exists a countable sequence {x_1, x_2, ⋯} such that

    Pr{ξ ≠ x_1, ξ ≠ x_2, ⋯} = 0.    (1.8)

Definition 1.7 Let ξ_1 and ξ_2 be random variables defined on the probability space (Ω, A, Pr). We say ξ_1 = ξ_2 if ξ_1(ω) = ξ_2(ω) for almost all ω ∈ Ω.
Random Vector
Definition 1.8 An n-dimensional random vector is a measurable function ξ from a probability space (Ω, A, Pr) to the set of n-dimensional real vectors, i.e., for any Borel set B of ℜⁿ, the set

    {ξ ∈ B} = {ω ∈ Ω | ξ(ω) ∈ B}    (1.9)

is an event.
Theorem 1.3 The vector (ξ_1, ξ_2, ⋯, ξ_n) is a random vector if and only if ξ_1, ξ_2, ⋯, ξ_n are random variables.

Proof: Write ξ = (ξ_1, ξ_2, ⋯, ξ_n). Suppose that ξ is a random vector on the probability space (Ω, A, Pr). For any Borel set B of ℜ, the set B × ℜⁿ⁻¹ is also a Borel set of ℜⁿ. Thus we have

    {ω | ξ_1(ω) ∈ B} = {ω | ξ_1(ω) ∈ B, ξ_2(ω) ∈ ℜ, ⋯, ξ_n(ω) ∈ ℜ}
                     = {ω | ξ(ω) ∈ B × ℜⁿ⁻¹} ∈ A

which implies that ξ_1 is a random variable. A similar process may prove that ξ_2, ξ_3, ⋯, ξ_n are random variables.
Conversely, suppose that all ξ_1, ξ_2, ⋯, ξ_n are random variables on the probability space (Ω, A, Pr). We define

    B = { B ⊂ ℜⁿ | {ω | ξ(ω) ∈ B} ∈ A }.

The vector ξ = (ξ_1, ξ_2, ⋯, ξ_n) is proved to be a random vector if we can prove that B contains all Borel sets of ℜⁿ. First, the class B contains all open intervals of ℜⁿ because

    { ω | ξ(ω) ∈ ∏_{i=1}^n (a_i, b_i) } = ⋂_{i=1}^n { ω | ξ_i(ω) ∈ (a_i, b_i) } ∈ A.

Next, the class B is a σ-algebra of ℜⁿ because (i) we have ℜⁿ ∈ B since {ω | ξ(ω) ∈ ℜⁿ} = Ω ∈ A; (ii) if B ∈ B, then {ω | ξ(ω) ∈ B} ∈ A, and

    {ω | ξ(ω) ∈ Bᶜ} = {ω | ξ(ω) ∈ B}ᶜ ∈ A

which implies that Bᶜ ∈ B; (iii) if B_i ∈ B for i = 1, 2, ⋯, then {ω | ξ(ω) ∈ B_i} ∈ A and

    { ω | ξ(ω) ∈ ⋃_{i=1}^∞ B_i } = ⋃_{i=1}^∞ {ω | ξ(ω) ∈ B_i} ∈ A

which implies that ⋃_i B_i ∈ B. Since the smallest σ-algebra containing all open intervals of ℜⁿ is just the Borel algebra of ℜⁿ, the class B contains all Borel sets of ℜⁿ. The theorem is proved.
Random Arithmetic
In this subsection, we will suppose that all random variables are defined on a common probability space. Otherwise, we may embed them into the product probability space.
Definition 1.9 Let f: ℜⁿ → ℜ be a measurable function, and ξ_1, ξ_2, ⋯, ξ_n random variables defined on the probability space (Ω, A, Pr). Then ξ = f(ξ_1, ξ_2, ⋯, ξ_n) is a random variable defined by

    ξ(ω) = f(ξ_1(ω), ξ_2(ω), ⋯, ξ_n(ω)),    ∀ω ∈ Ω.    (1.10)

Example 1.8: Let ξ_1 and ξ_2 be random variables on the probability space (Ω, A, Pr). Then their sum is

    (ξ_1 + ξ_2)(ω) = ξ_1(ω) + ξ_2(ω),    ∀ω ∈ Ω

and their product is

    (ξ_1 × ξ_2)(ω) = ξ_1(ω) × ξ_2(ω),    ∀ω ∈ Ω.

The reader may wonder whether ξ = f(ξ_1, ξ_2, ⋯, ξ_n) defined by (1.10) is a random variable. The following theorem answers this question.
Theorem 1.4 Let ξ be an n-dimensional random vector, and f: ℜⁿ → ℜ a measurable function. Then f(ξ) is a random variable.

Proof: Assume that ξ is a random vector on the probability space (Ω, A, Pr). For any Borel set B of ℜ, since f is a measurable function, f⁻¹(B) is also a Borel set of ℜⁿ. Thus we have

    {ω ∈ Ω | f(ξ(ω)) ∈ B} = {ω ∈ Ω | ξ(ω) ∈ f⁻¹(B)} ∈ A

which implies that f(ξ) is a random variable.

1.3 Probability Distribution

Definition 1.10 The probability distribution Φ: ℜ → [0, 1] of a random variable ξ is defined by

    Φ(x) = Pr{ω ∈ Ω | ξ(ω) ≤ x}.    (1.11)

That is, Φ(x) is the probability that the random variable ξ takes a value less than or equal to x.
Example 1.9: Assume that the random variables ξ and η have the same probability distribution. One question is whether ξ = η or not. Generally speaking, it is not true. Take (Ω, A, Pr) to be {ω_1, ω_2} with Pr{ω_1} = Pr{ω_2} = 0.5. We now define two random variables as follows,

    ξ(ω) = −1, if ω = ω_1;  1, if ω = ω_2,    η(ω) = 1, if ω = ω_1;  −1, if ω = ω_2.

Then ξ and η have the same probability distribution,

    Φ(x) = 0, if x < −1;  0.5, if −1 ≤ x < 1;  1, if x ≥ 1.

However, it is clear that ξ ≠ η in the sense of Definition 1.7.
Theorem 1.5 (Sufficient and Necessary Condition for Probability Distribution) A function Φ: ℜ → [0, 1] is a probability distribution if and only if it is an increasing and right-continuous function with

    lim_{x→−∞} Φ(x) = 0;    lim_{x→+∞} Φ(x) = 1.    (1.12)

Proof: For any x, y ∈ ℜ with x < y, we have

    Φ(y) − Φ(x) = Pr{x < ξ ≤ y} ≥ 0.

Thus the probability distribution Φ is increasing. Next, let {ε_i} be a sequence of positive numbers such that ε_i → 0 as i → ∞. Then, for every i ≥ 1, we have

    Φ(x + ε_i) − Φ(x) = Pr{x < ξ ≤ x + ε_i}.

It follows from the probability continuity theorem that

    lim_{i→∞} Φ(x + ε_i) − Φ(x) = Pr{∅} = 0.

Hence Φ is a right-continuous function. Finally,

    lim_{x→−∞} Φ(x) = lim_{x→−∞} Pr{ξ ≤ x} = Pr{∅} = 0,
    lim_{x→+∞} Φ(x) = lim_{x→+∞} Pr{ξ ≤ x} = Pr{Ω} = 1.

Conversely, it is known there is a unique probability measure Pr on the Borel algebra over ℜ such that Pr{(−∞, x]} = Φ(x) for all x ∈ ℜ. Furthermore, it is easy to verify that the random variable defined by ξ(x) = x from the probability space (ℜ, A, Pr) to ℜ has the probability distribution Φ.
Remark 1.1: Theorem 1.5 states that the identity function is a universal
function for any probability distribution by defining an appropriate probability space. In fact, there is a universal probability space for any probability
distribution by defining an appropriate function.
Theorem 1.6 Let Φ be a probability distribution. Then there is a random variable on the Lebesgue unit interval ([0, 1], A, Pr) whose probability distribution is just Φ.

Proof: Define ξ(ω) = sup{x | Φ(x) ≤ ω} on the Lebesgue unit interval. Then ξ is a random variable whose probability distribution is just Φ because

    Pr{ξ ≤ y} = Pr{ ω | sup{x | Φ(x) ≤ ω} ≤ y } = Pr{ω | ω ≤ Φ(y)} = Φ(y)

for any y ∈ ℜ.
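Theorem 1.6 is the basis of inverse transform sampling. The following sketch (an illustration added here, assuming numpy; the seed and the exponential target are arbitrary choices) draws ω uniformly from the Lebesgue unit interval and applies ξ(ω) = sup{x | Φ(x) ≤ ω}, which reduces to Φ⁻¹(ω) when Φ is continuous and strictly increasing:

```python
import numpy as np

# Target: exponential distribution with mean beta = 2,
# Phi(x) = 1 - exp(-x / beta) for x >= 0.
beta = 2.0
rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility

omega = rng.uniform(0.0, 1.0, size=100_000)  # points of the Lebesgue unit interval
xi = -beta * np.log(1.0 - omega)             # xi(omega) = Phi^{-1}(omega)

# The empirical distribution of xi should match Phi at any test point.
for x in [0.5, 1.0, 3.0]:
    empirical = np.mean(xi <= x)
    exact = 1.0 - np.exp(-x / beta)
    print(f"x={x}: empirical {empirical:.4f} vs Phi(x) {exact:.4f}")
```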
Theorem 1.7 A random variable ξ with probability distribution Φ is
(a) nonnegative if and only if Φ(x) = 0 for all x < 0;
(b) positive if and only if Φ(x) = 0 for all x ≤ 0;
(c) simple if and only if Φ is a simple function;
(d) discrete if and only if Φ is a step function;
(e) continuous if and only if Φ is a continuous function.


Proof: The parts (a), (b), (c) and (d) follow immediately from the definition. Next we prove the part (e). If ξ is a continuous random variable, then Pr{ξ = x} = 0. It follows from the probability continuity theorem that

    lim_{y↑x} (Φ(x) − Φ(y)) = lim_{y↑x} Pr{y < ξ ≤ x} = Pr{ξ = x} = 0

which proves the left-continuity of Φ. Since a probability distribution is always right-continuous, Φ is continuous. Conversely, if Φ is continuous, then we immediately have Pr{ξ = x} = 0 for each x ∈ ℜ.
Definition 1.11 A continuous random variable is said to be (a) singular if
its probability distribution is a singular function; (b) absolutely continuous if
its probability distribution is an absolutely continuous function.
Probability Density Function
Definition 1.12 The probability density function φ: ℜ → [0, +∞) of a random variable ξ is a function such that

    Φ(x) = ∫_{−∞}^x φ(y) dy    (1.13)

holds for all x ∈ ℜ, where Φ is the probability distribution of the random variable ξ.

Let φ: ℜ → [0, +∞) be a measurable function such that ∫_{−∞}^{+∞} φ(x) dx = 1. Then φ is the probability density function of some random variable because Φ(x) = ∫_{−∞}^x φ(y) dy is an increasing and continuous function satisfying (1.12).
Theorem 1.8 Let ξ be a random variable whose probability density function φ exists. Then for any Borel set B of ℜ, we have

    Pr{ξ ∈ B} = ∫_B φ(y) dy.    (1.14)

Proof: Let C be the class of all subsets C of ℜ for which the relation

    Pr{ξ ∈ C} = ∫_C φ(y) dy    (1.15)

holds. We will show that C contains all Borel sets of ℜ. It follows from the probability continuity theorem and relation (1.15) that C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b, ∞) and ∅ since

    Pr{ξ ∈ (−∞, a]} = Φ(a) = ∫_{−∞}^a φ(y) dy,
    Pr{ξ ∈ (b, +∞)} = Φ(+∞) − Φ(b) = ∫_b^{+∞} φ(y) dy,
    Pr{ξ ∈ (a, b]} = Φ(b) − Φ(a) = ∫_a^b φ(y) dy,
    Pr{ξ ∈ ∅} = 0 = ∫_∅ φ(y) dy

where Φ is the probability distribution of ξ. Let F be the algebra consisting of all finite unions of disjoint sets of the form (−∞, a], (a, b], (b, ∞) and ∅. Note that for any disjoint sets C_1, C_2, ⋯, C_m of F and C = C_1 ∪ C_2 ∪ ⋯ ∪ C_m, we have

    Pr{ξ ∈ C} = Σ_{j=1}^m Pr{ξ ∈ C_j} = Σ_{j=1}^m ∫_{C_j} φ(y) dy = ∫_C φ(y) dy.

That is, C ∈ C. Hence we have F ⊂ C. Since the smallest σ-algebra containing F is just the Borel algebra of ℜ, the monotone class theorem implies that C contains all Borel sets of ℜ.
Some Special Distributions
Uniform Distribution: A random variable ξ has a uniform distribution if its probability density function is defined by

    φ(x) = 1/(b − a), if a ≤ x ≤ b;  0, otherwise    (1.16)

denoted by U(a, b), where a and b are given real numbers with a < b.
Exponential Distribution: A random variable ξ has an exponential distribution if its probability density function is defined by

    φ(x) = (1/β) exp(−x/β), if x ≥ 0;  0, if x < 0    (1.17)

denoted by EXP(β), where β is a positive number.
Normal Distribution: A random variable ξ has a normal distribution if its probability density function is defined by

    φ(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)),    x ∈ ℜ    (1.18)

denoted by N(μ, σ²), where μ and σ are real numbers.
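As a quick numerical sanity check (an illustrative sketch, not part of the original text; the parameter values are arbitrary), each of the densities (1.16)-(1.18) integrates to 1 over ℜ, and the numerically computed means agree with the values stated in Section 1.6:

```python
import numpy as np

def uniform_pdf(x, a, b):
    return np.where((a <= x) & (x <= b), 1.0 / (b - a), 0.0)

def exponential_pdf(x, beta):
    return np.where(x >= 0, np.exp(-x / beta) / beta, 0.0)

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-50, 50, 2_000_001)  # wide grid for trapezoidal quadrature
for name, phi in [("U(-1, 3)", uniform_pdf(x, -1.0, 3.0)),
                  ("EXP(2)", exponential_pdf(x, 2.0)),
                  ("N(1, 4)", normal_pdf(x, 1.0, 2.0))]:
    total = np.trapz(phi, x)     # should be 1
    mean = np.trapz(x * phi, x)  # (a+b)/2, beta, and mu respectively
    print(f"{name}: integral {total:.4f}, mean {mean:.4f}")
```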

Joint Probability Distribution

Definition 1.13 The joint probability distribution Φ: ℜ^m → [0, 1] of a random vector (ξ_1, ξ_2, ⋯, ξ_n) is defined by

    Φ(x_1, x_2, ⋯, x_n) = Pr{ω ∈ Ω | ξ_1(ω) ≤ x_1, ξ_2(ω) ≤ x_2, ⋯, ξ_n(ω) ≤ x_n}.

Definition 1.14 The joint probability density function φ: ℜⁿ → [0, +∞) of a random vector (ξ_1, ξ_2, ⋯, ξ_n) is a function such that

    Φ(x_1, x_2, ⋯, x_n) = ∫_{−∞}^{x_1} ∫_{−∞}^{x_2} ⋯ ∫_{−∞}^{x_n} φ(y_1, y_2, ⋯, y_n) dy_1 dy_2 ⋯ dy_n

holds for all (x_1, x_2, ⋯, x_n) ∈ ℜⁿ, where Φ is the probability distribution of the random vector (ξ_1, ξ_2, ⋯, ξ_n).

1.4 Independence

Definition 1.15 The random variables ξ_1, ξ_2, ⋯, ξ_m are said to be independent if

    Pr{ ⋂_{i=1}^m {ξ_i ∈ B_i} } = ∏_{i=1}^m Pr{ξ_i ∈ B_i}    (1.19)

for any Borel sets B_1, B_2, ⋯, B_m of ℜ.


Theorem 1.9 Let ξ_1, ξ_2, ⋯, ξ_m be independent random variables, and f_1, f_2, ⋯, f_m measurable functions. Then f_1(ξ_1), f_2(ξ_2), ⋯, f_m(ξ_m) are independent random variables.

Proof: For any Borel sets B_1, B_2, ⋯, B_m of ℜ, we have

    Pr{ ⋂_{i=1}^m {f_i(ξ_i) ∈ B_i} } = Pr{ ⋂_{i=1}^m {ξ_i ∈ f_i⁻¹(B_i)} }
      = ∏_{i=1}^m Pr{ξ_i ∈ f_i⁻¹(B_i)} = ∏_{i=1}^m Pr{f_i(ξ_i) ∈ B_i}.

Thus f_1(ξ_1), f_2(ξ_2), ⋯, f_m(ξ_m) are independent random variables.


Theorem 1.10 Let ξ_i be random variables with probability distributions Φ_i, i = 1, 2, ⋯, m, respectively, and Φ the probability distribution of the random vector (ξ_1, ξ_2, ⋯, ξ_m). Then ξ_1, ξ_2, ⋯, ξ_m are independent if and only if

    Φ(x_1, x_2, ⋯, x_m) = Φ_1(x_1) Φ_2(x_2) ⋯ Φ_m(x_m)    (1.20)

for all (x_1, x_2, ⋯, x_m) ∈ ℜ^m.

Proof: If ξ_1, ξ_2, ⋯, ξ_m are independent random variables, then we have

    Φ(x_1, x_2, ⋯, x_m) = Pr{ξ_1 ≤ x_1, ξ_2 ≤ x_2, ⋯, ξ_m ≤ x_m}
      = Pr{ξ_1 ≤ x_1} Pr{ξ_2 ≤ x_2} ⋯ Pr{ξ_m ≤ x_m}
      = Φ_1(x_1) Φ_2(x_2) ⋯ Φ_m(x_m)

for all (x_1, x_2, ⋯, x_m) ∈ ℜ^m.
Conversely, assume that (1.20) holds. Let x_2, x_3, ⋯, x_m be fixed real numbers, and C the class of all subsets C of ℜ for which the relation

    Pr{ξ_1 ∈ C, ξ_2 ≤ x_2, ⋯, ξ_m ≤ x_m} = Pr{ξ_1 ∈ C} ∏_{i=2}^m Pr{ξ_i ≤ x_i}    (1.21)

holds. We will show that C contains all Borel sets of ℜ. It follows from the probability continuity theorem and relation (1.21) that C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b, ∞) and ∅. Let F be the algebra consisting of all finite unions of disjoint sets of the form (−∞, a], (a, b], (b, ∞) and ∅. Note that for any disjoint sets C_1, C_2, ⋯, C_k of F and C = C_1 ∪ C_2 ∪ ⋯ ∪ C_k, we have

    Pr{ξ_1 ∈ C, ξ_2 ≤ x_2, ⋯, ξ_m ≤ x_m} = Σ_{j=1}^k Pr{ξ_1 ∈ C_j, ξ_2 ≤ x_2, ⋯, ξ_m ≤ x_m}
      = Pr{ξ_1 ∈ C} Pr{ξ_2 ≤ x_2} ⋯ Pr{ξ_m ≤ x_m}.

That is, C ∈ C. Hence we have F ⊂ C. Since the smallest σ-algebra containing F is just the Borel algebra of ℜ, the monotone class theorem implies that C contains all Borel sets of ℜ.
Applying the same reasoning to each ξ_i in turn, we obtain the independence of the random variables.
Theorem 1.11 Let ξ_i be random variables with probability density functions φ_i, i = 1, 2, ⋯, m, respectively, and φ the probability density function of the random vector (ξ_1, ξ_2, ⋯, ξ_m). Then ξ_1, ξ_2, ⋯, ξ_m are independent if and only if

    φ(x_1, x_2, ⋯, x_m) = φ_1(x_1) φ_2(x_2) ⋯ φ_m(x_m)    (1.22)

for almost all (x_1, x_2, ⋯, x_m) ∈ ℜ^m.
Proof: If φ(x_1, x_2, ⋯, x_m) = φ_1(x_1) φ_2(x_2) ⋯ φ_m(x_m) a.e., then we have

    Φ(x_1, x_2, ⋯, x_m) = ∫_{−∞}^{x_1} ∫_{−∞}^{x_2} ⋯ ∫_{−∞}^{x_m} φ(t_1, t_2, ⋯, t_m) dt_1 dt_2 ⋯ dt_m
      = ∫_{−∞}^{x_1} ∫_{−∞}^{x_2} ⋯ ∫_{−∞}^{x_m} φ_1(t_1) φ_2(t_2) ⋯ φ_m(t_m) dt_1 dt_2 ⋯ dt_m
      = ∫_{−∞}^{x_1} φ_1(t_1) dt_1 · ∫_{−∞}^{x_2} φ_2(t_2) dt_2 ⋯ ∫_{−∞}^{x_m} φ_m(t_m) dt_m
      = Φ_1(x_1) Φ_2(x_2) ⋯ Φ_m(x_m)

for all (x_1, x_2, ⋯, x_m) ∈ ℜ^m. Thus ξ_1, ξ_2, ⋯, ξ_m are independent. Conversely, if ξ_1, ξ_2, ⋯, ξ_m are independent, then for any (x_1, x_2, ⋯, x_m) ∈ ℜ^m, we have Φ(x_1, x_2, ⋯, x_m) = Φ_1(x_1) Φ_2(x_2) ⋯ Φ_m(x_m). Hence

    Φ(x_1, x_2, ⋯, x_m) = ∫_{−∞}^{x_1} ∫_{−∞}^{x_2} ⋯ ∫_{−∞}^{x_m} φ_1(t_1) φ_2(t_2) ⋯ φ_m(t_m) dt_1 dt_2 ⋯ dt_m

which implies that φ(x_1, x_2, ⋯, x_m) = φ_1(x_1) φ_2(x_2) ⋯ φ_m(x_m) a.e.


Example 1.10: Let ξ_1, ξ_2, ⋯, ξ_m be independent random variables with probability density functions φ_1, φ_2, ⋯, φ_m, respectively, and f: ℜ^m → ℜ a measurable function. Then for any Borel set B of real numbers, the probability Pr{f(ξ_1, ξ_2, ⋯, ξ_m) ∈ B} is

    ∫∫⋯∫_{f(x_1, x_2, ⋯, x_m) ∈ B} φ_1(x_1) φ_2(x_2) ⋯ φ_m(x_m) dx_1 dx_2 ⋯ dx_m.

1.5 Identical Distribution

Definition 1.16 The random variables ξ and η are said to be identically distributed if

    Pr{ξ ∈ B} = Pr{η ∈ B}    (1.23)

for any Borel set B of ℜ.
Theorem 1.12 The random variables ξ and η are identically distributed if and only if they have the same probability distribution.

Proof: Let Φ and Ψ be the probability distributions of ξ and η, respectively. If ξ and η are identically distributed random variables, then, for any x ∈ ℜ, we have

    Φ(x) = Pr{ξ ∈ (−∞, x]} = Pr{η ∈ (−∞, x]} = Ψ(x).

Thus ξ and η have the same probability distribution.
Conversely, assume that ξ and η have the same probability distribution. Let C be the class of all subsets C of ℜ for which the relation

    Pr{ξ ∈ C} = Pr{η ∈ C}    (1.24)

holds. We will show that C contains all Borel sets of ℜ. It follows from the probability continuity theorem and relation (1.24) that C is a monotone class. It is also clear that C contains all intervals of the form (−∞, a], (a, b], (b, ∞) and ∅ since ξ and η have the same probability distribution. Let F be the algebra consisting of all finite unions of disjoint sets of the form (−∞, a], (a, b], (b, ∞) and ∅. Note that for any disjoint sets C_1, C_2, ⋯, C_k of F and C = C_1 ∪ C_2 ∪ ⋯ ∪ C_k, we have

    Pr{ξ ∈ C} = Σ_{j=1}^k Pr{ξ ∈ C_j} = Σ_{j=1}^k Pr{η ∈ C_j} = Pr{η ∈ C}.

That is, C ∈ C. Hence we have F ⊂ C. Since the smallest σ-algebra containing F is just the Borel algebra of ℜ, the monotone class theorem implies that C contains all Borel sets of ℜ.
Theorem 1.13 Let ξ and η be two random variables whose probability density functions exist. Then ξ and η are identically distributed if and only if they have the same probability density function.

Proof: It follows from Theorem 1.12 that the random variables ξ and η are identically distributed if and only if they have the same probability distribution, if and only if they have the same probability density function.

1.6 Expected Value

Definition 1.17 Let ξ be a random variable. Then the expected value of ξ is defined by

    E[ξ] = ∫_0^{+∞} Pr{ξ ≥ r} dr − ∫_{−∞}^0 Pr{ξ ≤ r} dr    (1.25)

provided that at least one of the two integrals is finite.


Example 1.11: Let ξ be a uniformly distributed random variable U(a, b). If a ≥ 0, then Pr{ξ ≤ r} ≡ 0 when r ≤ 0, and

    Pr{ξ ≥ r} = 1, if r ≤ a;  (b − r)/(b − a), if a ≤ r ≤ b;  0, if r ≥ b,

    E[ξ] = ( ∫_0^a 1 dr + ∫_a^b (b − r)/(b − a) dr + ∫_b^{+∞} 0 dr ) − ∫_{−∞}^0 0 dr = (a + b)/2.

If b ≤ 0, then Pr{ξ ≥ r} ≡ 0 when r ≥ 0, and

    Pr{ξ ≤ r} = 1, if r ≥ b;  (r − a)/(b − a), if a ≤ r ≤ b;  0, if r ≤ a,

    E[ξ] = ∫_0^{+∞} 0 dr − ( ∫_{−∞}^a 0 dr + ∫_a^b (r − a)/(b − a) dr + ∫_b^0 1 dr ) = (a + b)/2.

If a < 0 < b, then

    Pr{ξ ≥ r} = (b − r)/(b − a), if 0 ≤ r ≤ b;  0, if r ≥ b,

    Pr{ξ ≤ r} = 0, if r ≤ a;  (r − a)/(b − a), if a ≤ r ≤ 0,

    E[ξ] = ( ∫_0^b (b − r)/(b − a) dr + ∫_b^{+∞} 0 dr ) − ( ∫_{−∞}^a 0 dr + ∫_a^0 (r − a)/(b − a) dr ) = (a + b)/2.

Thus we always have the expected value (a + b)/2.
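The case analysis above can be verified numerically. The sketch below (an added illustration, assuming numpy; the endpoints a = −1, b = 3 are an arbitrary choice with a < 0 < b) evaluates definition (1.25) by quadrature:

```python
import numpy as np

a, b = -1.0, 3.0  # a < 0 < b

def pr_ge(r):  # Pr{xi >= r} for xi ~ U(a, b), clipped to [0, 1]
    return np.clip((b - r) / (b - a), 0.0, 1.0)

def pr_le(r):  # Pr{xi <= r}
    return np.clip((r - a) / (b - a), 0.0, 1.0)

r_pos = np.linspace(0.0, b, 100_001)  # Pr{xi >= r} vanishes beyond b
r_neg = np.linspace(a, 0.0, 100_001)  # Pr{xi <= r} vanishes below a
expected = np.trapz(pr_ge(r_pos), r_pos) - np.trapz(pr_le(r_neg), r_neg)
print(expected, (a + b) / 2)          # both equal 1.0
```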


Example 1.12: Let ξ be an exponentially distributed random variable EXP(β). Then its expected value is β.

Example 1.13: Let ξ be a normally distributed random variable N(μ, σ²). Then its expected value is μ.
Example 1.14: Assume that ξ is a discrete random variable taking values x_i with probabilities p_i, i = 1, 2, ⋯, m, respectively. It follows from the definition of expected value operator that

    E[ξ] = Σ_{i=1}^m p_i x_i.

Theorem 1.14 Let ξ be a random variable whose probability density function φ exists. If the Lebesgue integral

    ∫_{−∞}^{+∞} x φ(x) dx

is finite, then we have

    E[ξ] = ∫_{−∞}^{+∞} x φ(x) dx.    (1.26)

Proof: It follows from Definition 1.17 and the Fubini theorem that

    E[ξ] = ∫_0^{+∞} Pr{ξ ≥ r} dr − ∫_{−∞}^0 Pr{ξ ≤ r} dr
         = ∫_0^{+∞} ( ∫_r^{+∞} φ(x) dx ) dr − ∫_{−∞}^0 ( ∫_{−∞}^r φ(x) dx ) dr
         = ∫_0^{+∞} ( ∫_0^x dr ) φ(x) dx − ∫_{−∞}^0 ( ∫_x^0 dr ) φ(x) dx
         = ∫_0^{+∞} x φ(x) dx + ∫_{−∞}^0 x φ(x) dx
         = ∫_{−∞}^{+∞} x φ(x) dx.

The theorem is proved.


Theorem 1.15 Let ξ be a random variable with probability distribution Φ. If the Lebesgue-Stieltjes integral

    ∫_{−∞}^{+∞} x dΦ(x)

is finite, then we have

    E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).    (1.27)

Proof: Since the Lebesgue-Stieltjes integral ∫_{−∞}^{+∞} x dΦ(x) is finite, we immediately have

    lim_{y→+∞} ∫_0^y x dΦ(x) = ∫_0^{+∞} x dΦ(x),    lim_{y→−∞} ∫_y^0 x dΦ(x) = ∫_{−∞}^0 x dΦ(x)

and

    lim_{y→+∞} ∫_y^{+∞} x dΦ(x) = 0,    lim_{y→−∞} ∫_{−∞}^y x dΦ(x) = 0.

It follows from

    ∫_y^{+∞} x dΦ(x) ≥ y ( lim_{z→+∞} Φ(z) − Φ(y) ) = y(1 − Φ(y)) ≥ 0,    if y > 0,

    ∫_{−∞}^y x dΦ(x) ≤ y ( Φ(y) − lim_{z→−∞} Φ(z) ) = yΦ(y) ≤ 0,    if y < 0

that

    lim_{y→+∞} y(1 − Φ(y)) = 0,    lim_{y→−∞} yΦ(y) = 0.

Let 0 = x_0 < x_1 < x_2 < ⋯ < x_n = y be a partition of [0, y]. Then we have

    Σ_{i=0}^{n−1} x_i (Φ(x_{i+1}) − Φ(x_i)) → ∫_0^y x dΦ(x)

and

    Σ_{i=0}^{n−1} (1 − Φ(x_{i+1}))(x_{i+1} − x_i) → ∫_0^y Pr{ξ ≥ r} dr

as max{|x_{i+1} − x_i| : i = 0, 1, ⋯, n − 1} → 0. Since

    Σ_{i=0}^{n−1} x_i (Φ(x_{i+1}) − Φ(x_i)) − Σ_{i=0}^{n−1} (1 − Φ(x_{i+1}))(x_{i+1} − x_i) = y(Φ(y) − 1) → 0

as y → +∞, this fact implies that

    ∫_0^{+∞} Pr{ξ ≥ r} dr = ∫_0^{+∞} x dΦ(x).

A similar way may prove that

    −∫_{−∞}^0 Pr{ξ ≤ r} dr = ∫_{−∞}^0 x dΦ(x).

Thus (1.27) is verified by the above two equations.


Linearity of Expected Value Operator
Theorem 1.16 Let ξ and η be random variables with finite expected values. Then for any numbers a and b, we have

    E[aξ + bη] = aE[ξ] + bE[η].    (1.28)

Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number b. When b ≥ 0, we have

    E[ξ + b] = ∫_0^∞ Pr{ξ + b ≥ r} dr − ∫_{−∞}^0 Pr{ξ + b ≤ r} dr
             = ∫_0^∞ Pr{ξ ≥ r − b} dr − ∫_{−∞}^0 Pr{ξ ≤ r − b} dr
             = E[ξ] + ∫_0^b (Pr{ξ ≥ r − b} + Pr{ξ < r − b}) dr
             = E[ξ] + b.

If b < 0, then we have

    E[ξ + b] = E[ξ] − ∫_b^0 (Pr{ξ ≥ r − b} + Pr{ξ < r − b}) dr = E[ξ] + b.

Step 2: We prove that E[aξ] = aE[ξ] for any real number a. If a = 0, then the equation E[aξ] = aE[ξ] holds trivially. If a > 0, we have

    E[aξ] = ∫_0^∞ Pr{aξ ≥ r} dr − ∫_{−∞}^0 Pr{aξ ≤ r} dr
          = ∫_0^∞ Pr{ξ ≥ r/a} dr − ∫_{−∞}^0 Pr{ξ ≤ r/a} dr
          = a ∫_0^∞ Pr{ξ ≥ r/a} d(r/a) − a ∫_{−∞}^0 Pr{ξ ≤ r/a} d(r/a)
          = aE[ξ].

If a < 0, we have

    E[aξ] = ∫_0^∞ Pr{ξ ≤ r/a} dr − ∫_{−∞}^0 Pr{ξ ≥ r/a} dr
          = a ( ∫_0^∞ Pr{ξ ≥ u} du − ∫_{−∞}^0 Pr{ξ ≤ u} du )    (u = r/a)
          = aE[ξ].

Step 3: We prove that E[ξ + η] = E[ξ] + E[η] when both ξ and η are nonnegative simple random variables taking values a_1, a_2, ⋯, a_m and b_1, b_2, ⋯, b_n, respectively. Then ξ + η is also a nonnegative simple random variable taking values a_i + b_j, i = 1, 2, ⋯, m, j = 1, 2, ⋯, n. Thus we have

    E[ξ + η] = Σ_{i=1}^m Σ_{j=1}^n (a_i + b_j) Pr{ξ = a_i, η = b_j}
             = Σ_{i=1}^m Σ_{j=1}^n a_i Pr{ξ = a_i, η = b_j} + Σ_{i=1}^m Σ_{j=1}^n b_j Pr{ξ = a_i, η = b_j}
             = Σ_{i=1}^m a_i Pr{ξ = a_i} + Σ_{j=1}^n b_j Pr{η = b_j}
             = E[ξ] + E[η].

Step 4: We prove that E[ξ + η] = E[ξ] + E[η] when both ξ and η are nonnegative random variables. For every i ≥ 1 and every ω ∈ Ω, we define

    ξ_i(ω) = (k − 1)/2^i, if (k − 1)/2^i ≤ ξ(ω) < k/2^i, k = 1, 2, ⋯, i·2^i;  i, if i ≤ ξ(ω),

    η_i(ω) = (k − 1)/2^i, if (k − 1)/2^i ≤ η(ω) < k/2^i, k = 1, 2, ⋯, i·2^i;  i, if i ≤ η(ω).

Then {ξ_i}, {η_i} and {ξ_i + η_i} are three sequences of nonnegative simple random variables such that ξ_i ↑ ξ, η_i ↑ η and ξ_i + η_i ↑ ξ + η as i → ∞. Note that the functions Pr{ξ_i > r}, Pr{η_i > r}, Pr{ξ_i + η_i > r}, i = 1, 2, ⋯ are also simple. It follows from the probability continuity theorem that

    Pr{ξ_i > r} ↑ Pr{ξ > r},    ∀r ≥ 0

as i → ∞. Since the expected value E[ξ] exists, we have

    E[ξ_i] = ∫_0^∞ Pr{ξ_i > r} dr → ∫_0^∞ Pr{ξ > r} dr = E[ξ]

as i → ∞. Similarly, we may prove that E[η_i] → E[η] and E[ξ_i + η_i] → E[ξ + η] as i → ∞. It follows from Step 3 that E[ξ + η] = E[ξ] + E[η].

Step 5: We prove that E[ξ + η] = E[ξ] + E[η] when ξ and η are arbitrary random variables. Define

    ξ_i(ω) = ξ(ω), if ξ(ω) ≥ −i;  −i, otherwise,    η_i(ω) = η(ω), if η(ω) ≥ −i;  −i, otherwise.

Since the expected values E[ξ] and E[η] are finite, we have

    lim_{i→∞} E[ξ_i] = E[ξ],    lim_{i→∞} E[η_i] = E[η],    lim_{i→∞} E[ξ_i + η_i] = E[ξ + η].

Note that (ξ_i + i) and (η_i + i) are nonnegative random variables. It follows from Steps 1 and 4 that

    E[ξ + η] = lim_{i→∞} E[ξ_i + η_i]
             = lim_{i→∞} (E[(ξ_i + i) + (η_i + i)] − 2i)
             = lim_{i→∞} (E[ξ_i + i] + E[η_i + i] − 2i)
             = lim_{i→∞} (E[ξ_i] + i + E[η_i] + i − 2i)
             = lim_{i→∞} E[ξ_i] + lim_{i→∞} E[η_i]
             = E[ξ] + E[η].

Step 6: The linearity E[aξ + bη] = aE[ξ] + bE[η] follows immediately from Steps 2 and 5. The theorem is proved.


Product of Independent Random Variables


Theorem 1.17 Let ξ and η be independent random variables with finite expected values. Then the expected value of ξη exists and

    E[ξη] = E[ξ]E[η].    (1.29)

Proof: Step 1: We first prove the case where both ξ and η are nonnegative simple random variables taking values a_1, a_2, ⋯, a_m and b_1, b_2, ⋯, b_n, respectively. Then ξη is also a nonnegative simple random variable taking values a_i b_j, i = 1, 2, ⋯, m, j = 1, 2, ⋯, n. It follows from the independence of ξ and η that

    E[ξη] = Σ_{i=1}^m Σ_{j=1}^n a_i b_j Pr{ξ = a_i, η = b_j}
          = Σ_{i=1}^m Σ_{j=1}^n a_i b_j Pr{ξ = a_i} Pr{η = b_j}
          = ( Σ_{i=1}^m a_i Pr{ξ = a_i} ) ( Σ_{j=1}^n b_j Pr{η = b_j} )
          = E[ξ]E[η].

Step 2: Next we prove the case where ξ and η are nonnegative random variables. For every i ≥ 1 and every ω ∈ Ω, we define

    ξ_i(ω) = (k − 1)/2^i, if (k − 1)/2^i ≤ ξ(ω) < k/2^i, k = 1, 2, ⋯, i·2^i;  i, if i ≤ ξ(ω),

    η_i(ω) = (k − 1)/2^i, if (k − 1)/2^i ≤ η(ω) < k/2^i, k = 1, 2, ⋯, i·2^i;  i, if i ≤ η(ω).

Then {ξ_i}, {η_i} and {ξ_i η_i} are three sequences of nonnegative simple random variables such that ξ_i ↑ ξ, η_i ↑ η and ξ_i η_i ↑ ξη as i → ∞. It follows from the independence of ξ and η that ξ_i and η_i are independent. Hence we have E[ξ_i η_i] = E[ξ_i]E[η_i] for i = 1, 2, ⋯. It follows from the probability continuity theorem that Pr{ξ_i > r}, i = 1, 2, ⋯ are simple functions such that

    Pr{ξ_i > r} ↑ Pr{ξ > r},    for all r ≥ 0

as i → ∞. Since the expected value E[ξ] exists, we have

    E[ξ_i] = ∫_0^{+∞} Pr{ξ_i > r} dr → ∫_0^{+∞} Pr{ξ > r} dr = E[ξ]

as i → ∞. Similarly, we may prove that E[η_i] → E[η] and E[ξ_i η_i] → E[ξη] as i → ∞. Therefore E[ξη] = E[ξ]E[η].

Step 3: Finally, if ξ and η are arbitrary independent random variables, then the nonnegative random variables ξ⁺ and η⁺ are independent, and so are ξ⁺ and η⁻, ξ⁻ and η⁺, ξ⁻ and η⁻. Thus we have

    E[ξ⁺η⁺] = E[ξ⁺]E[η⁺],    E[ξ⁺η⁻] = E[ξ⁺]E[η⁻],
    E[ξ⁻η⁺] = E[ξ⁻]E[η⁺],    E[ξ⁻η⁻] = E[ξ⁻]E[η⁻].

It follows that

    E[ξη] = E[(ξ⁺ − ξ⁻)(η⁺ − η⁻)]
          = E[ξ⁺η⁺] − E[ξ⁺η⁻] − E[ξ⁻η⁺] + E[ξ⁻η⁻]
          = E[ξ⁺]E[η⁺] − E[ξ⁺]E[η⁻] − E[ξ⁻]E[η⁺] + E[ξ⁻]E[η⁻]
          = (E[ξ⁺] − E[ξ⁻])(E[η⁺] − E[η⁻])
          = E[ξ]E[η]

which proves the theorem.
Expected Value of Function of Random Variable
Theorem 1.18 Let ξ be a random variable with probability distribution Φ, and f: ℜ → ℜ a measurable function. If the Lebesgue-Stieltjes integral

    ∫_{−∞}^{+∞} f(x) dΦ(x)

is finite, then we have

    E[f(ξ)] = ∫_{−∞}^{+∞} f(x) dΦ(x).    (1.30)

Proof: It follows from the definition of expected value operator that

    E[f(ξ)] = ∫_0^{+∞} Pr{f(ξ) ≥ r} dr − ∫_{−∞}^0 Pr{f(ξ) ≤ r} dr.    (1.31)

If f is a nonnegative simple measurable function, i.e.,

    f(x) = a_1, if x ∈ B_1;  a_2, if x ∈ B_2;  ⋯;  a_m, if x ∈ B_m

where B_1, B_2, ⋯, B_m are mutually disjoint Borel sets, then we have

    E[f(ξ)] = ∫_0^{+∞} Pr{f(ξ) ≥ r} dr = Σ_{i=1}^m a_i Pr{ξ ∈ B_i}
            = Σ_{i=1}^m a_i ∫_{B_i} dΦ(x) = ∫_{−∞}^{+∞} f(x) dΦ(x).

We next prove the case where f is a nonnegative measurable function. Let f_1, f_2, ⋯ be a sequence of nonnegative simple functions such that f_i ↑ f as i → ∞. We have proved that

    E[f_i(ξ)] = ∫_0^{+∞} Pr{f_i(ξ) ≥ r} dr = ∫_{−∞}^{+∞} f_i(x) dΦ(x).

In addition, it is also known that Pr{f_i(ξ) > r} ↑ Pr{f(ξ) > r} as i → ∞ for r ≥ 0. It follows from the monotone convergence theorem that

    E[f(ξ)] = ∫_0^{+∞} Pr{f(ξ) > r} dr
            = lim_{i→∞} ∫_0^{+∞} Pr{f_i(ξ) > r} dr
            = lim_{i→∞} ∫_{−∞}^{+∞} f_i(x) dΦ(x)
            = ∫_{−∞}^{+∞} f(x) dΦ(x).

Finally, if f is an arbitrary measurable function, then we have f = f⁺ − f⁻ and

    E[f(ξ)] = E[f⁺(ξ) − f⁻(ξ)] = E[f⁺(ξ)] − E[f⁻(ξ)]
            = ∫_{−∞}^{+∞} f⁺(x) dΦ(x) − ∫_{−∞}^{+∞} f⁻(x) dΦ(x)
            = ∫_{−∞}^{+∞} f(x) dΦ(x).

The theorem is proved.


Sum of a Random Number of Random Variables
Theorem 1.19 (Wald Identity) Assume that {ξ_i} is a sequence of iid random variables, and η is a positive random integer (i.e., a random variable taking positive integer values) that is independent of the sequence {ξ_i}. Then we have

    E[ Σ_{i=1}^η ξ_i ] = E[η]E[ξ_1].    (1.32)

Proof: Since η is independent of the sequence {ξ_i}, we have

    Pr{ Σ_{i=1}^η ξ_i ≥ r } = Σ_{k=1}^∞ Pr{η = k} Pr{ξ_1 + ξ_2 + ⋯ + ξ_k ≥ r}.

If ξ_i are nonnegative random variables, then we have

    E[ Σ_{i=1}^η ξ_i ] = ∫_0^{+∞} Pr{ Σ_{i=1}^η ξ_i ≥ r } dr
      = ∫_0^{+∞} Σ_{k=1}^∞ Pr{η = k} Pr{ξ_1 + ξ_2 + ⋯ + ξ_k ≥ r} dr
      = Σ_{k=1}^∞ Pr{η = k} ∫_0^{+∞} Pr{ξ_1 + ξ_2 + ⋯ + ξ_k ≥ r} dr
      = Σ_{k=1}^∞ Pr{η = k} (E[ξ_1] + E[ξ_2] + ⋯ + E[ξ_k])
      = Σ_{k=1}^∞ Pr{η = k} k E[ξ_1]    (by the iid hypothesis)
      = E[η]E[ξ_1].

If ξ_i are arbitrary random variables, then ξ_i = ξ_i⁺ − ξ_i⁻, and

    E[ Σ_{i=1}^η ξ_i ] = E[ Σ_{i=1}^η (ξ_i⁺ − ξ_i⁻) ] = E[ Σ_{i=1}^η ξ_i⁺ ] − E[ Σ_{i=1}^η ξ_i⁻ ]
      = E[η]E[ξ_1⁺] − E[η]E[ξ_1⁻] = E[η](E[ξ_1⁺] − E[ξ_1⁻])
      = E[η]E[ξ_1⁺ − ξ_1⁻] = E[η]E[ξ_1].

The theorem is thus proved.
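A Monte Carlo sketch of the Wald identity (1.32) follows; the particular choices (a geometric η and exponential ξ_i) are illustrative assumptions, and the gamma call exploits the fact that a sum of k iid EXP(β) variables has a gamma distribution with shape k:

```python
import numpy as np

rng = np.random.default_rng(1)
p, beta, n_trials = 0.25, 2.0, 200_000

eta = rng.geometric(p, size=n_trials)  # positive integers, E[eta] = 1/p
# sum_{i=1}^{eta} xi_i: a sum of eta iid EXP(beta) variables is Gamma(eta, beta)
sums = rng.gamma(shape=eta, scale=beta)

print(np.mean(sums))   # approximately E[eta] * E[xi_1]
print((1 / p) * beta)  # = 8.0
```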

1.7 Variance

Definition 1.18 Let ξ be a random variable with finite expected value e. Then the variance of ξ is defined by V[ξ] = E[(ξ − e)²].
The variance of a random variable provides a measure of the spread of the
distribution around its expected value. A small value of variance indicates
that the random variable is tightly concentrated around its expected value;
and a large value of variance indicates that the random variable has a wide
spread around its expected value.
Example 1.15: Let ξ be a uniformly distributed random variable U(a, b). Then its expected value is (a + b)/2 and variance is (b − a)²/12.

Example 1.16: Let ξ be an exponentially distributed random variable EXP(β). Then its expected value is β and variance is β².

Example 1.17: Let ξ be a normally distributed random variable N(μ, σ²). Then its expected value is μ and variance is σ².
Theorem 1.20 If ξ is a random variable whose variance exists, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that

    V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].
Theorem 1.21 Let ξ be a random variable with expected value e. Then V[ξ] = 0 if and only if Pr{ξ = e} = 1.

Proof: If V[ξ] = 0, then E[(ξ − e)²] = 0. Thus we have

    ∫_0^{+∞} Pr{(ξ − e)² ≥ r} dr = 0

which implies Pr{(ξ − e)² ≥ r} = 0 for any r > 0. Hence we have Pr{(ξ − e)² = 0} = 1, i.e., Pr{ξ = e} = 1.
Conversely, if Pr{ξ = e} = 1, then we have Pr{(ξ − e)² = 0} = 1 and Pr{(ξ − e)² ≥ r} = 0 for any r > 0. Thus

    V[ξ] = ∫_0^{+∞} Pr{(ξ − e)² ≥ r} dr = 0.
Theorem 1.22 If ξ_1, ξ_2, ⋯, ξ_n are independent random variables with finite expected values, then

    V[ξ_1 + ξ_2 + ⋯ + ξ_n] = V[ξ_1] + V[ξ_2] + ⋯ + V[ξ_n].    (1.33)

Proof: It follows from the definition of variance that

    V[ Σ_{i=1}^n ξ_i ] = E[ (ξ_1 + ξ_2 + ⋯ + ξ_n − E[ξ_1] − E[ξ_2] − ⋯ − E[ξ_n])² ]
      = Σ_{i=1}^n E[(ξ_i − E[ξ_i])²] + 2 Σ_{i=1}^{n−1} Σ_{j=i+1}^n E[(ξ_i − E[ξ_i])(ξ_j − E[ξ_j])].

Since ξ_1, ξ_2, ⋯, ξ_n are independent, E[(ξ_i − E[ξ_i])(ξ_j − E[ξ_j])] = 0 for all i, j with i ≠ j. Thus (1.33) holds.
Maximum Variance Theorem
Let ξ be a random variable that takes values in [a, b], but whose probability distribution is otherwise arbitrary. If its expected value is given, what is the possible maximum variance? The maximum variance theorem will answer this question, thus playing an important role in treating games against nature.

Theorem 1.23 (Edmundson-Madansky Inequality) Let f be a convex function on [a, b], and ξ a random variable that takes values in [a, b] and has expected value e. Then

    E[f(ξ)] ≤ ((b − e)/(b − a)) f(a) + ((e − a)/(b − a)) f(b).    (1.34)

Proof: For each ω ∈ Ω, we have a ≤ ξ(ω) ≤ b and

    ξ(ω) = ((b − ξ(ω))/(b − a)) a + ((ξ(ω) − a)/(b − a)) b.

It follows from the convexity of f that

    f(ξ(ω)) ≤ ((b − ξ(ω))/(b − a)) f(a) + ((ξ(ω) − a)/(b − a)) f(b).

Taking expected values on both sides, we obtain (1.34).


Theorem 1.24 (Maximum Variance Theorem) Let ξ be a random variable that takes values in [a, b] and has expected value e. Then

    V[ξ] ≤ (e − a)(b − e)    (1.35)

and equality holds if the random variable ξ is determined by

    Pr{ξ = x} = (b − e)/(b − a), if x = a;  (e − a)/(b − a), if x = b.    (1.36)

Proof: It follows from Theorem 1.23 immediately by defining f(x) = (x − e)². It is also easy to verify that the random variable determined by (1.36) has variance (e − a)(b − e). The theorem is proved.
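The bound (1.35) and the extremal distribution (1.36) can be checked by simulation; in the sketch below (an added illustration with arbitrary a, b, e), a beta distribution plays the role of "some distribution on [a, b] with mean e":

```python
import numpy as np

a, b, e = 0.0, 1.0, 0.3
rng = np.random.default_rng(2)

# Any distribution on [a, b] with mean e: here Beta(2, 2(1-e)/e), mean e on [0, 1].
samples = rng.beta(2.0, 2.0 * (1 - e) / e, size=200_000)
print(np.var(samples), (e - a) * (b - e))  # variance stays below the bound

# The extremal two-point variable (1.36) attains the bound exactly.
x = rng.choice([a, b], p=[(b - e) / (b - a), (e - a) / (b - a)], size=200_000)
print(np.var(x))                           # approximately (e-a)*(b-e) = 0.21
```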

1.8 Moments

Definition 1.19 Let ξ be a random variable, and k a positive number. Then
(a) the expected value E[ξ^k] is called the kth moment;
(b) the expected value E[|ξ|^k] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])^k] is called the kth central moment;
(d) the expected value E[|ξ − E[ξ]|^k] is called the kth absolute central moment.

Note that the first central moment is always 0, the first moment is just the expected value, and the second central moment is just the variance.
Theorem 1.25 Let ξ be a nonnegative random variable, and k a positive number. Then the kth moment

    E[ξ^k] = k ∫_0^{+∞} r^{k−1} Pr{ξ ≥ r} dr.    (1.37)

Proof: It follows from the nonnegativity of ξ that

    E[ξ^k] = ∫_0^∞ Pr{ξ^k ≥ x} dx = ∫_0^∞ Pr{ξ ≥ r} d(r^k) = k ∫_0^∞ r^{k−1} Pr{ξ ≥ r} dr.

The theorem is proved.
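For instance, formula (1.37) can be checked numerically for ξ ~ U(0, 1) and k = 3, where E[ξ³] = 1/4 (a small quadrature sketch, assuming numpy):

```python
import numpy as np

k = 3
r = np.linspace(0.0, 1.0, 100_001)
pr_ge = 1.0 - r                                # Pr{xi >= r} for U(0, 1)
print(k * np.trapz(r ** (k - 1) * pr_ge, r))   # approximately 0.25
```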


Theorem 1.26 Let ξ be a random variable that takes values in [a, b] and has expected value e. Then for any positive integer k, the kth absolute moment and kth absolute central moment satisfy the following inequalities,

    E[|ξ|^k] ≤ ((b − e)/(b − a)) |a|^k + ((e − a)/(b − a)) |b|^k,    (1.38)

    E[|ξ − e|^k] ≤ ((b − e)/(b − a)) (e − a)^k + ((e − a)/(b − a)) (b − e)^k.    (1.39)

Proof: It follows from Theorem 1.23 immediately by defining f(x) = |x|^k and f(x) = |x − e|^k.

1.9 Critical Values

Let ξ be a random variable. In order to measure it, we may use its expected value. Alternately, we may employ the α-optimistic value and α-pessimistic value as a ranking measure.

Definition 1.20 Let ξ be a random variable, and α ∈ (0, 1]. Then

    ξ_sup(α) = sup{ r | Pr{ξ ≥ r} ≥ α }    (1.40)

is called the α-optimistic value of ξ, and

    ξ_inf(α) = inf{ r | Pr{ξ ≤ r} ≥ α }    (1.41)

is called the α-pessimistic value of ξ.

This means that the random variable ξ will reach upwards of the α-optimistic value ξ_sup(α) at least α of time, and will be below the α-pessimistic value ξ_inf(α) at least α of time. The optimistic value is also called the percentile.
Theorem 1.27 Let ξ be a random variable. Then we have

    Pr{ξ ≥ ξ_sup(α)} ≥ α,    Pr{ξ ≤ ξ_inf(α)} ≥ α    (1.42)

where ξ_sup(α) and ξ_inf(α) are the α-optimistic and α-pessimistic values of the random variable ξ, respectively.

Proof: It follows from the definition of the optimistic value that there exists an increasing sequence {r_i} such that Pr{ξ ≥ r_i} ≥ α and r_i ↑ ξ_sup(α) as i → ∞. Since {ω | ξ(ω) ≥ r_i} ↓ {ω | ξ(ω) ≥ ξ_sup(α)}, it follows from the probability continuity theorem that

    Pr{ξ ≥ ξ_sup(α)} = lim_{i→∞} Pr{ξ ≥ r_i} ≥ α.

The inequality Pr{ξ ≤ ξ_inf(α)} ≥ α may be proved similarly.


Example 1.18: Note that Pr{ξ ≥ ξ_sup(α)} > α and Pr{ξ ≤ ξ_inf(α)} > α may hold. For example,

    ξ = 0 with probability 0.4, and ξ = 1 with probability 0.6.

If α = 0.8, then ξ_sup(0.8) = 0, which makes Pr{ξ ≥ ξ_sup(0.8)} = 1 > 0.8. In addition, ξ_inf(0.8) = 1 and Pr{ξ ≤ ξ_inf(0.8)} = 1 > 0.8.
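For discrete random variables the critical values can be computed by direct search over the value set. The sketch below (an added, deliberately naive illustration) reproduces Example 1.18:

```python
import numpy as np

# Sketch of definitions (1.40)-(1.41) for a discrete variable; a grid
# search over the value set suffices because the sup/inf are attained there.
values = np.array([0.0, 1.0])
probs = np.array([0.4, 0.6])

def optimistic(alpha):   # sup { r : Pr{xi >= r} >= alpha }
    candidates = [r for r in values if probs[values >= r].sum() >= alpha]
    return max(candidates)

def pessimistic(alpha):  # inf { r : Pr{xi <= r} >= alpha }
    candidates = [r for r in values if probs[values <= r].sum() >= alpha]
    return min(candidates)

print(optimistic(0.8), pessimistic(0.8))  # 0.0 and 1.0, as in Example 1.18
```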
Theorem 1.28 Let ξ be a random variable. Then we have
(a) ξ_inf(α) is an increasing and left-continuous function of α;
(b) ξ_sup(α) is a decreasing and left-continuous function of α.

Proof: (a) It is easy to prove that ξ_inf(α) is an increasing function of α. Next, we prove the left-continuity of ξ_inf(α) with respect to α. Let {α_i} be an arbitrary sequence of positive numbers such that α_i ↑ α. Then {ξ_inf(α_i)} is an increasing sequence. If its limit is equal to ξ_inf(α), then the left-continuity is proved. Otherwise, there exists a number z* such that

    lim_{i→∞} ξ_inf(α_i) < z* < ξ_inf(α).

Thus Pr{ξ ≤ z*} ≥ α_i for each i. Letting i → ∞, we get Pr{ξ ≤ z*} ≥ α. Hence z* ≥ ξ_inf(α). A contradiction proves the left-continuity of ξ_inf(α) with respect to α. The part (b) may be proved similarly.
Theorem 1.29 Let ξ be a random variable. Then we have
(a) if α > 0.5, then ξ_inf(α) ≥ ξ_sup(α);
(b) if α ≤ 0.5, then ξ_inf(α) ≤ ξ_sup(α).

Proof: Part (a): Write ξ̄(α) = (ξ_inf(α) + ξ_sup(α))/2. If ξ_inf(α) < ξ_sup(α), then we have

    1 ≥ Pr{ξ < ξ̄(α)} + Pr{ξ > ξ̄(α)} ≥ α + α > 1.

A contradiction proves ξ_inf(α) ≥ ξ_sup(α). Part (b): Assume that ξ_inf(α) > ξ_sup(α). It follows from the definition of ξ_inf(α) that Pr{ξ ≤ ξ̄(α)} < α. Similarly, it follows from the definition of ξ_sup(α) that Pr{ξ ≥ ξ̄(α)} < α. Thus

    1 ≤ Pr{ξ ≤ ξ̄(α)} + Pr{ξ ≥ ξ̄(α)} < α + α ≤ 1.

A contradiction proves ξ_inf(α) ≤ ξ_sup(α). The theorem is proved.


Theorem 1.30 Let ξ be a random variable. Then we have
(a) if c ≥ 0, then (cξ)_sup(α) = c ξ_sup(α) and (cξ)_inf(α) = c ξ_inf(α);
(b) if c < 0, then (cξ)_sup(α) = c ξ_inf(α) and (cξ)_inf(α) = c ξ_sup(α).

Proof: (a) If c = 0, then it is obviously valid. When c > 0, we have

    (cξ)_sup(α) = sup{ r | Pr{cξ ≥ r} ≥ α } = c sup{ r/c | Pr{ξ ≥ r/c} ≥ α } = c ξ_sup(α).

A similar way may prove that (cξ)_inf(α) = c ξ_inf(α).
(b) In order to prove this part, it suffices to verify that (−ξ)_sup(α) = −ξ_inf(α) and (−ξ)_inf(α) = −ξ_sup(α). In fact, for any α ∈ (0, 1], we have

    (−ξ)_sup(α) = sup{ r | Pr{−ξ ≥ r} ≥ α } = −inf{ −r | Pr{ξ ≤ −r} ≥ α } = −ξ_inf(α).

Similarly, we may prove that (−ξ)_inf(α) = −ξ_sup(α). The theorem is proved.

1.10 Entropy

Given a random variable, what is the degree of difficulty of predicting the


specified value that the random variable will take? In order to answer this
question, Shannon [207] defined a concept of entropy as a measure of uncertainty.
Entropy of Discrete Random Variables
Definition 1.21 Let ξ be a discrete random variable taking values x_i with probabilities p_i, i = 1, 2, ⋯, respectively. Then its entropy is defined by

    H[ξ] = −Σ_{i=1}^∞ p_i ln p_i.    (1.43)

It should be noticed that the entropy depends only on the number of


values and their probabilities and does not depend on the actual values that
the random variable takes.
Theorem 1.31 Let ξ be a discrete random variable taking values x_i with probabilities p_i, i = 1, 2, ⋯, respectively. Then

    H[ξ] ≥ 0    (1.44)

and equality holds if and only if there exists an index k such that p_k = 1, i.e., ξ is essentially a deterministic number.

[Figure 1.1: The function S(t) = −t ln t is concave.]


Proof: The nonnegativity is clear. In addition, H[ξ] = 0 if and only if p_i = 0 or 1 for each i. That is, there exists one and only one index k such that p_k = 1. The theorem is proved.
This theorem states that the entropy of a discrete random variable reaches its minimum 0 when the random variable degenerates to a deterministic number. In this case, there is no uncertainty.
Theorem 1.32 Let ξ be a simple random variable taking values x_i with probabilities p_i, i = 1, 2, ⋯, n, respectively. Then

    H[ξ] ≤ ln n    (1.45)

and equality holds if and only if p_i ≡ 1/n for all i = 1, 2, ⋯, n.


Proof: Since the function S(t) = −t ln t is a concave function of t and p_1 + p_2 + ⋯ + p_n = 1, we have

    −Σ_{i=1}^n p_i ln p_i = Σ_{i=1}^n S(p_i) ≤ n S( (1/n) Σ_{i=1}^n p_i ) = n S(1/n) = ln n

which implies that H[ξ] ≤ ln n and equality holds if and only if p_1 = p_2 = ⋯ = p_n, i.e., p_i ≡ 1/n for all i = 1, 2, ⋯, n.
This theorem states that the entropy of a simple random variable reaches
its maximum ln n when all outcomes are equiprobable. In this case, there is
no preference among all the values that the random variable will take.
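Both boundary cases of Theorems 1.31 and 1.32 are easy to check numerically with a small helper (an added illustrative sketch; the convention 0 · ln 0 = 0 is adopted):

```python
import numpy as np

def entropy(p):
    """Entropy (1.43) of a discrete distribution; 0 * ln 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

print(entropy([1.0, 0.0, 0.0]))            # 0.0 -- deterministic (Theorem 1.31)
print(entropy([0.25, 0.25, 0.5]))          # strictly between 0 and ln 3
print(entropy([1/3, 1/3, 1/3]), np.log(3)) # maximum ln n (Theorem 1.32)
```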
Entropy of Absolutely Continuous Random Variables
Definition 1.22 Let ξ be a random variable with probability density function φ. Then its entropy is defined by

    H[ξ] = −∫_{−∞}^{+∞} φ(x) ln φ(x) dx.    (1.46)

Example 1.19: Let ξ be a uniformly distributed random variable on [a, b]. Then its entropy is H[ξ] = ln(b − a). This example shows that the entropy of an absolutely continuous random variable may assume both positive and negative values since ln(b − a) < 0 if b − a < 1, and ln(b − a) > 0 if b − a > 1.

Example 1.20: Let ξ be an exponentially distributed random variable with expected value β. Then its entropy is H[ξ] = 1 + ln β.

Example 1.21: Let ξ be a normally distributed random variable with expected value e and variance σ². Then its entropy is H[ξ] = 1/2 + ln(√(2π) σ).
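These closed forms can be confirmed by numerical quadrature; the sketch below (an added illustration, assuming numpy; β = 2 is arbitrary) checks the exponential case of Example 1.20:

```python
import numpy as np

# Entropy (1.46) of EXP(beta) by trapezoidal quadrature vs 1 + ln(beta).
x = np.linspace(1e-9, 200.0, 2_000_001)
beta = 2.0
phi = np.exp(-x / beta) / beta
print(-np.trapz(phi * np.log(phi), x), 1 + np.log(beta))  # both ~ 1.693
```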
Theorem 1.33 Let ξ be an absolutely continuous random variable. Then H[aξ + b] = H[ξ] + ln |a| for any real numbers a ≠ 0 and b.

Proof: It follows immediately from the definition, since the probability density function of aξ + b is φ((x − b)/a)/|a|, where φ is the probability density function of ξ. (Example 1.19 confirms this: scaling U(0, 1) by a > 0 gives entropy ln a = H[ξ] + ln a.)
Maximum Entropy Principle
Given some constraints, for example, expected value and variance, there are
usually multiple compatible probability distributions. For this case, we would
like to select the distribution that maximizes the value of entropy and satisfies
the prescribed constraints. This method is often referred to as the maximum
entropy principle (Jaynes [67]).
Example 1.22: Let ξ be an absolutely continuous random variable on [a, b]. The maximum entropy principle attempts to find the probability density function φ(x) that maximizes the entropy

    −∫_a^b φ(x) ln φ(x) dx

subject to the natural constraint ∫_a^b φ(x) dx = 1. The Lagrangian is

    L = −∫_a^b φ(x) ln φ(x) dx − λ ( ∫_a^b φ(x) dx − 1 ).

It follows from the Euler-Lagrange equation that the maximum entropy probability density function meets

    −ln φ*(x) − 1 − λ = 0

and has the form φ*(x) = exp(−1 − λ). Substituting it into the natural constraint, we get

    φ*(x) = 1/(b − a),    a ≤ x ≤ b

which is just the uniformly distributed random variable, and the maximum entropy is H[ξ*] = ln(b − a).

Example 1.23: Let ξ be an absolutely continuous random variable on [0, ∞). Assume that the expected value of ξ is prescribed to be β. The maximum entropy probability density function φ(x) should maximize the entropy

    −∫_0^{+∞} φ(x) ln φ(x) dx

subject to the constraints

    ∫_0^{+∞} φ(x) dx = 1,    ∫_0^{+∞} x φ(x) dx = β.

The Lagrangian is

    L = −∫_0^{+∞} φ(x) ln φ(x) dx − λ_1 ( ∫_0^{+∞} φ(x) dx − 1 ) − λ_2 ( ∫_0^{+∞} x φ(x) dx − β ).

The maximum entropy probability density function meets the Euler-Lagrange equation

    −ln φ*(x) − 1 − λ_1 − λ_2 x = 0

and has the form φ*(x) = exp(−1 − λ_1 − λ_2 x). Substituting it into the constraints, we get

    φ*(x) = (1/β) exp(−x/β),    x ≥ 0

which is just the exponentially distributed random variable, and the maximum entropy is H[ξ*] = 1 + ln β.
Example 1.24: Let ξ be an absolutely continuous random variable on (−∞, +∞). Assume that the expected value and variance of ξ are prescribed to be μ and σ², respectively. The maximum entropy probability density function φ(x) should maximize the entropy

    −∫_{−∞}^{+∞} φ(x) ln φ(x) dx

subject to the constraints

    ∫_{−∞}^{+∞} φ(x) dx = 1,    ∫_{−∞}^{+∞} x φ(x) dx = μ,    ∫_{−∞}^{+∞} (x − μ)² φ(x) dx = σ².

The Lagrangian is

    L = −∫_{−∞}^{+∞} φ(x) ln φ(x) dx − λ_1 ( ∫_{−∞}^{+∞} φ(x) dx − 1 )
        − λ_2 ( ∫_{−∞}^{+∞} x φ(x) dx − μ ) − λ_3 ( ∫_{−∞}^{+∞} (x − μ)² φ(x) dx − σ² ).

The maximum entropy probability density function meets the Euler-Lagrange equation

    −ln φ*(x) − 1 − λ_1 − λ_2 x − λ_3 (x − μ)² = 0

and has the form φ*(x) = exp(−1 − λ_1 − λ_2 x − λ_3 (x − μ)²). Substituting it into the constraints, we get

    φ*(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)),    x ∈ ℜ

which is just the normally distributed random variable, and the maximum entropy is H[ξ*] = 1/2 + ln(√(2π) σ).

1.11 Distance

Distance is a powerful concept in many disciplines of science and engineering. This section introduces the distance between random variables.

Definition 1.23 The distance between random variables ξ and η is defined as

    d(ξ, η) = E[|ξ − η|].    (1.47)
Theorem 1.34 Let ξ, η, τ be random variables, and let d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ d(ξ, τ) + d(τ, η).

Proof: The parts (a), (b) and (c) follow immediately from the definition. The part (d) is proved by the following relation,

    E[|ξ − η|] ≤ E[|ξ − τ| + |τ − η|] = E[|ξ − τ|] + E[|τ − η|].
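Since the distance is just an expected value, it is easily estimated by simulation. The following sketch (an added illustration with arbitrarily chosen, independently sampled distributions for ξ, η, τ) checks the triangle inequality of Theorem 1.34 empirically:

```python
import numpy as np

rng = np.random.default_rng(4)
xi = rng.normal(0.0, 1.0, 300_000)
eta = rng.uniform(-1.0, 1.0, 300_000)
tau = rng.exponential(1.0, 300_000)

d = lambda x, y: np.mean(np.abs(x - y))        # Monte Carlo estimate of E[|x - y|]
print(d(xi, eta) <= d(xi, tau) + d(tau, eta))  # True
```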

1.12 Inequalities

It is well known that there are several inequalities in probability theory, such as the Markov inequality, Chebyshev inequality, Hölder's inequality, Minkowski inequality, and Jensen's inequality. They play an important role in both theory and applications.
Theorem 1.35 Let ξ be a random variable, and f a nonnegative measurable function. If f is even (i.e., f(x) = f(−x) for any x ∈ ℜ) and increasing on [0, ∞), then for any given number t > 0, we have

    Pr{|ξ| ≥ t} ≤ E[f(ξ)] / f(t).    (1.48)
Proof: It is clear that Pr{|ξ| ≥ f⁻¹(r)} is a monotone decreasing function of r on [0, ∞). It follows from the nonnegativity of f(ξ) that

    E[f(ξ)] = ∫_0^{+∞} Pr{f(ξ) ≥ r} dr
            = ∫_0^{+∞} Pr{|ξ| ≥ f⁻¹(r)} dr
            ≥ ∫_0^{f(t)} Pr{|ξ| ≥ f⁻¹(r)} dr
            ≥ ∫_0^{f(t)} dr · Pr{|ξ| ≥ f⁻¹(f(t))}
            = f(t) Pr{|ξ| ≥ t}

which proves the inequality.
Theorem 1.36 (Markov Inequality) Let ξ be a random variable. Then for any given numbers t > 0 and p > 0, we have

    Pr{|ξ| ≥ t} ≤ E[|ξ|^p] / t^p.    (1.49)

Proof: It is a special case of Theorem 1.35 when f(x) = |x|^p.


Example 1.25: For any given positive number t, we define a random variable as follows,

    ξ = 0 with probability 1/2, and ξ = t with probability 1/2.

Then Pr{ξ ≥ t} = 1/2 = E[|ξ|^p]/t^p.
Theorem 1.37 (Chebyshev Inequality) Let ξ be a random variable whose variance V[ξ] exists. Then for any given number t > 0, we have

    Pr{|ξ − E[ξ]| ≥ t} ≤ V[ξ] / t².    (1.50)

Proof: It is a special case of Theorem 1.35 when the random variable ξ is replaced with ξ − E[ξ], and f(x) = x².
Example 1.26: For any given positive number t, we define a random variable as follows,

    ξ = −t with probability 1/2, and ξ = t with probability 1/2.

Then Pr{|ξ − E[ξ]| ≥ t} = 1 = V[ξ]/t².
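Both inequalities are easy to probe by simulation. The sketch below (an added illustration assuming numpy; EXP(2) and the threshold t = 5 are arbitrary choices) estimates the left-hand probabilities and compares them with the bounds (1.49) and (1.50):

```python
import numpy as np

rng = np.random.default_rng(3)
xi = rng.exponential(scale=2.0, size=500_000)  # EXP(2): mean 2, variance 4

t = 5.0
# Markov (1.49) with p = 1: Pr{|xi| >= t} <= E[|xi|] / t
print(np.mean(xi >= t), np.mean(xi) / t)
# Chebyshev (1.50): Pr{|xi - E[xi]| >= t} <= V[xi] / t^2
print(np.mean(np.abs(xi - xi.mean()) >= t), xi.var() / t**2)
```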

Theorem 1.38 (Hölder's Inequality) Let p and q be two positive real numbers with 1/p + 1/q = 1, and let ξ and η be random variables with E[|ξ|^p] < ∞ and E[|η|^q] < ∞. Then we have

    E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}.    (1.51)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function f(x, y) = x^{1/p} y^{1/q} is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x_0, y_0) with x_0 > 0 and y_0 > 0, there exist two real numbers a and b such that

    f(x, y) − f(x_0, y_0) ≤ a(x − x_0) + b(y − y_0),    ∀(x, y) ∈ D.

Letting x_0 = E[|ξ|^p], y_0 = E[|η|^q], x = |ξ|^p and y = |η|^q, we have

    f(|ξ|^p, |η|^q) − f(E[|ξ|^p], E[|η|^q]) ≤ a(|ξ|^p − E[|ξ|^p]) + b(|η|^q − E[|η|^q]).

Taking the expected values on both sides, we obtain

    E[f(|ξ|^p, |η|^q)] ≤ f(E[|ξ|^p], E[|η|^q]).

Hence the inequality (1.51) holds.
Theorem 1.39 (Minkowski Inequality) Let p be a real number with p 1,
and let and be random variables with E[||p ] < and E[||p ] < . Then
we have
p
p
p
p
E[| + |p ] p E[||p ] + p E[||p ].
(1.52)
Proof: The inequality holds trivially if at least one of and is zero a.s. Now
we assume
E[||p ] > 0 and E[||p ] > 0. It is easy to prove that the function

f (x, y) = ( p x + p y)p is a concave function on D = {(x, y) : x 0, y 0}.


Thus for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real
numbers a and b such that
f (x, y) f (x0 , y0 ) a(x x0 ) + b(y y0 ),

(x, y) D.

Letting x0 = E[||p ], y0 = E[||p ], x = ||p and y = ||p , we have


f (||p , ||p ) f (E[||p ], E[||p ]) a(||p E[||p ]) + b(||p E[||p ]).
Taking the expected values on both sides, we obtain
E[f (||p , ||p )] f (E[||p ], E[||p ]).
Hence the inequality (1.52) holds.

35

Section 1.13 - Convergence Concepts

Theorem 1.40 (Jensens Inequality) Let be a random variable, and f :


< < a convex function. If E[] and E[f ()] are finite, then
f (E[]) E[f ()].

(1.53)

Especially, when f (x) = |x|p and p 1, we have |E[]|p E[||p ].


Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) f (y) k (x y). Replacing x with and y with E[], we obtain
f () f (E[]) k ( E[]).
Taking the expected values on both sides, we have
E[f ()] f (E[]) k (E[] E[]) = 0
which proves the inequality.

1.13

Convergence Concepts

There are four main types of convergence concepts of random sequence:


convergence almost surely (a.s.), convergence in probability, convergence in
mean, and convergence in distribution.
Table 1.1: Relations among Convergence Concepts
Convergence Almost Surely
Convergence in Mean

&
%

Convergence
in Probability

Convergence
in Distribution

Definition 1.24 Suppose that , 1 , 2 , are random variables defined on


the probability space (, A, Pr). The sequence {i } is said to be convergent
a.s. to if and only if there exists a set A A with Pr{A} = 1 such that
lim |i () ()| = 0

(1.54)

for every A. In that case we write i , a.s.


Definition 1.25 Suppose that , 1 , 2 , are random variables defined on
the probability space (, A, Pr). We say that the sequence {i } converges in
probability to if
lim Pr {|i | } = 0
(1.55)
i

for every > 0.

36

Chapter 1 - Probability Theory

Definition 1.26 Suppose that , 1 , 2 , are random variables with finite


expected values on the probability space (, A, Pr). We say that the sequence
{i } converges in mean to if
lim E[|i |] = 0.

(1.56)

In addition, the sequence {i } is said to converge in mean square to if


lim E[|i |2 ] = 0.

(1.57)

Definition 1.27 Suppose that , 1 , 2 , are the probability distributions


of random variables , 1 , 2 , , respectively. We say that {i } converges in
distribution to if i at any continuity point of .
Convergence Almost Surely vs. Convergence in Probability
Theorem 1.41 Suppose that , 1 , 2 , are random variables defined on
the probability space (, A, Pr). Then {i } converges a.s. to if and only if,
for every > 0, we have
(
)
[
lim Pr
{|i | } = 0.
(1.58)
n

i=n

Proof: For every i 1 and > 0, we define


n
o

X = lim i () 6= () ,
i



Xi () = |i () ()| .
It is clear that
X=

[
\

>0

n=1 i=n

!
Xi () .

Note that i , a.s. if and only if Pr{X} = 0. That is, i , a.s. if and
only if
)
(
\ [
Xi () = 0
Pr
n=1 i=n

for every > 0. Since

[
i=n

Xi ()

Xi (),

n=1 i=n

it follows from the probability continuity theorem that


(
)
(
)
[
\ [
lim Pr
Xi () = Pr
Xi () = 0.
n

The theorem is proved.

i=n

n=1 i=n

37

Section 1.13 - Convergence Concepts

Theorem 1.42 Suppose that , 1 , 2 , are random variables defined on


the probability space (, A, Pr). If {i } converges a.s. to , then {i } converges
in probability to .
Proof: It follows from the convergence a.s. and Theorem 1.41 that
(
)
[
lim Pr
{|i | } = 0
n

i=n

for each > 0. For every n 1, since


{|n | }

{|i | },

i=n

we have Pr{|n | } 0 as n . Hence the theorem holds.


Example 1.27: Convergence in probability does not imply convergence a.s.
For example, take (, A, Pr) to be the interval [0, 1] with Borel algebra and
Lebesgue measure. For any positive integer i, there is an integer j such that
i = 2j + k, where k is an integer between 0 and 2j 1. We define a random
variable on by
(
1, if k/2j (k + 1)/2j
i () =
0, otherwise
for i = 1, 2, and = 0. For any small number > 0, we have
Pr {|i | } =

1
0
2j

as i . That is, the sequence {i } converges in probability to . However,


for any [0, 1], there is an infinite number of intervals of the form [k/2j , (k+
1)/2j ] containing . Thus i () 6 0 as i . In other words, the sequence
{i } does not converge a.s. to .
Convergence in Probability vs. Convergence in Mean
Theorem 1.43 Suppose that , 1 , 2 , are random variables defined on
the probability space (, A, Pr). If the sequence {i } converges in mean to ,
then {i } converges in probability to .
Proof: It follows from the Markov inequality that, for any given number
> 0,
E[|i |]
Pr {|i | }
0

as i . Thus {i } converges in probability to .

38

Chapter 1 - Probability Theory

Example 1.28: Convergence in probability does not imply convergence in


mean. For example, take (, A, Pr) to be {1 , 2 , } with Pr{j } = 1/2j
for j = 1, 2, The random variables are defined by
 i
2 , if j = i
i {j } =
0, otherwise
for i = 1, 2, and = 0. For any small number > 0, we have
Pr {|i | } =

1
0.
2i

That is, the sequence {i } converges in probability to . However, we have


E [|i |] = 2i

1
= 1.
2i

That is, the sequence {i } does not converge in mean to .


Convergence Almost Surely vs. Convergence in Mean
Example 1.29: Convergence a.s. does not imply convergence in mean. For
example, take (, A, Pr) to be {1 , 2 , } with Pr{j } = 1/2j for j =
1, 2, The random variables are defined by
 i
2 , if j = i
i {j } =
0, otherwise
for i = 1, 2, and = 0. Then {i } converges a.s. to . However, the
sequence {i } does not converge in mean to .
Example 1.30: Convergence in mean does not imply convergence a.s. For
example, take (, A, Pr) to be the interval [0, 1] with Borel algebra and
Lebesgue measure. For any positive integer i, there is an integer j such
that i = 2j + k, where k is an integer between 0 and 2j 1. We define a
random variable on by
(
1, if k/2j (k + 1)/2j
i () =
0, otherwise
for i = 1, 2, and = 0. Then
E [|i |] =

1
0.
2j

That is, the sequence {i } converges in mean to . However, {i } does not


converge a.s. to .

Section 1.14 - Conditional Probability

39

Convergence in Probability vs. Convergence in Distribution


Theorem 1.44 Suppose that , 1 , 2 , are random variables defined on
the probability space (, A, Pr). If the sequence {i } converges in probability
to , then {i } converges in distribution to .
Proof: Let x be any given continuity point of the distribution . On the
one hand, for any y > x, we have
{i x} = {i x, y} {i x, > y} { y} {|i | y x}
which implies that
i (x) (y) + Pr{|i | y x}.
Since {i } converges in probability to , we have Pr{|i | y x} 0.
Thus we obtain lim supi i (x) (y) for any y > x. Letting y x, we
get
lim sup i (x) (x).
(1.59)
i

On the other hand, for any z < x, we have


{ z} = { z, i x} { z, i > x} {i x} {|i | x z}
which implies that
(z) i (x) + Pr{|i | x z}.
Since Pr{|i | x z} 0, we obtain (z) lim inf i i (x) for any
z < x. Letting z x, we get
(x) lim inf i (x).
i

(1.60)

It follows from (1.59) and (1.60) that i (x) (x). The theorem is proved.
Example 1.31: Convergence in distribution does not imply convergence
in probability. For example, take (, A, Pr) to be {1 , 2 } with Pr{1 } =
Pr{2 } = 0.5, and

1, if = 1
() =
1, if = 2 .
We also define i = for all i. Then i and are identically distributed.
Thus {i } converges in distribution to . But, for any small number > 0,
we have Pr{|i | > } = Pr{} = 1. That is, the sequence {i } does not
converge in probability to .

40

1.14

Chapter 1 - Probability Theory

Conditional Probability

We consider the probability of an event A after it has been learned that


some other event B has occurred. This new probability of A is called the
conditional probability of A given B.
Definition 1.28 Let (, A, Pr) be a probability space, and A, B
the conditional probability of A given B is defined by
Pr{A|B} =

Pr{A B}
Pr{B}

A.

Then

(1.61)

provided that Pr{B} > 0.


Theorem 1.45 Let (, A, Pr) be a probability space, and B an event with
Pr{B} > 0. Then Pr{|B} defined by (1.61) is a probability measure, and
(, A, Pr{|B}) is a probability space.
Proof: It is sufficient to prove that Pr{|B} satisfies the normality, nonnegativity and countable additivity axioms. At first, we have
Pr{|B} =

Pr{ B}
Pr{B}
=
= 1.
Pr{B}
Pr{B}

Secondly, for any A A, the set function Pr{A|B} is nonnegative. Finally,


for any countable sequence {Ai } of mutually disjoint events, we have
 


P
S
(
) Pr
Pr{Ai B} X
Ai B

[
i=1
= i=1
=
Pr{Ai |B}.
Pr
Ai |B =
Pr{B}
Pr{B}
i=1
i=1
Thus Pr{|B} is a probability measure. Furthermore, (, A, Pr{|B}) is a
probability space.
Theorem 1.46 (Bayes Formula) Let the events A1 , A2 , , An form a partition of the space such that Pr{Ai } > 0 for i = 1, 2, , n, and let B be
an event with Pr{B} > 0. Then we have
Pr{Ak } Pr{B|Ak }
Pr{Ak |B} = P
n
Pr{Ai } Pr{B|Ai }

(1.62)

i=1

for k = 1, 2, , n.
Proof: Since A1 , A2 , , An form a partition of the space , we have
Pr{B} =

n
X
i=1

Pr{Ai B} =

n
X
i=1

Pr{Ai } Pr{B|Ai }

41

Section 1.14 - Conditional Probability

which is also called the formula for total probability. Thus, for any k, we have
Pr{Ak |B} =

Pr{Ak B}
Pr{Ak } Pr{B|Ak }
= P
.
n
Pr{B}
Pr{Ai } Pr{B|Ai }
i=1

The theorem is proved.


Remark 1.2: Especially, let A and B be two events with Pr{A} > 0 and
Pr{B} > 0. Then A and Ac form a partition of the space , and the Bayes
formula is
Pr{A} Pr{B|A}
Pr{A|B} =
.
(1.63)
Pr{B}
Remark 1.3: In statistical applications, the events A1 , A2 , , An are often
called hypotheses. Furthermore, for each i, the Pr{Ai } is called the prior
probability of Ai , and Pr{Ai |B} is called the posterior probability of Ai after
the occurrence of event B.
Example 1.32: Let be an exponentially distributed random variable with
expected value . Then for any real numbers a > 0 and x > 0, the conditional
probability of a + x given a is
Pr{ a + x| a} = exp(x/) = Pr{ x}
which means that the conditional probability is identical to the original probability. This is the so-called memoryless property of exponential distribution.
In other words, it is as good as new if it is functioning on inspection.
Definition 1.29 The conditional probability distribution : < [0, 1] of a
random variable given B is defined by
(x|B) = Pr { x|B}

(1.64)

provided that Pr{B} > 0.


Example 1.33: Let and be random variables. Then the conditional
probability distribution of given = y is
(x| = y) = Pr { x| = y} =

Pr{ x, = y}
Pr{ = y}

provided that Pr{ = y} > 0.


Definition 1.30 The conditional probability density function of a random
variable given B is a nonnegative function such that
Z x
(x|B) =
(y|B)dy, x <
(1.65)

where (x|B) is the conditional probability distribution of given B.

42

Chapter 1 - Probability Theory

Example 1.34: Let (, ) be a random vector with joint probability density


function . Then the marginal probability density functions of and are
Z +
Z +
(x, y)dx,
(x, y)dy, g(y) =
f (x) =

respectively. Furthermore, we have


Z
Z x Z y
(r, t)drdt =
Pr{ x, y} =

Z


(r, t)
dr g(t)dt
g(t)

which implies that the conditional probability distribution of given = y


is
Z x
(r, y)
dr, a.s.
(1.66)
(x| = y) =
g(y)
and the conditional probability density function of given = y is
(x| = y) =

(x, y)
=Z
g(y)

(x, y)
+

a.s.

(1.67)

(x, y)dx

Note that (1.66) and (1.67) are defined only for g(y) 6= 0. In fact, the set
{y|g(y) = 0} has probability 0. Especially, if and are independent random
variables, then (x, y) = f (x)g(y) and (x| = y) = f (x).
Definition 1.31 Let be a random variable. Then the conditional expected
value of given B is defined by
Z +
Z 0
E[|B] =
Pr{ r|B}dr
Pr{ r|B}dr
(1.68)

provided that at least one of the two integrals is finite.


Following conditional probability and conditional expected value, we also
have conditional variance, conditional moments, conditional critical values,
conditional entropy as well as conditional convergence.
Definition 1.32 Let be a nonnegative random variable representing lifetime. Then the hazard rate (or failure rate) is

Pr{ x + > x}
h(x) = lim
.
(1.69)
0

The hazard rate tells us the probability of a failure just after time x when
it is functioning at time x. If has probability distribution and probability
density function , then the hazard rate
h(x) =

(x)
.
1 (x)

Example 1.35: Let be an exponentially distributed random variable with


expected value . Then its hazard rate h(x) 1/.

43

Section 1.15 - Stochastic Process

1.15

Stochastic Process

Definition 1.33 Let T be an index set and (, A, Pr) be a probability space.


A stochastic process is a measurable function from T (, A, Pr) to the set
of real numbers, i.e., for each t T and any Borel set B of real numbers, the
set

{ X(t, ) B}
(1.70)
is an event.
That is, a stochastic process Xt () is a function of two variables such
that the function Xt () is a random variable for each t . For each fixed ,
the function Xt ( ) is said to be a sample path of the stochastic process. A
stochastic process Xt () is said to be sample-continuous if the sample path
is continuous for almost all .
Definition 1.34 A stochastic process Xt is said to have independent increments if
Xt1 Xt0 , Xt2 Xt1 , , Xtk Xtk1
(1.71)
are independent random variables for any times t0 < t1 < < tk . A
stochastic process Xt is said to have stationary increments if, for any given
t > 0, the increments Xs+t Xs are identically distributed random variables
for all s > 0.
Renewal Process
Definition 1.35 Let 1 , 2 , be iid positive random variables. Define S0 =
0 and Sn = 1 + 2 + + n for n 1. Then the stochastic process


(1.72)
Nt = max n Sn t
n0

is called a renewal process.


If 1 , 2 , denote the interarrival times of successive events. Then Sn
can be regarded as the waiting time until the occurrence of the nth event,
and Nt is the number of renewals in (0, t]. Each sample path of Nt is a
right-continuous and increasing step function taking only nonnegative integer
values. Furthermore, the size of each jump of Nt is always 1. In other words,
Nt has at most one renewal at each time. In particular, Nt does not jump at
time 0. Since Nt n if and only if Sn t, we have
Pr{Nt n} = Pr{Sn t}.

(1.73)

Theorem 1.47 Let Nt be a renewal process. Then we have


E[Nt ] =

X
n=1

Pr{Sn t}.

(1.74)

44

Chapter 1 - Probability Theory

N. t
4
3
2
1
0

...
..........
...
..
...........
..............................
..
...
...
..
..
...
..........
.........................................................
..
...
..
..
....
..
..
..
..
..
..........
.......................................
..
...
..
..
..
...
..
..
...
..
..
..
..
...
..
.........................................................
.........
..
..
..
..
..
....
..
..
..
..
...
..
..
..
.
..
.
......................................................................................................................................................................................................................................
...
...
...
...
...
....
....
....
....
...
1 ...
2
3 ...
4
...
...
...
..
..
..
..
..

S0

S1

S2

S3

S4

Figure 1.2: A Sample Path of Renewal Process


Proof: Since Nt takes only nonnegative integer values, we have
Z
Z n
X
E[Nt ] =
Pr{Nt r}dr =
Pr{Nt r}dr
0

n=1

Pr{Nt n} =

n=1

n1

Pr{Sn t}.

n=1

The theorem is proved.


Example 1.36: A renewal process Nt is called a Poisson process with intensity if 1 , 2 , are iid exponentially distributed random variables with
expected value 1/. It has been proved that
Pr{Nt = n} = exp(t)

(t)n
,
n!

n = 0, 1, 2,

E[Nt ] = t.

(1.75)
(1.76)

Theorem 1.48 (Renewal Theorem) Let Nt be a renewal process with interarrival times 1 , 2 , Then we have
lim

E[Nt ]
1
=
.
t
E[1 ]

(1.77)

Brownian Motion
In 1828 the botanist Brown observed irregular movement of pollen suspended
in liquid. This movement is now known as Brownian motion. Bachelier used
Brownian motion as a model of stock prices in 1900. An equation for Brownian motion was obtained by Einstein in 1905, and a rigorous mathematical
definition of Brownian motion was given by Wiener in 1931. For this reason,
Brownian motion is also called Wiener process.

45

Section 1.15 - Stochastic Process

Definition 1.36 A stochastic process Bt is said to be a Brownian motion


(also called Wiener process) if
(i) B0 = 0 and Bt is sample-continuous,
(ii) Bt has stationary and independent increments,
(iii) every increment Bs+t Bs is a normally distributed random variable
with expected value et and variance 2 t.
The parameters e and are called the drift and diffusion coefficients,
respectively. The Brownian motion is said to be standard if e = 0 and = 1.
Any Brownian motion may be represented by et+Bt where Bt is a standard
Brownian motion.
B. t

....
.........
....
..
...
... ..... .....
...
... ... ........ ... ........
...
. ... ...... .... ....... .....
....
. ...
...
.
......
...
...
.......
..
...
...
..
.
...
...........
.
.
...
..
......
...
... ...
.
.
...
.
. .. ........ ...
.
.
.
.
...
.
.
.
.
... ....
.. ......... ... .... ......
...
.
........
.
.
....... ......... .....
.
...
..............
.
.. ... ......
..
...
.
... .... ...
... ...
...
.. .... ... .... ......... .................
... ..
... .. .. .. ... .. ..
...
........
......
...... ... ... ... ...
.... ... ... ..
...
.....
...
.. ............. .... ...
. ... ...
.
...
.
...
. . ..
...
.... ... ...
.
...
.
... ... ....
... .........
... ...
......
..
...............................................................................................................................................................................................................................................................

Figure 1.3: A Sample Path of Standard Brownian Motion

Theorem 1.49 (Existence Theorem) There is a Brownian motion.


Proof: Without loss of generality, we only prove that there is a standard
Brownian motion Bt on the range of t [0, 1]. First, let



(r) r represents rational numbers in [0, 1]
be a countable sequence of independently and normally distributed random
variables with expected value zero and variance one. For each integer n, we
define a stochastic process

 
k
X

i
k

, if t =
(k = 0, 1, , n)
n
n
n i=1
Xn (t) =

linear,
otherwise.
Since the limit
lim Xn (t)

exists almost surely, we may verify that the limit meets the conditions of
standard Brownian motion. Hence there is a standard Brownian motion.

46

Chapter 1 - Probability Theory

Remark 1.4: Suppose that Bt is a standard Brownian motion. It has been


proved that
X1 (t) = Bt ,
(1.78)
X2 (t) = aBt/a2 ,

(1.79)

X3 (t) = Bt+s Bs

(1.80)

are each a version of standard Brownian motion.


Almost all Brownian paths have an infinite variation and are differentiable
nowhere. Furthermore, the squared variation of Brownian motion on [0, t] is
equal to t both in mean square and almost surely.
On the one hand, almost all Brownian paths are infinitely long within any
time interval. On the other hand, it is impossible that a pollen moves at a
speed of infinity. Is it a dilemma?
Example 1.37: Let Bt be a Brownian motion with drift 0. Then for any
level x > 0 and any time t > 0, we have


Pr max Bs x = 2 Pr{Bt x}
(1.81)
0st

which is the so-called reflection principle. For any level x < 0 and any time
t > 0, we have


Pr min Bs x = 2 Pr{Bt x}.
(1.82)
0st

Example 1.38: Let Bt be a Brownian motion with drift e > 0 and diffusion
coefficient . Then the first passage time that the Brownian motion reaches
the barrier x > 0 has the probability density function


(x et)2
x
exp
, t>0
(1.83)
(t) =
2 2 t
2t3
whose expected value and variance are
E[ ] =

x
,
e

V [ ] =

x 2
.
e3

(1.84)

However, if the drift e = 0, then the expected value E[ ] is infinite.


Definition 1.37 Let Bt be a standard Brownian motion. Then et + Bt is
a Brownian motion, and the stochastic process
Gt = exp(et + Bt )
is called a geometric Brownian motion.

(1.85)

47

Section 1.16 - Stochastic Calculus

Geometric Brownian motion Gt is an important model for stock prices.


For each t > 0, the Gt has a lognormal distribution whose probability density
function is


(ln z et)2
1
exp
(z) =
, z0
(1.86)
2 2 t
2t
and has expected value and variance as follows,

E[Gt ] = exp et + 2 t/2 ,
V [Gt ] = exp(2et + 2 2 t) exp(2et + 2 t).
In addition, the first passage time that a geometric Brownian motion Gt
reaches the barrier x > 1 is just the time that the Brownian motion with
drift e and diffusion reaches ln x.

1.16

Stochastic Calculus

Let Bt be a standard Brownian motion, and dt an infinitesimal time interval.


Then
dBt = Bt+dt Bt
is a stochastic process such that, for each t, the dBt is a normally distributed
random variable with
E[dBt ] = 0,

V [dBt ] = dt,

E[dBt2 ] = dt,

V [dBt2 ] = 2dt2 .

Definition 1.38 Let Xt be a stochastic process and let Bt be a standard


Brownian motion. For any partition of closed interval [a, b] with a = t1 <
t2 < < tk+1 = b, the mesh is written as
= max |ti+1 ti |.
1ik

Then the Ito integral of Xt with respect to Bt is


Z

Xt dBt = lim

k
X

Xti (Bti+1 Bti )

(1.87)

i=1

provided that the limit exists in mean square and is a random variable.
Example 1.39: Let Bt be a standard Brownian motion. Then for any
partition 0 = t1 < t2 < < tk+1 = s, we have
Z

dBt = lim
0

k
X
i=1

(Bti+1 Bti ) Bs B0 = Bs .

48

Chapter 1 - Probability Theory

Example 1.40: Let Bt be a standard Brownian motion. Then for any


partition 0 = t1 < t2 < < tk+1 = s, we have
sBs =

k
X

ti+1 Bti+1 ti Bti

i=1

k
X

ti (Bti+1 Bti ) +

i=1
Z s

k
X

Bti+1 (ti+1 ti )

i=1

Bt dt

tdBt +
0

as 0. It follows that
Z

tdBt = sBs
0

Bt dt.
0

Example 1.41: Let Bt be a standard Brownian motion. Then for any


partition 0 = t1 < t2 < < tk+1 = s, we have
Bs2 =

k 
X

Bt2i+1 Bt2i

i=1

k
X

Bti+1 Bti

2

+2

i=1

k
X

Bti Bti+1 Bti

i=1
s

Z
s+2

Bt dBt
0

as 0. That is,
Z

Bt dBt =
0

1 2 1
B s.
2 s 2

Theorem 1.50 (Ito Formula) Let Bt be a standard Brownian motion, and


let h(t, b) be a twice continuously differentiable function. Define Xt = h(t, Bt ).
Then we have the following chain rule
dXt =

h
h
1 2h
(t, Bt )dt +
(t, Bt )dBt +
(t, Bt )dt.
t
b
2 b2

(1.88)

Proof: Since the function h is twice continuously differentiable, by using


Taylor series expansion, the infinitesimal increment of Xt has a second-order
approximation
Xt =

h
h
1 2h
(t, Bt )t +
(t, Bt )Bt +
(t, Bt )(Bt )2
t
b
2 b2
+

1 2h
2h
(t, Bt )(t)2 +
(t, Bt )tBt .
2
2 t
tb

49

Section 1.16 - Stochastic Calculus

Since we can ignore the terms (t)2 and tBt and replace (Bt )2 with
t, the Ito formula is obtained because it makes
Z
Z s
Z s
h
h
1 s 2h
(t, Bt )dt
Xs = X0 +
(t, Bt )dt +
(t, Bt )dBt +
2 0 b2
0 t
0 b
for any s 0.
Remark 1.5: The infinitesimal increment dBt in (1.88) may be replaced
with the Ito process
dYt = ut dt + vt dBt
(1.89)
where ut is an absolutely integrable stochastic process, and vt is a square
integrable stochastic process, thus producing
dh(t, Yt ) =

h
1 2h
h
(t, Yt )dt +
(t, Yt )dYt +
(t, Yt )vt2 dt.
t
b
2 b2

(1.90)

Remark 1.6: Assume that B1t , B2t , , Bmt are standard Brownian motions, and h(t, b1 , b2 , , bm ) is a twice continuously differentiable function.
Define
Xt = h(t, B1t , B2t , , Bmt ).
Then we have the following multi-dimensional Ito formula
dXt =

m
m
X
h
h
1 X 2h
dt +
dt.
dBit +
t
bi
2 i=1 b2i
i=1

(1.91)

Example 1.42: Ito formula is the chain rule for differentiation. Applying
Ito formula, we obtain
d(tBt ) = Bt dt + tdBt .
Hence we have
s

Z
sBs =

d(tBt ) =
0

That is,
Z

Bt dt +
0

tdBt .
0

Z
tdBt = sBs

Bt dt.

Example 1.43: Let Bt be a standard Brownian motion. By using Ito


formula
d(Bt2 ) = 2Bt dBt + dt,
we obtain
Bs2 =

Z
0

d(Bt2 ) = 2

Z
Bt dBt +

Z
dt = 2

Bt dBt + s.
0

50

Chapter 1 - Probability Theory

It follows that

1 2 1
B s.
2 s 2

Bt dBt =
0

Example 1.44: Let Bt be a standard Brownian motion. By using Ito


formula
d(Bt3 ) = 3Bt2 dBt + 3Bt dt,
we have
Bs3

d(Bt3 )

Bt2 dBt

=3

That is

Z
0

Bt2 dBt

Z
+3

Bt dt.
0

1
= Bs3
3

Bt dt.
0

Theorem 1.51 (Integration by Parts) Suppose that Bt is a standard Brownian motion and F (t) is an absolutely continuous function. Then
Z s
Z s
F (t)dBt = F (s)Bs
Bt dF (t).
(1.92)
0

Proof: By defining h(t, Bt ) = F (t)Bt and using the Ito formula, we get
d(F (t)Bt ) = Bt dF (t) + F (t)dBt .
Thus

Z
F (s)Bs =

Z
d(F (t)Bt ) =

Z
Bt dF (t) +

F (t)dBt
0

which is just (1.92).

1.17

Stochastic Differential Equation

This section introduces a type of stochastic differential equations driven by


Brownian motion.
Definition 1.39 Suppose Bt is a standard Brownian motion, and f and g
are some given functions. Then
dXt = f (t, Xt )dt + g(t, Xt )dBt

(1.93)

is called a stochastic differential equation. A solution is a stochastic process


Xt that satisfies (1.93) identically in t.
Remark 1.7: Note that there is no precise definition for the terms dXt , dt
and dBt in the stochastic differential equation (1.93). The mathematically
meaningful form is the stochastic integral equation
Z s
Z s
Xs = X0 +
f (t, Xt )dt +
g(t, Xt )dBt .
(1.94)
0

51

Section 1.17 - Stochastic Differential Equation

However, the differential form is convenient for us. This is the main reason
why we accept the differential form.
Example 1.45: Let Bt be a standard Brownian motion. Then the stochastic
differential equation
dXt = adt + bdBt
has a solution
Xt = at + bBt
which is just a Brownian motion with drift coefficient a and diffusion coefficient b.
Example 1.46: Let Bt be a standard Brownian motion. Then the stochastic
differential equation
dXt = aXt dt + bXt dBt
has a solution


Xt = exp

b2
2


t + bBt

which is just a geometric Brownian motion.


Example 1.47: Let Bt be a standard Brownian motion. Then the stochastic
differential equations

dXt = Xt dt Yt dBt
2

1
dY = Y dt + X dB
t
t
t
t
2
have a solution
(Xt , Yt ) = (cos Bt , sin Bt )
which is called a Brownian motion on unit circle since Xt2 + Yt2 1.

Chapter 2

Credibility Theory
The concept of fuzzy set was initiated by Zadeh [245] via membership function
in 1965. In order to measure a fuzzy event, Zadeh [248] proposed the concept
of possibility measure. Although possibility measure has been widely used,
it has no self-duality property. However, a self-dual measure is absolutely
needed in both theory and practice. In order to define a self-dual measure,
Liu and Liu [126] presented the concept of credibility measure. In addition,
a sufficient and necessary condition for credibility measure was given by Li
and Liu [100].
Credibility theory, founded by Liu [129] in 2004 and refined by Liu [132] in
2007, is a branch of mathematics for studying the behavior of fuzzy phenomena. The emphasis in this chapter is mainly on credibility measure, credibility space, fuzzy variable, membership function, credibility distribution, independence, identical distribution, expected value, variance, moments, critical
values, entropy, distance, convergence almost surely, convergence in credibility, convergence in mean, convergence in distribution, conditional credibility,
fuzzy process, fuzzy calculus, and fuzzy differential equation.

2.1

Credibility Space

Let be a nonempty set, and P the power set of (i.e., the larggest algebra over ). Each element in P is called an event. In order to present an
axiomatic definition of credibility, it is necessary to assign to each event A a
number Cr{A} which indicates the credibility that A will occur. In order to
ensure that the number Cr{A} has certain mathematical properties which we
intuitively expect a credibility to have, we accept the following four axioms:
Axiom 1. (Normality) Cr{} = 1.
Axiom 2. (Monotonicity) Cr{A} Cr{B} whenever A B.
Axiom 3. (Self-Duality) Cr{A} + Cr{Ac } = 1 for any event A.

54

Chapter 2 - Credibility Theory

Axiom 4. (Maximality) Cr {i Ai } = supi Cr{Ai } for any events {Ai } with


supi Cr{Ai } < 0.5.
Definition 2.1 (Liu and Liu [126]) The set function Cr is called a credibility measure if it satisfies the normality, monotonicity, self-duality, and
maximality axioms.
Example 2.1: Let = {1 , 2 }. For this case, there are only four events:
, {1 }, {2 }, . Define Cr{} = 0, Cr{1 } = 0.7, Cr{2 } = 0.3, and Cr{} =
1. Then the set function Cr is a credibility measure because it satisfies the
four axioms.
Example 2.2: Let be a nonempty set. Define Cr{} = 0, Cr{} = 1 and
Cr{A} = 1/2 for any subset A (excluding and ). Then the set function
Cr is a credibility measure.
Theorem 2.1 Let be a nonempty set, P the power set of , and Cr the
credibility measure. Then Cr{} = 0 and 0 Cr{A} 1 for any A P.
Proof: It follows from Axioms 1 and 3 that Cr{} = 1 Cr{} = 1 1 = 0.
Since A , we have 0 Cr{A} 1 by using Axiom 2.
Theorem 2.2 Let be a nonempty set, P the power set of , and Cr the
credibility measure. Then for any A, B P, we have
Cr{A B} = Cr{A} Cr{B} if Cr{A B} 0.5,

(2.1)

Cr{A B} = Cr{A} Cr{B} if Cr{A B} 0.5.

(2.2)

The above equations hold for not only finite number of events but also infinite
number of events.
Proof: If Cr{A B} < 0.5, then Cr{A} Cr{B} < 0.5 by using Axiom 2.
Thus the equation (2.1) follows immediately from Axiom 4. If Cr{A B} =
0.5 and (2.1) does not hold, then we have Cr{A} Cr{B} < 0.5. It follows
from Axiom 4 that
Cr{A B} = Cr{A} Cr{B} < 0.5.
A contradiction proves (2.1). Next we prove (2.2). Since Cr{A B} 0.5,
we have Cr{Ac B c } 0.5 by the self-duality. Thus
Cr{A B} = 1 Cr{Ac B c } = 1 Cr{Ac } Cr{B c } = Cr{A} Cr{B}.
The theorem is proved.

55

Section 2.1 - Credibility Space

Credibility Subadditivity Theorem


Theorem 2.3 (Liu [129], Credibility Subadditivity Theorem) The credibility
measure is subadditive. That is,
Cr{A B} Cr{A} + Cr{B}

(2.3)

for any events A and B. In fact, credibility measure is not only finitely
subadditive but also countably subadditive.
Proof: The argument breaks down into three cases.
Case 1: Cr{A} < 0.5 and Cr{B} < 0.5. It follows from Axiom 4 that
Cr{A B} = Cr{A} Cr{B} Cr{A} + Cr{B}.
Case 2: Cr{A} 0.5. For this case, by using Axioms 2 and 3, we have
Cr{Ac } 0.5 and Cr{A B} Cr{A} 0.5. Then
Cr{Ac } = Cr{Ac B} Cr{Ac B c }
Cr{Ac B} + Cr{Ac B c }
Cr{B} + Cr{Ac B c }.
Applying this inequality, we obtain
Cr{A} + Cr{B} = 1 Cr{Ac } + Cr{B}
1 Cr{B} Cr{Ac B c } + Cr{B}
= 1 Cr{Ac B c }
= Cr{A B}.
Case 3: Cr{B} 0.5. This case may be proved by a similar process of
Case 2. The theorem is proved.
Remark 2.1: For any events A and B, it follows from the credibility subadditivity theorem that the credibility measure is null-additive, i.e., Cr{A B} =
Cr{A} + Cr{B} if either Cr{A} = 0 or Cr{B} = 0.
Theorem 2.4 Let {Bi } be a decreasing sequence of events with Cr{Bi } 0
as i . Then for any event A, we have
lim Cr{A Bi } = lim Cr{A\Bi } = Cr{A}.

(2.4)

Proof: It follows from the monotonicity axiom and credibility subadditivity


theorem that
Cr{A} Cr{A Bi } Cr{A} + Cr{Bi }

56

Chapter 2 - Credibility Theory

for each i. Thus we get Cr{A Bi } Cr{A} by using Cr{Bi } 0. Since


(A\Bi ) A ((A\Bi ) Bi ), we have
Cr{A\Bi } Cr{A} Cr{A\Bi } + Cr{Bi }.
Hence Cr{A\Bi } Cr{A} by using Cr{Bi } 0.
Theorem 2.5 A credibility measure on is additive if and only if there are
at most two elements in taking nonzero credibility values.
Proof: Suppose that the credibility measure is additive. If there are more
than two elements taking nonzero credibility values, then we may choose three
elements 1 , 2 , 3 such that Cr{1 } Cr{2 } Cr{3 } > 0. If Cr{1 } 0.5,
it follows from Axioms 2 and 3 that
Cr{2 , 3 } Cr{ \ {1 }} = 1 Cr{1 } 0.5.
By using Axiom 4, we obtain
Cr{2 , 3 } = Cr{2 } Cr{3 } < Cr{2 } + Cr{3 }.
This is in contradiction with the additivity assumption. If Cr{1 } < 0.5,
then Cr{3 } Cr{2 } < 0.5. It follows from Axiom 4 that
Cr{2 , 3 } 0.5 = Cr{2 } Cr{3 } < 0.5
which implies that
Cr{2 , 3 } = Cr{2 } Cr{3 } < Cr{2 } + Cr{3 }.
This is also in contradiction with the additivity assumption. Hence there are
at most two elements taking nonzero credibility values.
Conversely, suppose that there are at most two elements, say 1 and 2 ,
taking nonzero credibility values. Let A and B be two disjoint events. The
argument breaks down into two cases.
Case 1: If either Cr{A} = 0 or Cr{B} = 0 is true, then we have Cr{A
B} = Cr{A} + Cr{B} by using the credibility subadditivity theorem.
Case 2: Cr{A} > 0 or Cr{B} > 0. For this case, without loss of generality,
we suppose that 1 A and 2 B. Note that Cr{(A B)c } = 0. It follows
from Axiom 3 and the credibility subadditivity theorem that
Cr{A B} = Cr{A B (A B)c } = Cr{} = 1,
Cr{A} + Cr{B} = Cr{A (A B)c } + Cr{B} = 1.
Hence Cr{A B} = Cr{A} + Cr{B}. The additivity is proved.
Remark 2.2: Theorem 2.5 states that a credibility measure is identical with
probability measure if there are effectively two elements in the universal set.

57

Section 2.1 - Credibility Space

Credibility Semicontinuity Law


Generally speaking, the credibility measure is neither lower semicontinuous
nor upper semicontinuous. However, we have the following credibility semicontinuity law.
Theorem 2.6 (Liu [129], Credibility Semicontinuity Law) For any events
A1 , A2 , , we have
n
o
lim Cr{Ai } = Cr lim Ai
(2.5)
i

if one of the following conditions is satisfied:


(a) Cr {A} 0.5 and Ai A;
(b) lim Cr{Ai } < 0.5 and Ai A;
i

(c) Cr {A} 0.5 and Ai A;

(d) lim Cr{Ai } > 0.5 and Ai A.


i

Proof: (a) Since Cr{A} 0.5, we have Cr{Ai } 0.5 for each i. It follows
from Axiom 4 that
Cr{A} = Cr {i Ai } = sup Cr{Ai } = lim Cr{Ai }.
i

(b) Since limi Cr{Ai } < 0.5, we have supi Cr{Ai } < 0.5. It follows
from Axiom 4 that
Cr{A} = Cr {i Ai } = sup Cr{Ai } = lim Cr{Ai }.
i

(c) Since Cr{A} 0.5 and Ai A, it follows from the self-duality of


credibility measure that Cr{Ac } 0.5 and Aci Ac . Thus Cr{Ai } = 1
Cr{Aci } 1 Cr{Ac } = Cr{A} as i .
(d) Since limi Cr{Ai } > 0.5 and Ai A, it follows from the self-duality
of credibility measure that
lim Cr{Aci } = lim (1 Cr{Ai }) < 0.5

and Aci Ac . Thus Cr{Ai } = 1 Cr{Aci } 1 Cr{Ac } = Cr{A} as i .


The theorem is proved.
Credibility Asymptotic Theorem
Theorem 2.7 (Credibility Asymptotic Theorem) For any events A1 , A2 , ,
we have
lim Cr{Ai } 0.5, if Ai ,
(2.6)
i

lim Cr{Ai } 0.5,

if Ai .

(2.7)

58

Chapter 2 - Credibility Theory

Proof: Assume Ai . If limi Cr{Ai } < 0.5, it follows from the credibility semicontinuity law that
Cr{} = lim Cr{Ai } < 0.5
i

which is in contradiction with Cr{} = 1. The first inequality is proved.


The second one may be verified similarly.
Credibility Extension Theorem
Suppose that the credibility of each singleton is given. Is the credibility
measure fully and uniquely determined? This subsection will answer the
question.
Theorem 2.8 Suppose that is a nonempty set. If Cr is a credibility measure, then we have
sup Cr{} 0.5,

Cr{ } + sup Cr{} = 1 if Cr{ } 0.5.

(2.8)

6=

We will call (2.8) the credibility extension condition.


Proof: If sup Cr{} < 0.5, then by using Axiom 4, we have
1 = Cr{} = sup Cr{} < 0.5.

This contradiction proves sup Cr{} 0.5. We suppose that is a point


with Cr{ } 0.5. It follows from Axioms 3 and 4 that Cr{ \ { }} 0.5,
and
Cr{ \ { }} = sup Cr{}.
6=

Hence the second formula of (2.8) is true by the self-duality of credibility


measure.
Theorem 2.9 (Li and Liu [100], Credibility Extension Theorem) Suppose
that is a nonempty set, and Cr{} is a nonnegative function on satisfying
the credibility extension condition (2.8). Then Cr{} has a unique extension
to a credibility measure as follows,

Cr{A} =

sup Cr{},
A

if sup Cr{} < 0.5


A

1 sup Cr{}, if sup Cr{} 0.5.


Ac

(2.9)

59

Section 2.1 - Credibility Space

Proof: We first prove that the set function Cr{A} defined by (2.9) is a
credibility measure.
Step 1: By the credibility extension condition sup Cr{} 0.5, we have

Cr{} = 1 sup Cr{} = 1 0 = 1.

Step 2: If A B, then B c Ac . The proof breaks down into two cases.


Case 1: sup Cr{} < 0.5. For this case, we have
A

Cr{A} = sup Cr{} sup Cr{} Cr{B}.


A

Case 2: sup Cr{} 0.5. For this case, we have sup Cr{} 0.5, and
A

Cr{A} = 1 sup Cr{} 1 sup Cr{} = Cr{B}.


Ac

B c

Step 3: In order to prove Cr{A} + Cr{Ac } = 1, the argument breaks


down into two cases.
Case 1: sup Cr{} < 0.5. For this case, we have sup Cr{} 0.5. Thus,
Ac

A
c

Cr{A} + Cr{A } = sup Cr{} + 1 sup Cr{} = 1.


A

Case 2: sup Cr{} 0.5. For this case, we have sup Cr{} 0.5, and
Ac

A
c

Cr{A} + Cr{A } = 1 sup Cr{} + sup Cr{} = 1.


Ac

Ac

Step 4: For any collection {Ai } with supi Cr{Ai } < 0.5, we have
Cr{i Ai } = sup Cr{} = sup sup Cr{} = sup Cr{Ai }.
i Ai

Ai

Thus Cr is a credibility measure because it satisfies the four axioms.


Finally, let us prove the uniqueness. Assume that Cr1 and Cr2 are two
credibility measures such that Cr1 {} = Cr2 {} for each . Let us prove
that Cr1 {A} = Cr2 {A} for any event A. The argument breaks down into
three cases.
Case 1: Cr1 {A} < 0.5. For this case, it follows from Axiom 4 that
Cr1 {A} = sup Cr1 {} = sup Cr2 {} = Cr2 {A}.
A

Case 2: Cr1 {A} > 0.5. For this case, we have Cr1 {Ac } < 0.5. It follows
from the first case that Cr1 {Ac } = Cr2 {Ac } which implies Cr1 {A} = Cr2 {A}.
Case 3: Cr1 {A} = 0.5. For this case, we have Cr1 {Ac } = 0.5, and
Cr2 {A} sup Cr2 {} = sup Cr1 {} = Cr1 {A} = 0.5,
A

Cr2 {A } sup Cr2 {} = sup Cr1 {} = Cr1 {Ac } = 0.5.


Ac

Ac

Hence Cr2 {A} = 0.5 = Cr1 {A}. The uniqueness is proved.

60

Chapter 2 - Credibility Theory

Credibility Space
Definition 2.2 Let be a nonempty set, P the power set of , and Cr a
credibility measure. Then the triplet (, P, Cr) is called a credibility space.
Example 2.3: The triplet (, P, Cr) is a credibility space if
= {1 , 2 , }, Cr{i } 1/2 for i = 1, 2,

(2.10)

Note that the credibility measure is produced by the credibility extension


theorem as follows,

0, if A =
1, if A =
Cr{A} =

1/2, otherwise.
Example 2.4: The triplet (, P, Cr) is a credibility space if
= {1 , 2 , }, Cr{i } = i/(2i + 1) for i = 1, 2,

(2.11)

By using the credibility extension theorem, we obtain the following credibility


measure,

sup
,
if A is finite

2i
+
1
i A
Cr{A} =
i

1 sup
, if A is infinite.
c
2i
+
1
i A
Example 2.5: The triplet (, P, Cr) is a credibility space if
= {1 , 2 , }, Cr{1 } = 1/2, Cr{i } = 1/i for i = 2, 3,

(2.12)

For this case, the credibility measure is

sup 1/i,
if A contains neither 1 nor 2

i A

1/2,
if A contains only one of 1 and 2
Cr{A} =

1 sup 1/i, if A contains both 1 and 2 .


i Ac

Example 2.6: The triplet (, P, Cr) is a credibility space if


= [0, 1],

Cr{} = /2 for .

For this case, the credibility measure is

sup ,
if sup < 1

2 A
A
Cr{A} =
1

1 sup , if sup = 1.
2 Ac
A

(2.13)

61

Section 2.1 - Credibility Space

Product Credibility Measure


Product credibility measure may be defined in multiple ways. This book
accepts the following axiom.
Axiom 5. (Product Credibility Axiom) Let k be nonempty sets on which
Crk are credibility measures, k = 1, 2, , n, respectively, and = 1 2
n . Then
Cr{(1 , 2 , , n )} = Cr1 {1 } Cr2 {2 } Crn {n }

(2.14)

for each (1 , 2 , , n ) .
Theorem 2.10 (Product Credibility Theorem) Let k be nonempty sets on
which Crk are the credibility measures, k = 1, 2, , n, respectively, and =
1 2 n . Then Cr = Cr1 Cr2 Crn defined by Axiom 5 has
a unique extension to a credibility measure on as follows,

sup
min Crk {k },

(1 ,2 ,n )A 1kn

if
sup
min Crk {k } < 0.5

(1 ,2 , ,n )A 1kn
(2.15)
Cr{A} =

min Crk {k },
1
sup

(1 ,2 , ,n )Ac 1kn

if
sup
min Crk {k } 0.5.

(1 ,2 , ,n )A 1kn

Proof: For each = (1 , 2 , , n ) , we have Cr{} = Cr1 {1 }


Cr2 {2 } Crn {n }. Let us prove that Cr{} satisfies the credibility
extension condition. Since sup Cr{k } 0.5 for each k, we have
k k

sup Cr{} =

sup

min Crk {k } 0.5.

(1 ,2 , ,n ) 1kn

Now we suppose that = (1 , 2 , , n ) is a point with Cr{ } 0.5.


Without loss of generality, let i be the index such that
Cr{ } = min Crk {k } = Cri {i }.
1kn

(2.16)

We also immediately have


Crk {k } 0.5,

k = 1, 2, , n;

(2.17)

Crk {k } + sup Crk {k } = 1,

k = 1, 2, , n;

(2.18)

sup Cri {i } sup Crk {k },

k = 1, 2, , n;

(2.19)

k 6=k

i 6=i

k 6=k

62

Chapter 2 - Credibility Theory

sup Crk {k } 0.5,

k = 1, , n.

(2.20)

k 6=k

It follows from (2.17) and (2.20) that


sup Cr{} =
6=

min Crk {k }

sup

) 1kn
(1 ,2 , ,n )6=(1 ,2 , ,n

sup min Crk {k } Cri {i }


i 6= 1ki1

min

i+1kn

Crk {k }

= sup Cri {i }.
i 6=i

We next suppose that


sup Cr{} > sup Cri {i }.

6=

i 6=i

Then there is a point (10 , 20 , , n0 ) 6= (1 , 2 , , n ) such that


min Crk {k0 } > sup Cri {i }.

1kn

i 6=i

Let j be one of the index such that j0 6= j . Then


Crj {j0 } > sup Cri {i }.
i 6=i

That is,
sup Crj {j } > sup Cri {i }

j 6=j

i 6=i

which is in contradiction with (2.19). Thus


sup Cr{} = sup Cri {i }.

6=

(2.21)

i 6=i

It follows from (2.16), (2.18) and (2.21) that


Cr{ } + sup Cr{} = Cri {i } + sup Cri {i } = 1.
6=

i 6=i

Thus Cr satisfies the credibility extension condition. It follows from the credibility extension theorem that Cr{A} is just the unique extension of Cr{}.
The theorem is proved.
Definition 2.3 Let (k , Pk , Crk ), k = 1, 2, , n be credibility spaces, =
1 2 n and Cr = Cr1 Cr2 Crn . Then (, P, Cr) is called
the product credibility space of (k , Pk , Crk ), k = 1, 2, , n.

63

Section 2.2 - Fuzzy Variables

Theorem 2.11 (Infinite Product Credibility Theorem) Suppose that k are


nonempty sets, Crk the credibility measures on Pk , k = 1, 2, , respectively.
Let = 1 2 Then

sup
inf Crk {k },

(1 ,2 , )A 1k<

if
sup
inf Crk {k } < 0.5

(1 ,2 , )A 1k<
Cr{A} =

1
sup
inf Crk {k },

(1 ,2 , )Ac 1k<

if
sup
inf Crk {k } 0.5

(1 ,2 , )A 1k<

is a credibility measure on

P.

Proof: Like Theorem 2.10 except Cr{(1 , 2 , )} = inf 1k< Crk {k }.


Definition 2.4 Let (k , Pk , Crk ), k = 1, 2, be credibility spaces. Define
= 1 2 and Cr = Cr1 Cr2 Then (, P, Cr) is called the
infinite product credibility space of (k , Pk , Crk ), k = 1, 2,

2.2

Fuzzy Variables

Definition 2.5 A fuzzy variable is a (measurable) function from a credibility


space (, P, Cr) to the set of real numbers.
Example 2.7: Take (, P, Cr) to be {1 , 2 } with Cr{1 } = Cr{2 } = 0.5.
Then the function
(
0, if = 1
() =
1, if = 2
is a fuzzy variable.
Example 2.8: Take (, P, Cr) to be the interval [0, 1] with Cr{} = /2 for
each [0, 1]. Then the identity function () = is a fuzzy variable.
Example 2.9: A crisp number c may be regarded as a special fuzzy variable.
In fact, it is the constant function () c on the credibility space (, P, Cr).
Remark 2.3: Since a fuzzy variable is a function on a credibility space,
for any set B of real numbers, the set



{ B} = () B
(2.22)
is always an element in P. In other words, the fuzzy variable is always a
measurable function and { B} is always an event.

64

Chapter 2 - Credibility Theory

Definition 2.6 A fuzzy variable is said to be


(a) nonnegative if Cr{ < 0} = 0;
(b) positive if Cr{ 0} = 0;
(c) continuous if Cr{ = x} is a continuous function of x;
(d) simple if there exists a finite sequence {x1 , x2 , , xm } such that
Cr { 6= x1 , 6= x2 , , 6= xm } = 0;

(2.23)

(e) discrete if there exists a countable sequence {x1 , x2 , } such that


Cr { 6= x1 , 6= x2 , } = 0.

(2.24)

Definition 2.7 Let 1 and 2 be fuzzy variables defined on the credibility


space (, P, Cr). We say 1 = 2 if 1 () = 2 () for almost all .
Fuzzy Vector
Definition 2.8 An n-dimensional fuzzy vector is defined as a function from
a credibility space (, P, Cr) to the set of n-dimensional real vectors.
Theorem 2.12 The vector (1 , 2 , , n ) is a fuzzy vector if and only if
1 , 2 , , n are fuzzy variables.
Proof: Write = (1 , 2 , , n ). Suppose that is a fuzzy vector. Then
1 , 2 , , n are functions from to <. Thus 1 , 2 , , n are fuzzy variables. Conversely, suppose that i are fuzzy variables defined on the credibility spaces (i , Pi , Cri ), i = 1, 2, , n, respectively. It is clear that
(1 , 2 , , n ) is a function from the product credibility space (, P, Cr)
to <n , i.e.,
(1 , 2 , , n ) = (1 (1 ), 2 (2 ), , n (n ))
for all (1 , 2 , , n ) . Hence = (1 , 2 , , n ) is a fuzzy vector.
Fuzzy Arithmetic
In this subsection, we will suppose that all fuzzy variables are defined on a
common credibility space. Otherwise, we may embed them into the product
credibility space.
Definition 2.9 Let f : <n < be a function, and 1 , 2 , , n fuzzy variables on the credibility space (, P, Cr). Then = f (1 , 2 , , n ) is a fuzzy
variable defined as
() = f (1 (), 2 (), , n ())
for any .

(2.25)

65

Section 2.3 - Membership Function

Example 2.10: Let 1 and 2 be fuzzy variables on the credibility space


(, P, Cr). Then their sum is
(1 + 2 )() = 1 () + 2 (),

and their product is


(1 2 )() = 1 () 2 (),

The reader may wonder whether (1 , 2 , , n ) defined by (2.25) is a


fuzzy variable. The following theorem answers this question.
Theorem 2.13 Let be an n-dimensional fuzzy vector, and f : <n < a
function. Then f () is a fuzzy variable.
Proof: Since f () is a function from a credibility space to the set of real
numbers, it is a fuzzy variable.

2.3

Membership Function

Definition 2.10 Let be a fuzzy variable defined on the credibility space


(, P, Cr). Then its membership function is derived from the credibility measure by
(x) = (2Cr{ = x}) 1, x <.
(2.26)
Membership function represents the degree that the fuzzy variable takes
some prescribed value. How do we determine membership functions? There
are several methods reported in the past literature. Anyway, the membership
degree (x) = 0 if x is an impossible point, and (x) = 1 if x is the most
possible point that takes.
Example 2.11: It is clear that a fuzzy variable has a unique membership
function. However, a membership function may produce multiple fuzzy variables. For example, let = {1 , 2 } and Cr{1 } = Cr{2 } = 0.5. Then
(, P, Cr) is a credibility space. We define


0, if = 1
1, if = 1
1 () =
2 () =
1, if = 2 ,
0, if = 2 .
It is clear that both of them are fuzzy variables and have the same membership function, (x) 1 on x = 0 or 1.
Theorem 2.14 (Credibility Inversion Theorem) Let be a fuzzy variable
with membership function . Then for any set B of real numbers, we have


1
Cr{ B} =
sup (x) + 1 sup (x) .
(2.27)
2 xB
xB c

66

Chapter 2 - Credibility Theory

Proof: If Cr{ B} 0.5, then by Axiom 2, we have Cr{ = x} 0.5 for


each x B. It follows from Axiom 4 that


1
1
Cr{ B} =
sup (2Cr{ = x} 1) = sup (x).
(2.28)
2 xB
2 xB
The self-duality of credibility measure implies that Cr{ B c } 0.5 and
supxB c Cr{ = x} 0.5, i.e.,
sup (x) = sup (2Cr{ = x} 1) = 1.
xB c

(2.29)

xB c

It follows from (2.28) and (2.29) that (2.27) holds.


If Cr{ B} 0.5, then Cr{ B c } 0.5. It follows from the first case
that


1
c
sup (x) + 1 sup (x)
Cr{ B} = 1 Cr{ B } = 1
2 xB c
xB


1
=
sup (x) + 1 sup (x) .
2 xB
xB c
The theorem is proved.
Example 2.12: Let be a fuzzy variable with membership function . Then
the following equations follow immediately from Theorem 2.14:
!
1
(x) + 1 sup (y) , x <;
(2.30)
Cr{ = x} =
2
y6=x
1
Cr{ x} =
2

1
Cr{ x} =
2


sup (y) + 1 sup (y) ,

x <;

(2.31)

x <.

(2.32)

y>x

yx


sup (y) + 1 sup (y) ,
y<x

yx

Especially, if is a continuous function, then


Cr{ = x} =

(x)
,
2

x <.

(2.33)

Theorem 2.15 (Sufficient and Necessary Condition for Membership Function) A function : < [0, 1] is a membership function if and only if
sup (x) = 1.
Proof: If is a membership function, then there exists a fuzzy variable
whose membership function is just , and
sup (x) = sup (2Cr{ = x}) 1.
x<

x<

67

Section 2.3 - Membership Function

If there is some point x < such that Cr{ = x} 0.5, then sup (x) = 1.
Otherwise, we have Cr{ = x} < 0.5 for each x <. It follows from Axiom 4
that
sup (x) = sup (2Cr{ = x}) 1 = 2 sup Cr{ = x} = 2 (Cr{} 0.5) = 1.
x<

x<

x<

Conversely, suppose that sup (x) = 1. For each x <, we define


!
1
Cr{x} =
(x) + 1 sup (y) .
2
y6=x
It is clear that

1
(1 + 1 1) = 0.5.
2

sup Cr{x}
x<

For any x < with Cr{x } 0.5, we have (x ) = 1 and


Cr{x } + sup Cr{y}
y6=x

1
+ sup

y6=x 2

1
=
2

(x ) + 1 sup (y)

= 1

1
1
sup (y) + sup (y) = 1.
2 y6=x
2 y6=x

y6=x

!
(y) + 1 sup (z)
z6=y

Thus Cr{x} satisfies the credibility extension condition, and has a unique
extension to credibility measure on P(<) by using the credibility extension
theorem. Now we define a fuzzy variable as an identity function from the
credibility space (<, P(<), Cr) to <. Then the membership function of the
fuzzy variable is
!
(2Cr{ = x}) 1 =

(x) + 1 sup (y)

1 = (x)

y6=x

for each x. The theorem is proved.


Remark 2.4: Theorem 2.15 states that the identity function is a universal
function for any fuzzy variable by defining an appropriate credibility space.
Theorem 2.16 A fuzzy variable with membership function is
(a) nonnegative if and only if (x) = 0 for all x < 0;
(b) positive if and only if (x) = 0 for all x 0;
(c) simple if and only if takes nonzero values at a finite number of points;
(d) discrete if and only if takes nonzero values at a countable set of points;
(e) continuous if and only if is a continuous function.
Proof: The theorem is obvious since the membership function (x) =
(2Cr{ = x}) 1 for each x <.

68

Chapter 2 - Credibility Theory

Some Special Membership Functions


By an equipossible fuzzy variable we mean the fuzzy variable fully determined
by the pair (a, b) of crisp numbers with a < b, whose membership function is
given by
(
1, if a x b
1 (x) =
0, otherwise.
By a triangular fuzzy variable we mean the fuzzy variable fully determined
by the triplet (a, b, c) of crisp numbers with a < b < c, whose membership
function is given by
xa

, if a x b

ba
xc
2 (x) =
, if b x c

bc

0,
otherwise.
By a trapezoidal fuzzy variable we mean the fuzzy variable fully determined
by the quadruplet (a, b, c, d) of crisp numbers with a < b < c < d, whose
membership function is given by

xa

b a , if a x b

1,
if b x c
3 (x) =

xd

, if c x d

cd

0,
otherwise.

3 (x)

2 (x)

1 (x)

..
..
..
.........
.........
.........
..
..
...
... . . . . ........................................................... . . . . . . . . . . . ...... . . . . . . . . . . ......... . . . . . . . . . . . . . . . . . . .... . . . . . . . . ............................................
...
...
...
...
.. ..
...
.
.
.
.. .. ....
.....
.
.
.
...
.
...
....
. ...
.
.
.. .
...
....
...
... .. .....
. ..
.
.
.. .
.
.
.
.
.
.
.
.
...
.
.
.
. .
.
.
.. . ...
..
. .....
.
.
....
....
...
.. .. ....
.. ..
. ..
.
.
.
.
.
.
...
.
.
.
.
...
. ...
.
.
.
. .
.
.
.
.
.
.
.
.
.
.
.
.
...
. .
.
.
.
...
.. ..
..
. ....
.
.
.
....
....
...
...
..
.. .
. ..
.
.
.
.
.
.
.
.
.
.
.
.
...
...
...
.
.
.
.
.
.
.
.
.
..
.
.
.
.
....
...
.
.
.
.
.
...
...
.
.
.
..
.
.
.
.
.
.
.
...
.
.
.
.
...
...
.
.
.
.
.
.
.
.
.
.
.
.
.
...
.
.
.
.
...
.
...
.
.
.
.
.
.
.
..
.
.
..
.
.
....
...
.
.
.
.
.
. ...
.
................................................................................................................................
.............................................................................................................
..............................................................................................
....
....
...
....
....
....

a b

c d

Figure 2.1: Membership Functions 1 , 2 and 3


Joint Membership Function
Definition 2.11 If = (1 , 2 , , n ) is a fuzzy vector on the credibility
space (, P, Cr). Then its joint membership function is derived from the

69

Section 2.4 - Credibility Distribution

credibility measure by
(x) = (2Cr{ = x}) 1,

x <n .

(2.34)

Theorem 2.17 (Sufficient and Necessary Condition for Joint Membership


Function) A function : <n [0, 1] is a joint membership function if and
only if sup (x) = 1.

Proof: Like Theorem 2.15.

2.4

Credibility Distribution

Definition 2.12 (Liu [124]) The credibility distribution : < [0, 1] of a


fuzzy variable is defined by



(x) = Cr () x .

(2.35)

That is, (x) is the credibility that the fuzzy variable takes a value less than
or equal to x. Generally speaking, the credibility distribution is neither
left-continuous nor right-continuous.
Example 2.13: The credibility distribution of an equipossible fuzzy variable
(a, b) is

0, if x < a
1/2, if a x < b
1 (x) =

1, if x b.
Especially, if is an equipossible fuzzy variable on <, then 1 (x) 1/2.
Example 2.14: The credibility distribution
(a, b, c) is

0,
if

2(b a) , if
2 (x) =
x + c 2b

, if

2(c b)

1,
if

of a triangular fuzzy variable


xa
axb
bxc
x c.

70

Chapter 2 - Credibility Theory

Example 2.15: The credibility distribution of a trapezoidal fuzzy variable


(a, b, c, d) is

0,
if x a

,
if a x b

2(b a)

1
,
if b x c
3 (x) =
2

x + d 2c

, if c x d

2(d c)

1,
if x d.

1
0.5
0

3 (x)

2 (x)

1 (x)

....
....
....
.......
.......
........
... . . . . . . . . . . . . . ............................ . . . . . . . . . ...... . . . . . . . . . . . . . . . . . ............ . . . . . . . . . .... . . . . . . . . . . . . . . . . . . . . . . . ...........
.
.
...
.
...
.
...
.
.
....
....
...
....
...
.
.
... .
... .
...
...
...
.. .
.. .
.
.
.
.
.
.
.
...
.
.
.
.
.
....
....
...
.
... .
... .
.
.
.
...
...
...
...
...
.
.
.
..
..
...
...
...
.
.. . . . . .......................................... . . . . . . . . . . . . . . . . ..... . . . . . . . . . . . . ......... . . . .... . . . . . . . . . . . ... . . . . . . . . ...........................................
.
.
.. .
...
.
..
.
.
.
.
.
.
.
.
.
.
.
....
.
.
.
.
... .
.
.
.
.
.
..
.
.
.
.
.
.
.
.
...
.
.
.
.
.
.
.
.
.
...
. ..
.
.
.
.
.
.
....
...
.
.
.
.
.
.
.
.
...
.
. .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
...
...
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
...
.
.
.
.
.
.
.
...
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
...
...............................................................................................................................
................................................................................................................
................................................................................................
....
....
....
...
...
...

b c

a b

c d

Figure 2.2: Credibility Distributions 1 , 2 and 3


Theorem 2.18 Let be a fuzzy variable with membership function . Then
its credibility distribution is


1
(x) =
sup (y) + 1 sup (y) , x <.
(2.36)
2 yx
y>x
Proof: It follows from the credibility inversion theorem immediately.
Theorem 2.19 (Liu [129], Sufficient and Necessary Condition for Credibility Distribution) A function : < [0, 1] is a credibility distribution if and
only if it is an increasing function with
lim (x) 0.5 lim (x),

lim (y) = (x) if lim (y) > 0.5 or (x) 0.5.


yx

yx

(2.37)
(2.38)

Proof: It is obvious that a credibility distribution is an increasing function. The inequalities (2.37) follow from the credibility asymptotic theorem
immediately. Assume that x is a point at which limyx (y) > 0.5. That is,
lim Cr{ y} > 0.5.
yx

71

Section 2.4 - Credibility Distribution

Since { y} { x} as y x, it follows from the credibility semicontinuity


law that
(y) = Cr{ y} Cr{ x} = (x)
as y x. When x is a point at which (x) 0.5, if limyx (y) 6= (x), then
we have
lim (y) > (x) 0.5.
yx

For this case, we have proved that limyx (y) = (x). Thus (2.37) and
(2.38) are proved.
Conversely, if : < [0, 1] is an increasing function satisfying (2.37) and
(2.38), then

2(x),
if (x) < 0.5

1,
if lim (y) < 0.5 (x)
(x) =
(2.39)
yx

2 2(x), if 0.5 lim (y)


yx

takes values in [0, 1] and sup (x) = 1. It follows from Theorem 2.15 that
there is a fuzzy variable whose membership function is just . Let us verify
that is the credibility distribution of , i.e., Cr{ x} = (x) for each x.
The argument breaks down into two cases. (i) If (x) < 0.5, then we have
supy>x (y) = 1, and (y) = 2(y) for each y with y x. Thus
1
Cr{ x} =
2


sup (y) + 1 sup (y) = sup (y) = (x).
y>x

yx

yx

(ii) If (x) 0.5, then we have supyx (y) = 1 and (y) (x) 0.5 for
each y with y > x. Thus (y) = 2 2(y) and


1
Cr{ x} =
sup (y) + 1 sup (y)
2 yx
y>x


1
=
1 + 1 sup(2 2(y))
2
y>x
= inf (y) = lim (y) = (x).
y>x

yx

The theorem is proved.


Example 2.16: Let a and b be two numbers with 0 a 0.5 b 1. We
define a fuzzy variable by the following membership function,

if x < 0

2a,
1,
if x = 0
(x) =

2 2b, if x > 0.

72

Chapter 2 - Credibility Theory

Then its credibility distribution is


(
(x) =

a, if x < 0
b, if x 0.

Thus we have
lim (x) = a,

lim (x) = b.

x+

Theorem 2.20 A fuzzy variable ξ with credibility distribution Φ is
(a) nonnegative if and only if Φ(x) = 0 for all x < 0;
(b) positive if and only if Φ(x) = 0 for all x ≤ 0.
Proof: It follows immediately from the definition.

Theorem 2.21 Let ξ be a fuzzy variable. Then we have
(a) if ξ is simple, then its credibility distribution is a simple function;
(b) if ξ is discrete, then its credibility distribution is a step function;
(c) if ξ is continuous on the real line ℜ, then its credibility distribution is a continuous function.
Proof: Parts (a) and (b) follow immediately from the definition. Part (c) follows from Theorem 2.18 and the continuity of the membership function.
Example 2.17: However, the converse of Theorem 2.21 is not true. For example, let ξ be a fuzzy variable whose membership function is
\[ \mu(x)=\begin{cases} x, & \text{if } 0\le x\le 1 \\ 1, & \text{otherwise.} \end{cases} \]
Then its credibility distribution is Φ(x) ≡ 0.5. It is clear that Φ(x) is simple and continuous. But the fuzzy variable ξ is neither simple nor continuous.

Definition 2.13 A continuous fuzzy variable is said to be (a) singular if its credibility distribution is a singular function; (b) absolutely continuous if its credibility distribution is absolutely continuous.
Definition 2.14 (Liu [124]) The credibility density function φ : ℜ → [0, +∞) of a fuzzy variable ξ is a function such that
\[ \Phi(x)=\int_{-\infty}^{x}\phi(y)\,dy, \quad \forall x\in\Re, \tag{2.40} \]
\[ \int_{-\infty}^{+\infty}\phi(y)\,dy=1, \tag{2.41} \]
where Φ is the credibility distribution of the fuzzy variable ξ.


Example 2.18: The credibility density function of a triangular fuzzy variable (a, b, c) is
\[ \phi(x)=\begin{cases} \dfrac{1}{2(b-a)}, & \text{if } a\le x\le b \\[4pt] \dfrac{1}{2(c-b)}, & \text{if } b\le x\le c \\[4pt] 0, & \text{otherwise.} \end{cases} \]

Example 2.19: The credibility density function of a trapezoidal fuzzy variable (a, b, c, d) is
\[ \phi(x)=\begin{cases} \dfrac{1}{2(b-a)}, & \text{if } a\le x\le b \\[4pt] \dfrac{1}{2(d-c)}, & \text{if } c\le x\le d \\[4pt] 0, & \text{otherwise.} \end{cases} \]
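A small hedged check of (2.40)-(2.41) against Example 2.18: numerically integrating the piecewise-constant density of a triangular variable should recover its credibility distribution. The parameters (0, 1, 2) are illustrative.

```python
import numpy as np

# Sketch: integrating the triangular credibility density reproduces Phi.
a, b, c = 0.0, 1.0, 2.0

def phi(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    out[(a <= x) & (x <= b)] = 1.0 / (2.0 * (b - a))
    out[(b < x) & (x <= c)] = 1.0 / (2.0 * (c - b))
    return out

xs = np.linspace(a - 1.0, c + 1.0, 20001)
Phi = np.cumsum(phi(xs)) * (xs[1] - xs[0])  # running integral of phi
print(Phi[-1])                # ~1, as (2.41) requires
print(np.interp(b, xs, Phi))  # ~0.5 at the mode b
```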
Example 2.20: The credibility density function of an equipossible fuzzy variable (a, b) does not exist.

Example 2.21: The credibility density function does not necessarily exist even if the membership function is continuous and unimodal with a finite support. Let f be the Cantor function, and set
\[ \mu(x)=\begin{cases} f(x), & \text{if } 0\le x\le 1 \\ f(2-x), & \text{if } 1<x\le 2 \\ 0, & \text{otherwise.} \end{cases} \tag{2.42} \]
Then μ is a continuous and unimodal function with μ(1) = 1. Hence μ is a membership function. However, its credibility distribution is not an absolutely continuous function. Thus the credibility density function does not exist.
Theorem 2.22 Let ξ be a fuzzy variable whose credibility density function φ exists. Then we have
\[ \mathrm{Cr}\{\xi\le x\}=\int_{-\infty}^{x}\phi(y)\,dy, \qquad \mathrm{Cr}\{\xi\ge x\}=\int_{x}^{+\infty}\phi(y)\,dy. \tag{2.43} \]
Proof: The first part follows immediately from the definition. In addition, by the self-duality of credibility measure, we have
\[ \mathrm{Cr}\{\xi\ge x\}=1-\mathrm{Cr}\{\xi<x\}=\int_{-\infty}^{+\infty}\phi(y)\,dy-\int_{-\infty}^{x}\phi(y)\,dy=\int_{x}^{+\infty}\phi(y)\,dy. \]
The theorem is proved.


Example 2.22: Different from the random case, generally speaking,
\[ \mathrm{Cr}\{a\le\xi\le b\}\ne\int_{a}^{b}\phi(y)\,dy. \]
Consider the trapezoidal fuzzy variable ξ = (1, 2, 3, 4). Then Cr{2 ≤ ξ ≤ 3} = 0.5. However, it is obvious that φ(x) = 0 when 2 ≤ x ≤ 3, and
\[ \int_{2}^{3}\phi(y)\,dy=0\ne 0.5=\mathrm{Cr}\{2\le\xi\le 3\}. \]

Joint Credibility Distribution

Definition 2.15 Let (ξ1, ξ2, ..., ξn) be a fuzzy vector. Then the joint credibility distribution Φ : ℜⁿ → [0, 1] is defined by
\[ \Phi(x_1,x_2,\dots,x_n)=\mathrm{Cr}\big\{\xi_1\le x_1,\ \xi_2\le x_2,\ \dots,\ \xi_n\le x_n\big\}. \]

Definition 2.16 The joint credibility density function φ : ℜⁿ → [0, +∞) of a fuzzy vector (ξ1, ξ2, ..., ξn) is a function such that
\[ \Phi(x_1,x_2,\dots,x_n)=\int_{-\infty}^{x_1}\int_{-\infty}^{x_2}\cdots\int_{-\infty}^{x_n}\phi(y_1,y_2,\dots,y_n)\,dy_1\,dy_2\cdots dy_n \]
holds for all (x1, x2, ..., xn) ∈ ℜⁿ, and
\[ \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\cdots\int_{-\infty}^{+\infty}\phi(y_1,y_2,\dots,y_n)\,dy_1\,dy_2\cdots dy_n=1, \]
where Φ is the joint credibility distribution of the fuzzy vector (ξ1, ξ2, ..., ξn).

2.5 Independence

The independence of fuzzy variables has been discussed by many authors from different angles, for example, Zadeh [248], Nahmias [163], Yager [231], Liu [129], Liu and Gao [148], and Li and Liu [99]. A number of equivalent conditions for independence have been presented. Here we use the following condition.

Definition 2.17 (Liu and Gao [148]) The fuzzy variables ξ1, ξ2, ..., ξm are said to be independent if
\[ \mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i\in B_i\}\Big\}=\min_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i\} \tag{2.44} \]
for any sets B1, B2, ..., Bm of ℜ.


Theorem 2.23 The fuzzy variables ξ1, ξ2, ..., ξm are independent if and only if
\[ \mathrm{Cr}\Big\{\bigcup_{i=1}^{m}\{\xi_i\in B_i\}\Big\}=\max_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i\} \tag{2.45} \]
for any sets B1, B2, ..., Bm of ℜ.

Proof: It follows from the self-duality of credibility measure that ξ1, ξ2, ..., ξm are independent if and only if
\[ \mathrm{Cr}\Big\{\bigcup_{i=1}^{m}\{\xi_i\in B_i\}\Big\}=1-\mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i\in B_i^c\}\Big\}=1-\min_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i^c\}=\max_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i\}. \]
Thus (2.45) is verified. The proof is complete.


Theorem 2.24 The fuzzy variables ξ1, ξ2, ..., ξm are independent if and only if
\[ \mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i=x_i\}\Big\}=\min_{1\le i\le m}\mathrm{Cr}\{\xi_i=x_i\} \tag{2.46} \]
for any real numbers x1, x2, ..., xm with Cr{∩_{i=1}^{m}{ξi = xi}} < 0.5.

Proof: If ξ1, ξ2, ..., ξm are independent, then we have (2.46) immediately by taking Bi = {xi} for each i. Conversely, if Cr{∩_{i=1}^{m}{ξi ∈ Bi}} ≥ 0.5, it follows from Theorem 2.2 that (2.44) holds. Otherwise, we have Cr{∩_{i=1}^{m}{ξi = xi}} < 0.5 for any real numbers xi ∈ Bi, i = 1, 2, ..., m, and
\[
\mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i\in B_i\}\Big\}
=\mathrm{Cr}\Big\{\bigcup_{x_i\in B_i,1\le i\le m}\ \bigcap_{i=1}^{m}\{\xi_i=x_i\}\Big\}
=\sup_{x_i\in B_i,1\le i\le m}\mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i=x_i\}\Big\}
=\sup_{x_i\in B_i,1\le i\le m}\ \min_{1\le i\le m}\mathrm{Cr}\{\xi_i=x_i\}
=\min_{1\le i\le m}\ \sup_{x_i\in B_i}\mathrm{Cr}\{\xi_i=x_i\}
=\min_{1\le i\le m}\mathrm{Cr}\{\xi_i\in B_i\}.
\]
Hence (2.44) is true, and ξ1, ξ2, ..., ξm are independent. The theorem is thus proved.
Theorem 2.25 Let μi be membership functions of fuzzy variables ξi, i = 1, 2, ..., m, respectively, and μ the joint membership function of the fuzzy vector (ξ1, ξ2, ..., ξm). Then the fuzzy variables ξ1, ξ2, ..., ξm are independent if and only if
\[ \mu(x_1,x_2,\dots,x_m)=\min_{1\le i\le m}\mu_i(x_i) \tag{2.47} \]
for any real numbers x1, x2, ..., xm.


Proof: Suppose that ξ1, ξ2, ..., ξm are independent. It follows from Theorem 2.24 that
\[ \mu(x_1,x_2,\dots,x_m)=\Big(2\,\mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i=x_i\}\Big\}\Big)\wedge 1=\Big(2\min_{1\le i\le m}\mathrm{Cr}\{\xi_i=x_i\}\Big)\wedge 1=\min_{1\le i\le m}\big((2\,\mathrm{Cr}\{\xi_i=x_i\})\wedge 1\big)=\min_{1\le i\le m}\mu_i(x_i). \]
Conversely, for any real numbers x1, x2, ..., xm with Cr{∩_{i=1}^{m}{ξi = xi}} < 0.5, we have
\[ \mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i=x_i\}\Big\}=\frac12\Big(\Big(2\,\mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i=x_i\}\Big\}\Big)\wedge 1\Big)=\frac12\,\mu(x_1,x_2,\dots,x_m)=\frac12\min_{1\le i\le m}\mu_i(x_i)=\frac12\Big(\min_{1\le i\le m}(2\,\mathrm{Cr}\{\xi_i=x_i\})\wedge 1\Big)=\min_{1\le i\le m}\mathrm{Cr}\{\xi_i=x_i\}. \]
It follows from Theorem 2.24 that ξ1, ξ2, ..., ξm are independent. The theorem is proved.
Theorem 2.26 Let Φi be credibility distributions of fuzzy variables ξi, i = 1, 2, ..., m, respectively, and Φ the joint credibility distribution of the fuzzy vector (ξ1, ξ2, ..., ξm). If ξ1, ξ2, ..., ξm are independent, then we have
\[ \Phi(x_1,x_2,\dots,x_m)=\min_{1\le i\le m}\Phi_i(x_i) \tag{2.48} \]
for any real numbers x1, x2, ..., xm.

Proof: Since ξ1, ξ2, ..., ξm are independent fuzzy variables, we have
\[ \Phi(x_1,x_2,\dots,x_m)=\mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i\le x_i\}\Big\}=\min_{1\le i\le m}\mathrm{Cr}\{\xi_i\le x_i\}=\min_{1\le i\le m}\Phi_i(x_i) \]
for any real numbers x1, x2, ..., xm. The theorem is proved.


Example 2.23: However, equation (2.48) does not imply that the fuzzy variables are independent. For example, let ξ be a fuzzy variable with credibility distribution Φ. Then the joint credibility distribution of the fuzzy vector (ξ, ξ) is
\[ \Phi(x_1,x_2)=\mathrm{Cr}\{\xi\le x_1,\ \xi\le x_2\}=\mathrm{Cr}\{\xi\le x_1\}\wedge\mathrm{Cr}\{\xi\le x_2\}=\Phi(x_1)\wedge\Phi(x_2) \]
for any real numbers x1 and x2. But, generally speaking, a fuzzy variable is not independent of itself.
Theorem 2.27 Let ξ1, ξ2, ..., ξm be independent fuzzy variables, and f1, f2, ..., fm real-valued functions. Then f1(ξ1), f2(ξ2), ..., fm(ξm) are independent fuzzy variables.

Proof: For any sets B1, B2, ..., Bm of ℜ, we have
\[ \mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{f_i(\xi_i)\in B_i\}\Big\}=\mathrm{Cr}\Big\{\bigcap_{i=1}^{m}\{\xi_i\in f_i^{-1}(B_i)\}\Big\}=\min_{1\le i\le m}\mathrm{Cr}\{\xi_i\in f_i^{-1}(B_i)\}=\min_{1\le i\le m}\mathrm{Cr}\{f_i(\xi_i)\in B_i\}. \]
Thus f1(ξ1), f2(ξ2), ..., fm(ξm) are independent fuzzy variables.


Theorem 2.28 (Extension Principle of Zadeh) Let ξ1, ξ2, ..., ξn be independent fuzzy variables with membership functions μ1, μ2, ..., μn, respectively, and f : ℜⁿ → ℜ a function. Then the membership function μ of ξ = f(ξ1, ξ2, ..., ξn) is derived from the membership functions μ1, μ2, ..., μn by
\[ \mu(x)=\sup_{x=f(x_1,x_2,\dots,x_n)}\ \min_{1\le i\le n}\mu_i(x_i) \tag{2.49} \]
for any x ∈ ℜ. Here we set μ(x) = 0 if there are no real numbers x1, x2, ..., xn such that x = f(x1, x2, ..., xn).

Proof: It follows from Definition 2.10 that the membership function of ξ = f(ξ1, ξ2, ..., ξn) is
\[
\begin{aligned}
\mu(x) &= \big(2\,\mathrm{Cr}\{f(\xi_1,\xi_2,\dots,\xi_n)=x\}\big)\wedge 1 \\
&= \Big(2\,\mathrm{Cr}\Big\{\bigcup_{x=f(x_1,x_2,\dots,x_n)}\{\xi_1=x_1,\xi_2=x_2,\dots,\xi_n=x_n\}\Big\}\Big)\wedge 1 \\
&= \Big(2\sup_{x=f(x_1,x_2,\dots,x_n)}\mathrm{Cr}\{\xi_1=x_1,\xi_2=x_2,\dots,\xi_n=x_n\}\Big)\wedge 1 \\
&= \Big(2\sup_{x=f(x_1,x_2,\dots,x_n)}\min_{1\le i\le n}\mathrm{Cr}\{\xi_i=x_i\}\Big)\wedge 1 \quad \text{(by independence)} \\
&= \sup_{x=f(x_1,x_2,\dots,x_n)}\ \min_{1\le i\le n}\big((2\,\mathrm{Cr}\{\xi_i=x_i\})\wedge 1\big) \\
&= \sup_{x=f(x_1,x_2,\dots,x_n)}\ \min_{1\le i\le n}\mu_i(x_i).
\end{aligned}
\]
The theorem is proved.


Remark 2.5: The extension principle of Zadeh is only applicable to operations on independent fuzzy variables. In the past literature, the extension principle was used as a postulate. However, it is treated as a theorem in credibility theory.
Example 2.24: The sum of independent equipossible fuzzy variables ξ = (a1, a2) and η = (b1, b2) is also an equipossible fuzzy variable, and
\[ \xi+\eta=(a_1+b_1,\ a_2+b_2). \]
Their product is also an equipossible fuzzy variable, and
\[ \xi\cdot\eta=\Big(\min_{a_1\le x\le a_2,\,b_1\le y\le b_2}xy,\ \max_{a_1\le x\le a_2,\,b_1\le y\le b_2}xy\Big). \]

Example 2.25: The sum of independent triangular fuzzy variables ξ = (a1, a2, a3) and η = (b1, b2, b3) is also a triangular fuzzy variable, and
\[ \xi+\eta=(a_1+b_1,\ a_2+b_2,\ a_3+b_3). \]
The product of a triangular fuzzy variable ξ = (a1, a2, a3) and a scalar number λ is
\[ \lambda\cdot\xi=\begin{cases} (\lambda a_1,\lambda a_2,\lambda a_3), & \text{if } \lambda\ge 0 \\ (\lambda a_3,\lambda a_2,\lambda a_1), & \text{if } \lambda<0. \end{cases} \]
That is, the product of a triangular fuzzy variable and a scalar number is also a triangular fuzzy variable. However, the product of two triangular fuzzy variables is not a triangular one.
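A hedged numerical sketch of the extension principle (2.49) applied to Example 2.25: the membership function of ξ + η is the sup-min convolution of the two membership functions, and for triangular variables it should match the closed-form sum rule. The parameters and grid below are illustrative.

```python
import numpy as np

# Sup-min convolution of two triangular membership functions (sketch).
def mu_tri(y, a, b, c):
    y = np.asarray(y, dtype=float)
    return np.clip(np.minimum((y - a) / (b - a), (c - y) / (c - b)), 0.0, 1.0)

xs = np.linspace(-1.0, 7.0, 1601)
mu1 = mu_tri(xs, 0.0, 1.0, 2.0)   # xi  = (0, 1, 2)
mu2 = mu_tri(xs, 1.0, 2.0, 3.0)   # eta = (1, 2, 3)

def mu_sum(x):
    # sup over x1 of min(mu1(x1), mu2(x - x1)) on the grid
    m2 = np.interp(x - xs, xs, mu2, left=0.0, right=0.0)
    return np.minimum(mu1, m2).max()

print(mu_sum(3.0))  # ~1 at the mode 1 + 2
print(mu_sum(2.0))  # ~0.5, matching the triangular sum (1, 3, 5)
```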
Example 2.26: The sum of independent trapezoidal fuzzy variables ξ = (a1, a2, a3, a4) and η = (b1, b2, b3, b4) is also a trapezoidal fuzzy variable, and
\[ \xi+\eta=(a_1+b_1,\ a_2+b_2,\ a_3+b_3,\ a_4+b_4). \]
The product of a trapezoidal fuzzy variable ξ = (a1, a2, a3, a4) and a scalar number λ is
\[ \lambda\cdot\xi=\begin{cases} (\lambda a_1,\lambda a_2,\lambda a_3,\lambda a_4), & \text{if } \lambda\ge 0 \\ (\lambda a_4,\lambda a_3,\lambda a_2,\lambda a_1), & \text{if } \lambda<0. \end{cases} \]
That is, the product of a trapezoidal fuzzy variable and a scalar number is also a trapezoidal fuzzy variable. However, the product of two trapezoidal fuzzy variables is not a trapezoidal one.
Example 2.27: Let ξ1, ξ2, ..., ξn be independent fuzzy variables with membership functions μ1, μ2, ..., μn, respectively, and f : ℜⁿ → ℜ a function. Then for any set B of real numbers, the credibility Cr{f(ξ1, ξ2, ..., ξn) ∈ B} is
\[ \frac12\Big(\sup_{f(x_1,x_2,\dots,x_n)\in B}\ \min_{1\le i\le n}\mu_i(x_i)+1-\sup_{f(x_1,x_2,\dots,x_n)\in B^c}\ \min_{1\le i\le n}\mu_i(x_i)\Big). \]

2.6 Identical Distribution

Definition 2.18 (Liu [129]) The fuzzy variables ξ and η are said to be identically distributed if
\[ \mathrm{Cr}\{\xi\in B\}=\mathrm{Cr}\{\eta\in B\} \tag{2.50} \]
for any set B of ℜ.
Theorem 2.29 The fuzzy variables ξ and η are identically distributed if and only if ξ and η have the same membership function.

Proof: Let μ and ν be the membership functions of ξ and η, respectively. If ξ and η are identically distributed fuzzy variables, then, for any x ∈ ℜ, we have
\[ \mu(x)=(2\,\mathrm{Cr}\{\xi=x\})\wedge 1=(2\,\mathrm{Cr}\{\eta=x\})\wedge 1=\nu(x). \]
Thus ξ and η have the same membership function.
Conversely, if ξ and η have the same membership function, i.e., μ(x) ≡ ν(x), then, by using the credibility inversion theorem, we have
\[ \mathrm{Cr}\{\xi\in B\}=\frac12\Big(\sup_{x\in B}\mu(x)+1-\sup_{x\in B^c}\mu(x)\Big)=\frac12\Big(\sup_{x\in B}\nu(x)+1-\sup_{x\in B^c}\nu(x)\Big)=\mathrm{Cr}\{\eta\in B\} \]
for any set B of ℜ. Thus ξ and η are identically distributed fuzzy variables.
Theorem 2.30 The fuzzy variables ξ and η are identically distributed if and only if Cr{ξ = x} = Cr{η = x} for each x ∈ ℜ.

Proof: If ξ and η are identically distributed fuzzy variables, then we immediately have Cr{ξ = x} = Cr{η = x} for each x. Conversely, it follows from
\[ \mu(x)=(2\,\mathrm{Cr}\{\xi=x\})\wedge 1=(2\,\mathrm{Cr}\{\eta=x\})\wedge 1=\nu(x) \]
that ξ and η have the same membership function. Thus ξ and η are identically distributed fuzzy variables.

Theorem 2.31 If ξ and η are identically distributed fuzzy variables, then ξ and η have the same credibility distribution.

Proof: If ξ and η are identically distributed fuzzy variables, then, for any x ∈ ℜ, we have Cr{ξ ∈ (−∞, x]} = Cr{η ∈ (−∞, x]}. Thus ξ and η have the same credibility distribution.
Example 2.28: The converse of Theorem 2.31 is not true. We consider two fuzzy variables with the following membership functions,
\[ \mu(x)=\begin{cases} 1.0, & \text{if } x=0 \\ 0.6, & \text{if } x=1 \\ 0.8, & \text{if } x=2, \end{cases} \qquad \nu(x)=\begin{cases} 1.0, & \text{if } x=0 \\ 0.7, & \text{if } x=1 \\ 0.8, & \text{if } x=2. \end{cases} \]
It is easy to verify that ξ and η have the same credibility distribution,
\[ \Phi(x)=\begin{cases} 0, & \text{if } x<0 \\ 0.6, & \text{if } 0\le x<2 \\ 1, & \text{if } x\ge 2. \end{cases} \]
However, they are not identically distributed fuzzy variables.
Theorem 2.32 Let ξ and η be two fuzzy variables whose credibility density functions exist. If ξ and η are identically distributed, then they have the same credibility density function.

Proof: It follows from Theorem 2.31 that the fuzzy variables ξ and η have the same credibility distribution. Hence they have the same credibility density function.

2.7 Expected Value

There are many ways to define an expected value operator for fuzzy variables. See, for example, Dubois and Prade [34], Heilpern [59], Campos and González [13], González [53] and Yager [225][236]. The most general definition of the expected value operator of a fuzzy variable was given by Liu and Liu [126]. This definition is applicable to not only continuous fuzzy variables but also discrete ones.

Definition 2.19 (Liu and Liu [126]) Let ξ be a fuzzy variable. Then the expected value of ξ is defined by
\[ E[\xi]=\int_{0}^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr-\int_{-\infty}^{0}\mathrm{Cr}\{\xi\le r\}\,dr \tag{2.51} \]
provided that at least one of the two integrals is finite.
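A hedged numerical sketch of Definition 2.19: the expected value as the difference of the two integrals in (2.51), with the credibilities computed from a membership function via the inversion theorem. The triangular variable (0, 1, 3) is our illustrative choice; Example 2.30 below gives the closed form (a + 2b + c)/4 = 1.25 for comparison.

```python
import numpy as np

# Sketch of E[xi] per (2.51) for a triangular membership function.
a, b, c = 0.0, 1.0, 3.0
grid = np.linspace(a - 2.0, c + 2.0, 20001)
vals = np.clip(np.minimum((grid - a) / (b - a), (c - grid) / (c - b)), 0, 1)

def cr_ge(r):   # Cr{xi >= r} by the credibility inversion theorem
    m = grid >= r
    return 0.5 * (vals[m].max(initial=0.0) + 1.0 - vals[~m].max(initial=0.0))

def cr_le(r):   # Cr{xi <= r}, symmetrically
    m = grid <= r
    return 0.5 * (vals[m].max(initial=0.0) + 1.0 - vals[~m].max(initial=0.0))

rp = np.linspace(0.0, c + 2.0, 2001)
rn = np.linspace(a - 2.0, 0.0, 2001)
E = np.trapz([cr_ge(r) for r in rp], rp) - np.trapz([cr_le(r) for r in rn], rn)
print(E)   # ~1.25 = (a + 2b + c)/4
```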


Example 2.29: Let ξ be the equipossible fuzzy variable (a, b). If a ≥ 0, then Cr{ξ ≤ r} ≡ 0 when r < 0, and
\[ \mathrm{Cr}\{\xi\ge r\}=\begin{cases} 1, & \text{if } r\le a \\ 0.5, & \text{if } a<r\le b \\ 0, & \text{if } r>b, \end{cases} \]
\[ E[\xi]=\Big(\int_0^a 1\,dr+\int_a^b 0.5\,dr+\int_b^{+\infty}0\,dr\Big)-\int_{-\infty}^0 0\,dr=\frac{a+b}{2}. \]
If b ≤ 0, then Cr{ξ ≥ r} ≡ 0 when r > 0, and
\[ \mathrm{Cr}\{\xi\le r\}=\begin{cases} 1, & \text{if } r\ge b \\ 0.5, & \text{if } a\le r<b \\ 0, & \text{if } r<a, \end{cases} \]
\[ E[\xi]=\int_0^{+\infty}0\,dr-\Big(\int_{-\infty}^a 0\,dr+\int_a^b 0.5\,dr+\int_b^0 1\,dr\Big)=\frac{a+b}{2}. \]
If a < 0 < b, then
\[ \mathrm{Cr}\{\xi\ge r\}=\begin{cases} 0.5, & \text{if } 0\le r\le b \\ 0, & \text{if } r>b, \end{cases} \qquad \mathrm{Cr}\{\xi\le r\}=\begin{cases} 0, & \text{if } r<a \\ 0.5, & \text{if } a\le r\le 0, \end{cases} \]
\[ E[\xi]=\Big(\int_0^b 0.5\,dr+\int_b^{+\infty}0\,dr\Big)-\Big(\int_{-\infty}^a 0\,dr+\int_a^0 0.5\,dr\Big)=\frac{a+b}{2}. \]
Thus we always have the expected value (a + b)/2.


Example 2.30: The triangular fuzzy variable ξ = (a, b, c) has an expected value E[ξ] = (a + 2b + c)/4.

Example 2.31: The trapezoidal fuzzy variable ξ = (a, b, c, d) has an expected value E[ξ] = (a + b + c + d)/4.

Example 2.32: Let ξ be a continuous nonnegative fuzzy variable with membership function μ. If μ is decreasing on [0, +∞), then Cr{ξ ≥ x} = μ(x)/2 for any x > 0, and
\[ E[\xi]=\frac12\int_0^{+\infty}\mu(x)\,dx. \]

Example 2.33: Let ξ be a continuous fuzzy variable with membership function μ. If its expected value exists, and there is a point x0 such that μ(x) is increasing on (−∞, x0) and decreasing on (x0, +∞), then
\[ E[\xi]=x_0+\frac12\int_{x_0}^{+\infty}\mu(x)\,dx-\frac12\int_{-\infty}^{x_0}\mu(x)\,dx. \]

Example 2.34: Let ξ be a fuzzy variable with membership function
\[ \mu(x)=\begin{cases} 0, & \text{if } x<0 \\ x, & \text{if } 0\le x\le 1 \\ 1, & \text{if } x>1. \end{cases} \]


Then its expected value is +∞. If ξ is a fuzzy variable with membership function
\[ \mu(x)=\begin{cases} 1, & \text{if } x<0 \\ 1-x, & \text{if } 0\le x\le 1 \\ 0, & \text{if } x>1, \end{cases} \]
then its expected value is −∞.
Example 2.35: The expected value may not exist for some fuzzy variables. For example, the fuzzy variable ξ with membership function
\[ \mu(x)=\frac{1}{1+|x|}, \quad x\in\Re \]
does not have an expected value because both of the integrals
\[ \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr \quad\text{and}\quad \int_{-\infty}^{0}\mathrm{Cr}\{\xi\le r\}\,dr \]
are infinite.
Example 2.36: The definition of the expected value operator is also applicable to the discrete case. Assume that ξ is a simple fuzzy variable whose membership function is given by
\[ \mu(x)=\begin{cases} \mu_1, & \text{if } x=x_1 \\ \mu_2, & \text{if } x=x_2 \\ \ \cdots \\ \mu_m, & \text{if } x=x_m, \end{cases} \tag{2.52} \]
where x1, x2, ..., xm are distinct numbers. Note that μ1 ∨ μ2 ∨ ⋯ ∨ μm = 1. Definition 2.19 implies that the expected value of ξ is
\[ E[\xi]=\sum_{i=1}^{m}w_i x_i \tag{2.53} \]
where the weights are given by
\[ w_i=\frac12\Big(\max_{1\le j\le m}\{\mu_j\mid x_j\le x_i\}-\max_{1\le j\le m}\{\mu_j\mid x_j<x_i\}+\max_{1\le j\le m}\{\mu_j\mid x_j\ge x_i\}-\max_{1\le j\le m}\{\mu_j\mid x_j>x_i\}\Big) \]
for i = 1, 2, ..., m. It is easy to verify that all wi ≥ 0 and the sum of all weights is just 1.
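A small hedged sketch of the weight formula under (2.53); the support points and membership degrees below are illustrative (their maximum must equal 1).

```python
import numpy as np

# Sketch: weights and expected value of a simple fuzzy variable per (2.53).
xs = np.array([0.0, 1.0, 2.0])
mus = np.array([1.0, 0.6, 0.8])

def weight(i):
    le = mus[xs <= xs[i]].max(initial=0.0)
    lt = mus[xs <  xs[i]].max(initial=0.0)
    ge = mus[xs >= xs[i]].max(initial=0.0)
    gt = mus[xs >  xs[i]].max(initial=0.0)
    return 0.5 * ((le - lt) + (ge - gt))

w = np.array([weight(i) for i in range(len(xs))])
print(w, w.sum())     # nonnegative weights summing to 1
print(np.dot(w, xs))  # E[xi] = 0.8 for these illustrative values
```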
Example 2.37: Consider the fuzzy variable ξ defined by (2.52). Suppose x1 < x2 < ⋯ < xm. Then the expected value is determined by (2.53) and the weights are given by
\[ w_i=\frac12\Big(\max_{1\le j\le i}\mu_j-\max_{1\le j<i}\mu_j+\max_{i\le j\le m}\mu_j-\max_{i<j\le m}\mu_j\Big) \]
for i = 1, 2, ..., m.
Example 2.38: Consider the fuzzy variable ξ defined by (2.52). Suppose x1 < x2 < ⋯ < xm and there exists an index k with 1 < k < m such that
\[ \mu_1\le\mu_2\le\cdots\le\mu_k \quad\text{and}\quad \mu_k\ge\mu_{k+1}\ge\cdots\ge\mu_m. \]
Note that μk ≡ 1. Then the expected value is determined by (2.53) and the weights are given by
\[ w_i=\begin{cases} \dfrac{\mu_1}{2}, & \text{if } i=1 \\[4pt] \dfrac{\mu_i-\mu_{i-1}}{2}, & \text{if } i=2,3,\dots,k-1 \\[4pt] 1-\dfrac{\mu_{k-1}+\mu_{k+1}}{2}, & \text{if } i=k \\[4pt] \dfrac{\mu_i-\mu_{i+1}}{2}, & \text{if } i=k+1,k+2,\dots,m-1 \\[4pt] \dfrac{\mu_m}{2}, & \text{if } i=m. \end{cases} \]
Example 2.39: Consider the fuzzy variable ξ defined by (2.52). Suppose x1 < x2 < ⋯ < xm and μ1 ≤ μ2 ≤ ⋯ ≤ μm (μm ≡ 1). Then the expected value is determined by (2.53) and the weights are given by
\[ w_i=\begin{cases} \dfrac{\mu_1}{2}, & \text{if } i=1 \\[4pt] \dfrac{\mu_i-\mu_{i-1}}{2}, & \text{if } i=2,3,\dots,m-1 \\[4pt] 1-\dfrac{\mu_{m-1}}{2}, & \text{if } i=m. \end{cases} \]

Example 2.40: Consider the fuzzy variable ξ defined by (2.52). Suppose x1 < x2 < ⋯ < xm and μ1 ≥ μ2 ≥ ⋯ ≥ μm (μ1 ≡ 1). Then the expected value is determined by (2.53) and the weights are given by
\[ w_i=\begin{cases} 1-\dfrac{\mu_2}{2}, & \text{if } i=1 \\[4pt] \dfrac{\mu_i-\mu_{i+1}}{2}, & \text{if } i=2,3,\dots,m-1 \\[4pt] \dfrac{\mu_m}{2}, & \text{if } i=m. \end{cases} \]
Theorem 2.33 (Liu [124]) Let ξ be a fuzzy variable whose credibility density function φ exists. If the Lebesgue integral
\[ \int_{-\infty}^{+\infty}x\phi(x)\,dx \]
is finite, then we have
\[ E[\xi]=\int_{-\infty}^{+\infty}x\phi(x)\,dx. \tag{2.54} \]

Proof: It follows from the definition of the expected value operator and the Fubini theorem that
\[
\begin{aligned}
E[\xi] &= \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\xi\le r\}\,dr \\
&= \int_0^{+\infty}\Big(\int_r^{+\infty}\phi(x)\,dx\Big)dr-\int_{-\infty}^0\Big(\int_{-\infty}^r\phi(x)\,dx\Big)dr \\
&= \int_0^{+\infty}\Big(\int_0^x\phi(x)\,dr\Big)dx-\int_{-\infty}^0\Big(\int_x^0\phi(x)\,dr\Big)dx \\
&= \int_0^{+\infty}x\phi(x)\,dx+\int_{-\infty}^0 x\phi(x)\,dx \\
&= \int_{-\infty}^{+\infty}x\phi(x)\,dx.
\end{aligned}
\]
The theorem is proved.


Example 2.41: Let ξ be a fuzzy variable with credibility distribution Φ. Generally speaking,
\[ E[\xi]\ne\int_{-\infty}^{+\infty}x\,d\Phi(x). \]
For example, let ξ be a fuzzy variable with membership function
\[ \mu(x)=\begin{cases} 0, & \text{if } x<0 \\ x, & \text{if } 0\le x\le 1 \\ 1, & \text{if } x>1. \end{cases} \]
Then E[ξ] = +∞. However,
\[ \int_{-\infty}^{+\infty}x\,d\Phi(x)=\frac14\ne+\infty. \]

Theorem 2.34 (Liu [129]) Let ξ be a fuzzy variable with credibility distribution Φ. If
\[ \lim_{x\to-\infty}\Phi(x)=0, \qquad \lim_{x\to+\infty}\Phi(x)=1 \]
and the Lebesgue-Stieltjes integral
\[ \int_{-\infty}^{+\infty}x\,d\Phi(x) \]
is finite, then we have
\[ E[\xi]=\int_{-\infty}^{+\infty}x\,d\Phi(x). \tag{2.55} \]

Proof: Since the Lebesgue-Stieltjes integral ∫ x dΦ(x) is finite, we immediately have
\[ \lim_{y\to+\infty}\int_0^y x\,d\Phi(x)=\int_0^{+\infty}x\,d\Phi(x), \qquad \lim_{y\to-\infty}\int_y^0 x\,d\Phi(x)=\int_{-\infty}^0 x\,d\Phi(x), \]
and
\[ \lim_{y\to+\infty}\int_y^{+\infty}x\,d\Phi(x)=0, \qquad \lim_{y\to-\infty}\int_{-\infty}^{y}x\,d\Phi(x)=0. \]
It follows from
\[ \int_y^{+\infty}x\,d\Phi(x)\ge y\Big(\lim_{z\to+\infty}\Phi(z)-\Phi(y)\Big)=y(1-\Phi(y))\ge 0, \quad\text{for } y>0, \]
\[ \int_{-\infty}^{y}x\,d\Phi(x)\le y\Big(\Phi(y)-\lim_{z\to-\infty}\Phi(z)\Big)=y\Phi(y)\le 0, \quad\text{for } y<0 \]
that
\[ \lim_{y\to+\infty}y(1-\Phi(y))=0, \qquad \lim_{y\to-\infty}y\Phi(y)=0. \]
Let 0 = x0 < x1 < x2 < ⋯ < xn = y be a partition of [0, y]. Then we have
\[ \sum_{i=0}^{n-1}x_i\big(\Phi(x_{i+1})-\Phi(x_i)\big)\to\int_0^y x\,d\Phi(x) \]
and
\[ \sum_{i=0}^{n-1}\big(1-\Phi(x_{i+1})\big)(x_{i+1}-x_i)\to\int_0^y \mathrm{Cr}\{\xi\ge r\}\,dr \]
as max{|x_{i+1} − x_i| : i = 0, 1, ..., n−1} → 0. Since
\[ \sum_{i=0}^{n-1}x_i\big(\Phi(x_{i+1})-\Phi(x_i)\big)-\sum_{i=0}^{n-1}\big(1-\Phi(x_{i+1})\big)(x_{i+1}-x_i)=y(\Phi(y)-1)\to 0 \]
as y → +∞, this fact implies that
\[ \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr=\int_0^{+\infty}x\,d\Phi(x). \]
A similar way may prove that
\[ -\int_{-\infty}^0\mathrm{Cr}\{\xi\le r\}\,dr=\int_{-\infty}^0 x\,d\Phi(x). \]
It follows that equation (2.55) holds.


Linearity of Expected Value Operator

Theorem 2.35 (Liu and Liu [141]) Let ξ and η be independent fuzzy variables with finite expected values. Then for any numbers a and b, we have
\[ E[a\xi+b\eta]=aE[\xi]+bE[\eta]. \tag{2.56} \]

Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number b. If b ≥ 0, we have
\[
\begin{aligned}
E[\xi+b] &= \int_0^{+\infty}\mathrm{Cr}\{\xi+b\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\xi+b\le r\}\,dr \\
&= \int_0^{+\infty}\mathrm{Cr}\{\xi\ge r-b\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\xi\le r-b\}\,dr \\
&= E[\xi]+\int_0^{b}\big(\mathrm{Cr}\{\xi\ge r-b\}+\mathrm{Cr}\{\xi<r-b\}\big)\,dr \\
&= E[\xi]+b.
\end{aligned}
\]
If b < 0, then we have
\[ E[\xi+b]=E[\xi]-\int_b^0\big(\mathrm{Cr}\{\xi\ge r-b\}+\mathrm{Cr}\{\xi<r-b\}\big)\,dr=E[\xi]+b. \]

Step 2: We prove that E[aξ] = aE[ξ] for any real number a. If a = 0, then the equation E[aξ] = aE[ξ] holds trivially. If a > 0, we have
\[
\begin{aligned}
E[a\xi] &= \int_0^{+\infty}\mathrm{Cr}\{a\xi\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{a\xi\le r\}\,dr \\
&= \int_0^{+\infty}\mathrm{Cr}\Big\{\xi\ge\frac{r}{a}\Big\}\,dr-\int_{-\infty}^0\mathrm{Cr}\Big\{\xi\le\frac{r}{a}\Big\}\,dr \\
&= a\int_0^{+\infty}\mathrm{Cr}\Big\{\xi\ge\frac{r}{a}\Big\}\,d\frac{r}{a}-a\int_{-\infty}^0\mathrm{Cr}\Big\{\xi\le\frac{r}{a}\Big\}\,d\frac{r}{a}=aE[\xi].
\end{aligned}
\]
If a < 0, we have
\[
\begin{aligned}
E[a\xi] &= \int_0^{+\infty}\mathrm{Cr}\{a\xi\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{a\xi\le r\}\,dr \\
&= \int_0^{+\infty}\mathrm{Cr}\Big\{\xi\le\frac{r}{a}\Big\}\,dr-\int_{-\infty}^0\mathrm{Cr}\Big\{\xi\ge\frac{r}{a}\Big\}\,dr \\
&= a\int_0^{+\infty}\mathrm{Cr}\Big\{\xi\ge\frac{r}{a}\Big\}\,d\frac{r}{a}-a\int_{-\infty}^0\mathrm{Cr}\Big\{\xi\le\frac{r}{a}\Big\}\,d\frac{r}{a}=aE[\xi].
\end{aligned}
\]


Step 3: We prove that E[ξ + η] = E[ξ] + E[η] when both ξ and η are simple fuzzy variables with the following membership functions,
\[ \mu(x)=\begin{cases} \mu_1, & \text{if } x=a_1 \\ \mu_2, & \text{if } x=a_2 \\ \ \cdots \\ \mu_m, & \text{if } x=a_m, \end{cases} \qquad \nu(x)=\begin{cases} \nu_1, & \text{if } x=b_1 \\ \nu_2, & \text{if } x=b_2 \\ \ \cdots \\ \nu_n, & \text{if } x=b_n. \end{cases} \]
Then ξ + η is also a simple fuzzy variable taking values ai + bj with membership degrees μi ∧ νj, i = 1, 2, ..., m, j = 1, 2, ..., n, respectively. Now we define
\[ w_i'=\frac12\Big(\max_{1\le k\le m}\{\mu_k\mid a_k\le a_i\}-\max_{1\le k\le m}\{\mu_k\mid a_k<a_i\}+\max_{1\le k\le m}\{\mu_k\mid a_k\ge a_i\}-\max_{1\le k\le m}\{\mu_k\mid a_k>a_i\}\Big), \]
\[ w_j''=\frac12\Big(\max_{1\le l\le n}\{\nu_l\mid b_l\le b_j\}-\max_{1\le l\le n}\{\nu_l\mid b_l<b_j\}+\max_{1\le l\le n}\{\nu_l\mid b_l\ge b_j\}-\max_{1\le l\le n}\{\nu_l\mid b_l>b_j\}\Big), \]
\[
\begin{aligned}
w_{ij}=\frac12\Big(&\max_{1\le k\le m,1\le l\le n}\{\mu_k\wedge\nu_l\mid a_k+b_l\le a_i+b_j\}-\max_{1\le k\le m,1\le l\le n}\{\mu_k\wedge\nu_l\mid a_k+b_l<a_i+b_j\} \\
{}+{}&\max_{1\le k\le m,1\le l\le n}\{\mu_k\wedge\nu_l\mid a_k+b_l\ge a_i+b_j\}-\max_{1\le k\le m,1\le l\le n}\{\mu_k\wedge\nu_l\mid a_k+b_l>a_i+b_j\}\Big)
\end{aligned}
\]
for i = 1, 2, ..., m and j = 1, 2, ..., n. It is also easy to verify that
\[ w_i'=\sum_{j=1}^{n}w_{ij}, \qquad w_j''=\sum_{i=1}^{m}w_{ij} \]
for i = 1, 2, ..., m and j = 1, 2, ..., n. If {ai}, {bj} and {ai + bj} are sequences consisting of distinct elements, then
\[ E[\xi]=\sum_{i=1}^{m}a_i w_i', \qquad E[\eta]=\sum_{j=1}^{n}b_j w_j'', \qquad E[\xi+\eta]=\sum_{i=1}^{m}\sum_{j=1}^{n}(a_i+b_j)w_{ij}. \]
Thus E[ξ + η] = E[ξ] + E[η]. If not, we may give them a small perturbation such that they are distinct, and prove the linearity by letting the perturbation tend to zero.


Step 4: We prove that E[ξ + η] = E[ξ] + E[η] when ξ and η are fuzzy variables such that
\[ \lim_{y\uparrow 0}\mathrm{Cr}\{\xi\le y\}\le\frac12\le\mathrm{Cr}\{\xi\le 0\}, \qquad \lim_{y\uparrow 0}\mathrm{Cr}\{\eta\le y\}\le\frac12\le\mathrm{Cr}\{\eta\le 0\}. \tag{2.57} \]
We define simple fuzzy variables ξi via credibility distributions as follows,
\[ \Phi_i(x)=\begin{cases} \dfrac{k-1}{2^i}, & \text{if } \dfrac{k-1}{2^i}\le\mathrm{Cr}\{\xi\le x\}<\dfrac{k}{2^i},\ k=1,2,\dots,2^{i-1} \\[6pt] \dfrac{k}{2^i}, & \text{if } \dfrac{k-1}{2^i}\le\mathrm{Cr}\{\xi\le x\}<\dfrac{k}{2^i},\ k=2^{i-1}+1,\dots,2^i \\[6pt] 1, & \text{if } \mathrm{Cr}\{\xi\le x\}=1 \end{cases} \]
for i = 1, 2, ... Thus {ξi} is a sequence of simple fuzzy variables satisfying
\[ \mathrm{Cr}\{\xi_i\le r\}\uparrow\mathrm{Cr}\{\xi\le r\},\ \text{if } r\le 0; \qquad \mathrm{Cr}\{\xi_i\ge r\}\uparrow\mathrm{Cr}\{\xi\ge r\},\ \text{if } r\ge 0 \]
as i → ∞. Similarly, we define simple fuzzy variables ηi via credibility distributions as follows,
\[ \Psi_i(x)=\begin{cases} \dfrac{k-1}{2^i}, & \text{if } \dfrac{k-1}{2^i}\le\mathrm{Cr}\{\eta\le x\}<\dfrac{k}{2^i},\ k=1,2,\dots,2^{i-1} \\[6pt] \dfrac{k}{2^i}, & \text{if } \dfrac{k-1}{2^i}\le\mathrm{Cr}\{\eta\le x\}<\dfrac{k}{2^i},\ k=2^{i-1}+1,\dots,2^i \\[6pt] 1, & \text{if } \mathrm{Cr}\{\eta\le x\}=1 \end{cases} \]
for i = 1, 2, ... Thus {ηi} is a sequence of simple fuzzy variables satisfying
\[ \mathrm{Cr}\{\eta_i\le r\}\uparrow\mathrm{Cr}\{\eta\le r\},\ \text{if } r\le 0; \qquad \mathrm{Cr}\{\eta_i\ge r\}\uparrow\mathrm{Cr}\{\eta\ge r\},\ \text{if } r\ge 0 \]
as i → ∞. It is also clear that {ξi + ηi} is a sequence of simple fuzzy variables. Furthermore, when r ≤ 0, it follows from (2.57) that
\[
\begin{aligned}
\lim_{i\to\infty}\mathrm{Cr}\{\xi_i+\eta_i\le r\} &= \lim_{i\to\infty}\ \sup_{x\le 0,\,y\le 0,\,x+y\le r}\mathrm{Cr}\{\xi_i\le x\}\wedge\mathrm{Cr}\{\eta_i\le y\} \\
&= \sup_{x\le 0,\,y\le 0,\,x+y\le r}\ \lim_{i\to\infty}\mathrm{Cr}\{\xi_i\le x\}\wedge\mathrm{Cr}\{\eta_i\le y\} \\
&= \sup_{x\le 0,\,y\le 0,\,x+y\le r}\mathrm{Cr}\{\xi\le x\}\wedge\mathrm{Cr}\{\eta\le y\} \\
&= \mathrm{Cr}\{\xi+\eta\le r\}.
\end{aligned}
\]


That is,
\[ \mathrm{Cr}\{\xi_i+\eta_i\le r\}\uparrow\mathrm{Cr}\{\xi+\eta\le r\},\ \text{if } r\le 0. \]
A similar way may prove that
\[ \mathrm{Cr}\{\xi_i+\eta_i\ge r\}\uparrow\mathrm{Cr}\{\xi+\eta\ge r\},\ \text{if } r\ge 0. \]
Since the expected values E[ξ] and E[η] exist, we have
\[ E[\xi_i]=\int_0^{+\infty}\mathrm{Cr}\{\xi_i\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\xi_i\le r\}\,dr\to\int_0^{+\infty}\mathrm{Cr}\{\xi\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\xi\le r\}\,dr=E[\xi], \]
\[ E[\eta_i]=\int_0^{+\infty}\mathrm{Cr}\{\eta_i\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\eta_i\le r\}\,dr\to\int_0^{+\infty}\mathrm{Cr}\{\eta\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\eta\le r\}\,dr=E[\eta], \]
\[ E[\xi_i+\eta_i]=\int_0^{+\infty}\mathrm{Cr}\{\xi_i+\eta_i\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\xi_i+\eta_i\le r\}\,dr\to\int_0^{+\infty}\mathrm{Cr}\{\xi+\eta\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\xi+\eta\le r\}\,dr=E[\xi+\eta] \]
as i → ∞. It follows from Step 3 that E[ξ + η] = E[ξ] + E[η].

Step 5: We prove that E[ξ + η] = E[ξ] + E[η] when ξ and η are arbitrary fuzzy variables. Since they have finite expected values, there exist two numbers c and d such that
\[ \lim_{y\uparrow 0}\mathrm{Cr}\{\xi+c\le y\}\le\frac12\le\mathrm{Cr}\{\xi+c\le 0\}, \qquad \lim_{y\uparrow 0}\mathrm{Cr}\{\eta+d\le y\}\le\frac12\le\mathrm{Cr}\{\eta+d\le 0\}. \]
It follows from Steps 1 and 4 that
\[
\begin{aligned}
E[\xi+\eta] &= E[(\xi+c)+(\eta+d)-c-d] \\
&= E[(\xi+c)+(\eta+d)]-c-d \\
&= E[\xi+c]+E[\eta+d]-c-d \\
&= E[\xi]+c+E[\eta]+d-c-d \\
&= E[\xi]+E[\eta].
\end{aligned}
\]


Step 6: We prove that E[aξ + bη] = aE[ξ] + bE[η] for any real numbers a and b. In fact, the equation follows immediately from Steps 2 and 5. The theorem is proved.
Example 2.42: Theorem 2.35 does not hold if ξ and η are not independent. For example, take (Θ, P(Θ), Cr) to be {θ1, θ2, θ3} with Cr{θ1} = 0.7, Cr{θ2} = 0.3 and Cr{θ3} = 0.2. The fuzzy variables are defined by
\[ \xi_1(\theta)=\begin{cases} 1, & \text{if } \theta=\theta_1 \\ 0, & \text{if } \theta=\theta_2 \\ 2, & \text{if } \theta=\theta_3, \end{cases} \qquad \xi_2(\theta)=\begin{cases} 0, & \text{if } \theta=\theta_1 \\ 2, & \text{if } \theta=\theta_2 \\ 3, & \text{if } \theta=\theta_3. \end{cases} \]
Then we have
\[ (\xi_1+\xi_2)(\theta)=\begin{cases} 1, & \text{if } \theta=\theta_1 \\ 2, & \text{if } \theta=\theta_2 \\ 5, & \text{if } \theta=\theta_3. \end{cases} \]
Thus E[ξ1] = 0.9, E[ξ2] = 0.8, and E[ξ1 + ξ2] = 1.9. This fact implies that
\[ E[\xi_1+\xi_2]>E[\xi_1]+E[\xi_2]. \]
If the fuzzy variables are defined by
\[ \xi_1(\theta)=\begin{cases} 0, & \text{if } \theta=\theta_1 \\ 1, & \text{if } \theta=\theta_2 \\ 2, & \text{if } \theta=\theta_3, \end{cases} \qquad \xi_2(\theta)=\begin{cases} 0, & \text{if } \theta=\theta_1 \\ 3, & \text{if } \theta=\theta_2 \\ 1, & \text{if } \theta=\theta_3, \end{cases} \]
then we have
\[ (\xi_1+\xi_2)(\theta)=\begin{cases} 0, & \text{if } \theta=\theta_1 \\ 4, & \text{if } \theta=\theta_2 \\ 3, & \text{if } \theta=\theta_3. \end{cases} \]
Thus E[ξ1] = 0.5, E[ξ2] = 0.9, and E[ξ1 + ξ2] = 1.2. This fact implies that
\[ E[\xi_1+\xi_2]<E[\xi_1]+E[\xi_2]. \]
Expected Value of Function of Fuzzy Variable

Let ξ be a fuzzy variable, and f : ℜ → ℜ a function. Then the expected value of f(ξ) is
\[ E[f(\xi)]=\int_0^{+\infty}\mathrm{Cr}\{f(\xi)\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{f(\xi)\le r\}\,dr. \]
For the random case, it has been proved that the expected value E[f(ξ)] is the Lebesgue-Stieltjes integral of f(x) with respect to the probability distribution Φ of ξ if the integral exists. However, generally speaking, this is not true for the fuzzy case.
Example 2.43: We consider a fuzzy variable ξ whose membership function is given by
\[ \mu(x)=\begin{cases} 0.6, & \text{if } -1\le x<0 \\ 1, & \text{if } 0\le x\le 1 \\ 0, & \text{otherwise.} \end{cases} \]
Then the expected value E[ξ²] = 0.5. However, the credibility distribution of ξ is
\[ \Phi(x)=\begin{cases} 0, & \text{if } x<-1 \\ 0.3, & \text{if } -1\le x<0 \\ 0.5, & \text{if } 0\le x<1 \\ 1, & \text{if } x\ge 1 \end{cases} \]
and the Lebesgue-Stieltjes integral
\[ \int_{-\infty}^{+\infty}x^2\,d\Phi(x)=(-1)^2\times 0.3+0^2\times 0.2+1^2\times 0.5=0.8\ne E[\xi^2]. \]
Remark 2.6: When f(x) is a monotone and continuous function, Zhu and Ji [263] proved that
\[ E[f(\xi)]=\int_{-\infty}^{+\infty}f(x)\,d\Phi(x) \tag{2.58} \]
where Φ is the credibility distribution of ξ.
Sum of a Fuzzy Number of Fuzzy Variables

Theorem 2.36 (Zhao and Liu [251]) Assume that {ξi} is a sequence of iid fuzzy variables, and ñ is a positive fuzzy integer (i.e., a fuzzy variable taking positive integer values) that is independent of the sequence {ξi}. Then we have
\[ E\Big[\sum_{i=1}^{\tilde n}\xi_i\Big]=E[\tilde n\,\xi_1]. \tag{2.59} \]

Proof: Since {ξi} is a sequence of iid fuzzy variables and ñ is independent of {ξi}, we have
\[
\begin{aligned}
\mathrm{Cr}\Big\{\sum_{i=1}^{\tilde n}\xi_i\ge r\Big\} &= \sup_{n,\,x_1+x_2+\cdots+x_n\ge r}\mathrm{Cr}\{\tilde n=n\}\wedge\min_{1\le i\le n}\mathrm{Cr}\{\xi_i=x_i\} \\
&\ge \sup_{nx\ge r}\mathrm{Cr}\{\tilde n=n\}\wedge\min_{1\le i\le n}\mathrm{Cr}\{\xi_i=x\} \\
&= \sup_{nx\ge r}\mathrm{Cr}\{\tilde n=n\}\wedge\mathrm{Cr}\{\xi_1=x\} \\
&= \mathrm{Cr}\{\tilde n\,\xi_1\ge r\}.
\end{aligned}
\]
On the other hand, for any given ε > 0, there exists an integer n and real numbers x1, x2, ..., xn with x1 + x2 + ⋯ + xn ≥ r such that
\[ \mathrm{Cr}\Big\{\sum_{i=1}^{\tilde n}\xi_i\ge r\Big\}-\varepsilon\le\mathrm{Cr}\{\tilde n=n\}\wedge\mathrm{Cr}\{\xi_i=x_i\} \]
for each i with 1 ≤ i ≤ n. Without loss of generality, we assume that nx1 ≥ r. Then we have
\[ \mathrm{Cr}\Big\{\sum_{i=1}^{\tilde n}\xi_i\ge r\Big\}-\varepsilon\le\mathrm{Cr}\{\tilde n=n\}\wedge\mathrm{Cr}\{\xi_1=x_1\}\le\mathrm{Cr}\{\tilde n\,\xi_1\ge r\}. \]
Letting ε → 0, we get
\[ \mathrm{Cr}\Big\{\sum_{i=1}^{\tilde n}\xi_i\ge r\Big\}\le\mathrm{Cr}\{\tilde n\,\xi_1\ge r\}. \]
It follows that
\[ \mathrm{Cr}\Big\{\sum_{i=1}^{\tilde n}\xi_i\ge r\Big\}=\mathrm{Cr}\{\tilde n\,\xi_1\ge r\}. \]
Similarly, the above identity still holds if the symbol "≥" is replaced with "≤". Finally, by the definition of the expected value operator, we have
\[ E\Big[\sum_{i=1}^{\tilde n}\xi_i\Big]=\int_0^{+\infty}\mathrm{Cr}\Big\{\sum_{i=1}^{\tilde n}\xi_i\ge r\Big\}\,dr-\int_{-\infty}^0\mathrm{Cr}\Big\{\sum_{i=1}^{\tilde n}\xi_i\le r\Big\}\,dr=\int_0^{+\infty}\mathrm{Cr}\{\tilde n\,\xi_1\ge r\}\,dr-\int_{-\infty}^0\mathrm{Cr}\{\tilde n\,\xi_1\le r\}\,dr=E[\tilde n\,\xi_1]. \]
The theorem is proved.

2.8 Variance

Definition 2.20 (Liu and Liu [126]) Let ξ be a fuzzy variable with finite expected value e. Then the variance of ξ is defined by V[ξ] = E[(ξ − e)²].

The variance of a fuzzy variable provides a measure of the spread of the distribution around its expected value.

Example 2.44: Let ξ be an equipossible fuzzy variable (a, b). Then its expected value is e = (a + b)/2, and for any positive number r, we have
\[ \mathrm{Cr}\{(\xi-e)^2\ge r\}=\begin{cases} 1/2, & \text{if } r\le (b-a)^2/4 \\ 0, & \text{if } r>(b-a)^2/4. \end{cases} \]

Thus the variance is
\[ V[\xi]=\int_0^{+\infty}\mathrm{Cr}\{(\xi-e)^2\ge r\}\,dr=\int_0^{(b-a)^2/4}\frac12\,dr=\frac{(b-a)^2}{8}. \]

Example 2.45: Let ξ = (a, b, c) be a symmetric triangular fuzzy variable, i.e., b − a = c − b. Then its variance is V[ξ] = (c − a)²/24.

Example 2.46: Let ξ = (a, b, c, d) be a symmetric trapezoidal fuzzy variable, i.e., b − a = d − c. Then its variance is V[ξ] = ((d − a)² + (d − a)(c − b) + (c − b)²)/24.
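A hedged numerical check of Example 2.45: computing V[ξ] = E[(ξ − e)²] directly from the credibility integral for the symmetric triangular variable (0, 1, 2), whose closed-form variance is (c − a)²/24 = 1/6. Parameters and grid are illustrative.

```python
import numpy as np

# Sketch: variance of a symmetric triangular fuzzy variable via Cr-integral.
a, b, c = 0.0, 1.0, 2.0
e = (a + 2 * b + c) / 4.0

grid = np.linspace(a - 1.0, c + 1.0, 8001)
vals = np.clip(np.minimum((grid - a) / (b - a), (c - grid) / (c - b)), 0, 1)

def cr_sq_ge(r):   # Cr{(xi - e)^2 >= r} by the inversion theorem
    m = (grid - e) ** 2 >= r
    return 0.5 * (vals[m].max(initial=0.0) + 1.0 - vals[~m].max(initial=0.0))

rs = np.linspace(0.0, (c - a) ** 2, 4001)
print(np.trapz([cr_sq_ge(r) for r in rs], rs))   # ~1/6 = (c - a)^2 / 24
```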
Example 2.47: A fuzzy variable ξ is called normally distributed if it has a normal membership function
\[ \mu(x)=2\Big(1+\exp\Big(\frac{\pi|x-e|}{\sqrt{6}\,\sigma}\Big)\Big)^{-1}, \quad x\in\Re,\ \sigma>0. \tag{2.60} \]
The expected value is e and the variance is σ². Let ξ1 and ξ2 be independently and normally distributed fuzzy variables with expected values e1 and e2, variances σ1² and σ2², respectively. Then for any real numbers a1 and a2, the fuzzy variable a1ξ1 + a2ξ2 is also normally distributed with expected value a1e1 + a2e2 and variance (|a1|σ1 + |a2|σ2)².
[Figure 2.3: Normal Membership Function]

Theorem 2.37 If ξ is a fuzzy variable whose variance exists, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that
\[ V[a\xi+b]=E\big[(a\xi+b-aE[\xi]-b)^2\big]=a^2E[(\xi-E[\xi])^2]=a^2V[\xi]. \]

Theorem 2.38 Let ξ be a fuzzy variable with expected value e. Then V[ξ] = 0 if and only if Cr{ξ = e} = 1.


Proof: If V[ξ] = 0, then E[(ξ − e)²] = 0. Note that
\[ E[(\xi-e)^2]=\int_0^{+\infty}\mathrm{Cr}\{(\xi-e)^2\ge r\}\,dr \]
which implies Cr{(ξ − e)² ≥ r} = 0 for any r > 0. Hence we have Cr{(ξ − e)² = 0} = 1, i.e., Cr{ξ = e} = 1. Conversely, if Cr{ξ = e} = 1, then we have Cr{(ξ − e)² = 0} = 1 and Cr{(ξ − e)² ≥ r} = 0 for any r > 0. Thus
\[ V[\xi]=\int_0^{+\infty}\mathrm{Cr}\{(\xi-e)^2\ge r\}\,dr=0. \]

Maximum Variance Theorem

Let ξ be a fuzzy variable that takes values in [a, b], but whose membership function is otherwise arbitrary. When its expected value is given, the maximum variance theorem will provide the maximum variance of ξ, thus playing an important role in treating games against nature.

Theorem 2.39 (Li and Liu [104]) Let f be a convex function on [a, b], and ξ a fuzzy variable that takes values in [a, b] and has expected value e. Then
\[ E[f(\xi)]\le\frac{b-e}{b-a}f(a)+\frac{e-a}{b-a}f(b). \tag{2.61} \]

Proof: For each θ, we have a ≤ ξ(θ) ≤ b and
\[ \xi(\theta)=\frac{b-\xi(\theta)}{b-a}a+\frac{\xi(\theta)-a}{b-a}b. \]
It follows from the convexity of f that
\[ f(\xi(\theta))\le\frac{b-\xi(\theta)}{b-a}f(a)+\frac{\xi(\theta)-a}{b-a}f(b). \]
Taking expected values on both sides, we obtain (2.61).

Theorem 2.40 (Li and Liu [104], Maximum Variance Theorem) Let ξ be a fuzzy variable that takes values in [a, b] and has expected value e. Then
\[ V[\xi]\le(e-a)(b-e) \tag{2.62} \]
and equality holds if the fuzzy variable ξ has membership function
\[ \mu(x)=\begin{cases} \Big(\dfrac{2(b-e)}{b-a}\Big)\wedge 1, & \text{if } x=a \\[6pt] \Big(\dfrac{2(e-a)}{b-a}\Big)\wedge 1, & \text{if } x=b. \end{cases} \tag{2.63} \]

Proof: It follows from Theorem 2.39 immediately by defining f(x) = (x − e)². It is also easy to verify that the fuzzy variable determined by (2.63) has variance (e − a)(b − e). The theorem is proved.

2.9 Moments

Definition 2.21 (Liu [128]) Let ξ be a fuzzy variable, and k a positive number. Then
(a) the expected value E[ξᵏ] is called the kth moment;
(b) the expected value E[|ξ|ᵏ] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])ᵏ] is called the kth central moment;
(d) the expected value E[|ξ − E[ξ]|ᵏ] is called the kth absolute central moment.

Note that the first central moment is always 0, the first moment is just the expected value, and the second central moment is just the variance.
Example 2.48: A fuzzy variable ξ is called exponentially distributed if it has an exponential membership function
\[ \mu(x)=2\Big(1+\exp\Big(\frac{\pi x}{\sqrt{6}\,m}\Big)\Big)^{-1}, \quad x\ge 0,\ m>0. \tag{2.64} \]
The expected value is (√6 m ln 2)/π and the second moment is m². Let ξ1 and ξ2 be independently and exponentially distributed fuzzy variables with second moments m1² and m2², respectively. Then for any positive real numbers a1 and a2, the fuzzy variable a1ξ1 + a2ξ2 is also exponentially distributed with second moment (a1m1 + a2m2)².
[Figure 2.4: Exponential Membership Function]


Theorem 2.41 Let be a nonnegative fuzzy variable, and k a positive number. Then the k-th moment
Z +
k
E[ ] = k
rk1 Cr{ r}dr.
(2.65)
0

Proof: It follows from the nonnegativity of that


Z
Z
Z
E[ k ] =
Cr{ k x}dx =
Cr{ r}drk = k
0

The theorem is proved.

rk1 Cr{ r}dr.


Theorem 2.42 (Li and Liu [104]) Let ξ be a fuzzy variable that takes values in [a, b] and has expected value e. Then for any positive integer k, the kth absolute moment and kth absolute central moment satisfy the following inequalities,
\[ E[|\xi|^k]\le\frac{b-e}{b-a}|a|^k+\frac{e-a}{b-a}|b|^k, \tag{2.66} \]
\[ E[|\xi-e|^k]\le\frac{b-e}{b-a}(e-a)^k+\frac{e-a}{b-a}(b-e)^k. \tag{2.67} \]

Proof: It follows from Theorem 2.39 immediately by defining f(x) = |x|ᵏ and f(x) = |x − e|ᵏ, respectively.

2.10 Critical Values

In order to rank fuzzy variables, we may use two critical values: optimistic value and pessimistic value.

Definition 2.22 (Liu [124]) Let ξ be a fuzzy variable, and α ∈ (0, 1]. Then
\[ \xi_{\sup}(\alpha)=\sup\big\{r \mid \mathrm{Cr}\{\xi\ge r\}\ge\alpha\big\} \tag{2.68} \]
is called the α-optimistic value to ξ, and
\[ \xi_{\inf}(\alpha)=\inf\big\{r \mid \mathrm{Cr}\{\xi\le r\}\ge\alpha\big\} \tag{2.69} \]
is called the α-pessimistic value to ξ.

This means that the fuzzy variable ξ will reach upwards of the α-optimistic value ξ_sup(α) with credibility α, and will be below the α-pessimistic value ξ_inf(α) with credibility α. In other words, the α-optimistic value ξ_sup(α) is the supremum value that ξ achieves with credibility α, and the α-pessimistic value ξ_inf(α) is the infimum value that ξ achieves with credibility α.
Example 2.49: Let ξ be an equipossible fuzzy variable on (a, b). Then its α-optimistic and α-pessimistic values are
\[ \xi_{\sup}(\alpha)=\begin{cases} b, & \text{if } \alpha\le 0.5 \\ a, & \text{if } \alpha>0.5, \end{cases} \qquad \xi_{\inf}(\alpha)=\begin{cases} a, & \text{if } \alpha\le 0.5 \\ b, & \text{if } \alpha>0.5. \end{cases} \]

Example 2.50: Let ξ = (a, b, c) be a triangular fuzzy variable. Then its α-optimistic and α-pessimistic values are
\[ \xi_{\sup}(\alpha)=\begin{cases} 2\alpha b+(1-2\alpha)c, & \text{if } \alpha\le 0.5 \\ (2\alpha-1)a+(2-2\alpha)b, & \text{if } \alpha>0.5, \end{cases} \]
\[ \xi_{\inf}(\alpha)=\begin{cases} (1-2\alpha)a+2\alpha b, & \text{if } \alpha\le 0.5 \\ (2-2\alpha)b+(2\alpha-1)c, & \text{if } \alpha>0.5. \end{cases} \]

Example 2.51: Let ξ = (a, b, c, d) be a trapezoidal fuzzy variable. Then its α-optimistic and α-pessimistic values are
\[ \xi_{\sup}(\alpha)=\begin{cases} 2\alpha c+(1-2\alpha)d, & \text{if } \alpha\le 0.5 \\ (2\alpha-1)a+(2-2\alpha)b, & \text{if } \alpha>0.5, \end{cases} \]
\[ \xi_{\inf}(\alpha)=\begin{cases} (1-2\alpha)a+2\alpha b, & \text{if } \alpha\le 0.5 \\ (2-2\alpha)c+(2\alpha-1)d, & \text{if } \alpha>0.5. \end{cases} \]
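A hedged numerical sketch of (2.68): reading the α-optimistic value off a grid and comparing with the closed form of Example 2.50. The triangular parameters (0, 1, 3) and α = 0.3 are illustrative.

```python
import numpy as np

# Sketch: alpha-optimistic value of a triangular fuzzy variable.
a, b, c = 0.0, 1.0, 3.0
grid = np.linspace(a - 1.0, c + 1.0, 40001)
vals = np.clip(np.minimum((grid - a) / (b - a), (c - grid) / (c - b)), 0, 1)

def cr_ge(r):
    m = grid >= r
    return 0.5 * (vals[m].max(initial=0.0) + 1.0 - vals[~m].max(initial=0.0))

def xi_sup(alpha):
    ok = [r for r in np.linspace(a - 1.0, c + 1.0, 2001) if cr_ge(r) >= alpha]
    return max(ok)

alpha = 0.3
print(xi_sup(alpha))                     # numeric
print(2*alpha*b + (1 - 2*alpha)*c)       # closed form for alpha <= 0.5: 1.8
```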

Theorem 2.43 Let ξ be a fuzzy variable. If α > 0.5, then we have
\[ \mathrm{Cr}\{\xi\le\xi_{\inf}(\alpha)\}\ge\alpha, \qquad \mathrm{Cr}\{\xi\ge\xi_{\sup}(\alpha)\}\ge\alpha. \tag{2.70} \]

Proof: It follows from the definition of the α-pessimistic value that there exists a decreasing sequence {xi} such that Cr{ξ ≤ xi} ≥ α and xi ↓ ξ_inf(α) as i → ∞. Since {ξ ≤ xi} ↓ {ξ ≤ ξ_inf(α)} and limᵢ Cr{ξ ≤ xi} ≥ α > 0.5, it follows from the credibility semicontinuity law that
\[ \mathrm{Cr}\{\xi\le\xi_{\inf}(\alpha)\}=\lim_{i\to\infty}\mathrm{Cr}\{\xi\le x_i\}\ge\alpha. \]
Similarly, there exists an increasing sequence {xi} such that Cr{ξ ≥ xi} ≥ α and xi ↑ ξ_sup(α) as i → ∞. Since {ξ ≥ xi} ↓ {ξ ≥ ξ_sup(α)} and limᵢ Cr{ξ ≥ xi} ≥ α > 0.5, it follows from the credibility semicontinuity law that
\[ \mathrm{Cr}\{\xi\ge\xi_{\sup}(\alpha)\}=\lim_{i\to\infty}\mathrm{Cr}\{\xi\ge x_i\}\ge\alpha. \]
The theorem is proved.

Example 2.52: When α ≤ 0.5, it is possible that the inequalities
\[ \mathrm{Cr}\{\xi\le\xi_{\inf}(\alpha)\}<\alpha, \qquad \mathrm{Cr}\{\xi\ge\xi_{\sup}(\alpha)\}<\alpha \]
hold. Let ξ be an equipossible fuzzy variable on (−1, 1). It is clear that ξ_inf(0.5) = −1. However, Cr{ξ ≤ ξ_inf(0.5)} = 0 < 0.5. In addition, ξ_sup(0.5) = 1 and Cr{ξ ≥ ξ_sup(0.5)} = 0 < 0.5.
Theorem 2.44 Let ξ be a fuzzy variable. Then we have
(a) ξ_inf(α) is an increasing and left-continuous function of α;
(b) ξ_sup(α) is a decreasing and left-continuous function of α.

Proof: (a) It is easy to prove that ξ_inf(α) is an increasing function of α. Next, we prove the left-continuity of ξ_inf(α) with respect to α. Let {αi} be an arbitrary sequence of positive numbers such that αi ↑ α. Then {ξ_inf(αi)} is an increasing sequence. If the limit is equal to ξ_inf(α), then the left-continuity is proved. Otherwise, there exists a number z* such that
\[ \lim_{i\to\infty}\xi_{\inf}(\alpha_i)<z^*<\xi_{\inf}(\alpha). \]
Thus Cr{ξ ≤ z*} ≥ αi for each i. Letting i → ∞, we get Cr{ξ ≤ z*} ≥ α. Hence z* ≥ ξ_inf(α). A contradiction proves the left-continuity of ξ_inf(α) with respect to α. Part (b) may be proved similarly.
Theorem 2.45 Let ξ be a fuzzy variable. Then we have
(a) if α > 0.5, then ξ_inf(α) ≥ ξ_sup(α);
(b) if α ≤ 0.5, then ξ_inf(α) ≤ ξ_sup(α).

Proof: Part (a): Write ξ̄(α) = (ξ_inf(α) + ξ_sup(α))/2. If ξ_inf(α) < ξ_sup(α), then we have
\[ 1\ge\mathrm{Cr}\{\xi<\bar\xi(\alpha)\}+\mathrm{Cr}\{\xi>\bar\xi(\alpha)\}\ge\alpha+\alpha>1. \]
A contradiction proves ξ_inf(α) ≥ ξ_sup(α). Part (b): Assume that ξ_inf(α) > ξ_sup(α). It follows from the definition of ξ_inf(α) that Cr{ξ ≤ ξ̄(α)} < α. Similarly, it follows from the definition of ξ_sup(α) that Cr{ξ ≥ ξ̄(α)} < α. Thus
\[ 1\le\mathrm{Cr}\{\xi\le\bar\xi(\alpha)\}+\mathrm{Cr}\{\xi\ge\bar\xi(\alpha)\}<\alpha+\alpha\le 1. \]
A contradiction proves ξ_inf(α) ≤ ξ_sup(α). The theorem is proved.
Theorem 2.46 Let ξ be a fuzzy variable. Then we have
(a) if c ≥ 0, then (cξ)_sup(α) = c ξ_sup(α) and (cξ)_inf(α) = c ξ_inf(α);
(b) if c < 0, then (cξ)_sup(α) = c ξ_inf(α) and (cξ)_inf(α) = c ξ_sup(α).

Proof: If c = 0, then part (a) is obviously valid. When c > 0, we have
\[ (c\xi)_{\sup}(\alpha)=\sup\{r\mid\mathrm{Cr}\{c\xi\ge r\}\ge\alpha\}=c\sup\{r/c\mid\mathrm{Cr}\{\xi\ge r/c\}\ge\alpha\}=c\,\xi_{\sup}(\alpha). \]
A similar way may prove that (cξ)_inf(α) = c ξ_inf(α).
In order to prove part (b), it suffices to verify that (−ξ)_sup(α) = −ξ_inf(α) and (−ξ)_inf(α) = −ξ_sup(α). In fact, for any α ∈ (0, 1], we have
\[ (-\xi)_{\sup}(\alpha)=\sup\{r\mid\mathrm{Cr}\{-\xi\ge r\}\ge\alpha\}=-\inf\{-r\mid\mathrm{Cr}\{\xi\le-r\}\ge\alpha\}=-\xi_{\inf}(\alpha). \]
Similarly, we may prove that (−ξ)_inf(α) = −ξ_sup(α). The theorem is proved.


Theorem 2.47 Suppose that ξ and η are independent fuzzy variables. Then for any α ∈ (0, 1], we have
\[ (\xi+\eta)_{\sup}(\alpha)=\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha), \qquad (\xi+\eta)_{\inf}(\alpha)=\xi_{\inf}(\alpha)+\eta_{\inf}(\alpha), \]
\[ (\xi\eta)_{\sup}(\alpha)=\xi_{\sup}(\alpha)\,\eta_{\sup}(\alpha), \qquad (\xi\eta)_{\inf}(\alpha)=\xi_{\inf}(\alpha)\,\eta_{\inf}(\alpha), \quad\text{if } \xi\ge 0,\ \eta\ge 0, \]
\[ (\xi\vee\eta)_{\sup}(\alpha)=\xi_{\sup}(\alpha)\vee\eta_{\sup}(\alpha), \qquad (\xi\vee\eta)_{\inf}(\alpha)=\xi_{\inf}(\alpha)\vee\eta_{\inf}(\alpha), \]
\[ (\xi\wedge\eta)_{\sup}(\alpha)=\xi_{\sup}(\alpha)\wedge\eta_{\sup}(\alpha), \qquad (\xi\wedge\eta)_{\inf}(\alpha)=\xi_{\inf}(\alpha)\wedge\eta_{\inf}(\alpha). \]

Proof: For any given number ε > 0, since ξ and η are independent fuzzy variables, we have
\[
\begin{aligned}
\mathrm{Cr}\{\xi+\eta\ge\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)-\varepsilon\} &\ge \mathrm{Cr}\big\{\{\xi\ge\xi_{\sup}(\alpha)-\varepsilon/2\}\cap\{\eta\ge\eta_{\sup}(\alpha)-\varepsilon/2\}\big\} \\
&= \mathrm{Cr}\{\xi\ge\xi_{\sup}(\alpha)-\varepsilon/2\}\wedge\mathrm{Cr}\{\eta\ge\eta_{\sup}(\alpha)-\varepsilon/2\}\ge\alpha
\end{aligned}
\]
which implies
\[ (\xi+\eta)_{\sup}(\alpha)\ge\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)-\varepsilon. \tag{2.71} \]
On the other hand, by the independence, we have
\[
\begin{aligned}
\mathrm{Cr}\{\xi+\eta\ge\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)+\varepsilon\} &\le \mathrm{Cr}\big\{\{\xi\ge\xi_{\sup}(\alpha)+\varepsilon/2\}\cup\{\eta\ge\eta_{\sup}(\alpha)+\varepsilon/2\}\big\} \\
&= \mathrm{Cr}\{\xi\ge\xi_{\sup}(\alpha)+\varepsilon/2\}\vee\mathrm{Cr}\{\eta\ge\eta_{\sup}(\alpha)+\varepsilon/2\}<\alpha
\end{aligned}
\]
which implies
\[ (\xi+\eta)_{\sup}(\alpha)\le\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)+\varepsilon. \tag{2.72} \]
It follows from (2.71) and (2.72) that
\[ \xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)+\varepsilon\ge(\xi+\eta)_{\sup}(\alpha)\ge\xi_{\sup}(\alpha)+\eta_{\sup}(\alpha)-\varepsilon. \]
Letting ε → 0, we obtain (ξ + η)_sup(α) = ξ_sup(α) + η_sup(α). The other equalities may be proved similarly.
Example 2.53: The independence condition cannot be removed in Theorem 2.47. For example, let Θ = {θ1, θ2}, Cr{θ1} = Cr{θ2} = 1/2, and let the fuzzy variables ξ and η be defined as
\[ \xi(\theta)=\begin{cases} 0, & \text{if } \theta=\theta_1 \\ 1, & \text{if } \theta=\theta_2, \end{cases} \qquad \eta(\theta)=\begin{cases} 1, & \text{if } \theta=\theta_1 \\ 0, & \text{if } \theta=\theta_2. \end{cases} \]
However, (ξ + η)_sup(0.6) = 1 ≠ 0 = ξ_sup(0.6) + η_sup(0.6).

2.11 Entropy

Fuzzy entropy is a measure of uncertainty and has been studied by many researchers such as De Luca and Termini [26], Kaufmann [75], Yager [223], Kosko [85], Pal and Pal [177], Bhandari and Pal [7], and Pal and Bezdek [181]. Those definitions of entropy characterize the uncertainty resulting primarily from linguistic vagueness rather than from information deficiency, and they vanish when the fuzzy variable is an equipossible one.

In order to measure the uncertainty of fuzzy variables, Liu [131] suggested that an entropy of fuzzy variables should meet at least the following three basic requirements:
(i) minimum: the entropy of a crisp number is minimum, i.e., 0;
(ii) maximum: the entropy of an equipossible fuzzy variable is maximum;
(iii) universality: the entropy is applicable not only to finite and infinite cases but also to discrete and continuous cases.
In order to meet those requirements, Li and Liu [95] provided a new definition of fuzzy entropy to characterize the uncertainty resulting from information deficiency, which is caused by the impossibility to predict the specified value that a fuzzy variable takes.
Entropy of Discrete Fuzzy Variables

Definition 2.23 (Li and Liu [95]) Let ξ be a discrete fuzzy variable taking values in {x1, x2, ...}. Then its entropy is defined by
\[ H[\xi]=\sum_{i=1}^{\infty}S(\mathrm{Cr}\{\xi=x_i\}) \tag{2.73} \]
where S(t) = −t ln t − (1 − t) ln(1 − t).


Remark 2.7: It is easy to verify that S(t) is a symmetric function about
t = 0.5, strictly increases on the interval [0, 0.5], strictly decreases on the
interval [0.5, 1], and reaches its unique maximum ln 2 at t = 0.5.
Remark 2.8: It is clear that the entropy depends only on the number of
values and their credibilities and does not depend on the actual values that
the fuzzy variable takes.
Example 2.54: Suppose that is a discrete fuzzy variable taking values in
{x1 , x2 , }. If there exists some index k such that the membership function
(xk ) = 1, and 0 otherwise, then its entropy H[] = 0.
Example 2.55: Suppose that is a simple fuzzy variable taking values
in {x1 , x2 , , xn }. If its membership function (x) 1, then its entropy
H[] = n ln 2.
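A hedged sketch of Definition 2.23 for a simple fuzzy variable: the entropy is the sum of S(Cr{ξ = xi}), with each Cr{ξ = xi} obtained from the credibility inversion theorem as (μi + 1 − max_{j≠i} μj)/2. The membership degrees below are illustrative (their maximum must be 1).

```python
import numpy as np

# Sketch: discrete entropy per (2.73).
mus = np.array([1.0, 0.6, 0.8])

def S(t):
    t = np.clip(t, 1e-12, 1.0 - 1e-12)   # S(0) = S(1) = 0 by convention
    return -t * np.log(t) - (1.0 - t) * np.log(1.0 - t)

def cr_eq(i):
    others = np.delete(mus, i)
    return 0.5 * (mus[i] + 1.0 - others.max(initial=0.0))

H = sum(S(cr_eq(i)) for i in range(len(mus)))
print(H)   # 0 for a crisp number; n*ln(2) for an equipossible variable
```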


[Figure 2.5: Function S(t) = −t ln t − (1 − t) ln(1 − t)]


Theorem 2.48 Suppose that ξ is a discrete fuzzy variable taking values in {x1, x2, ...}. Then
\[ H[\xi]\ge 0 \tag{2.74} \]
and equality holds if and only if ξ is essentially a crisp number.

Proof: The nonnegativity is clear. In addition, H[ξ] = 0 if and only if Cr{ξ = xi} = 0 or 1 for each i. That is, there exists one and only one index k such that Cr{ξ = xk} = 1, i.e., ξ is essentially a crisp number.

This theorem states that the entropy of a fuzzy variable reaches its minimum 0 when the fuzzy variable degenerates to a crisp number. In this case, there is no uncertainty.

Theorem 2.49 Suppose that ξ is a simple fuzzy variable taking values in {x1, x2, ..., xn}. Then
\[ H[\xi]\le n\ln 2 \tag{2.75} \]
and equality holds if and only if ξ is an equipossible fuzzy variable.

Proof: Since the function S(t) reaches its maximum ln 2 at t = 0.5, we have
\[ H[\xi]=\sum_{i=1}^{n}S(\mathrm{Cr}\{\xi=x_i\})\le n\ln 2 \]
and equality holds if and only if Cr{ξ = xi} = 0.5, i.e., μ(xi) ≡ 1 for all i = 1, 2, ..., n.

This theorem states that the entropy of a fuzzy variable reaches its maximum when the fuzzy variable is an equipossible one. In this case, there is no preference among all the values that the fuzzy variable will take.

Entropy of Continuous Fuzzy Variables

Definition 2.24 (Li and Liu [95]) Let ξ be a continuous fuzzy variable. Then its entropy is defined by
\[ H[\xi]=\int_{-\infty}^{+\infty}S(\mathrm{Cr}\{\xi=x\})\,dx \tag{2.76} \]
where S(t) = −t ln t − (1 − t) ln(1 − t).

For any continuous fuzzy variable ξ with membership function μ, we have Cr{ξ = x} = μ(x)/2 for each x ∈ ℜ. Thus
\[ H[\xi]=-\int_{-\infty}^{+\infty}\Big(\frac{\mu(x)}{2}\ln\frac{\mu(x)}{2}+\Big(1-\frac{\mu(x)}{2}\Big)\ln\Big(1-\frac{\mu(x)}{2}\Big)\Big)dx. \tag{2.77} \]
Example 2.56: Let ξ be an equipossible fuzzy variable (a, b). Then μ(x) = 1 if a ≤ x ≤ b, and 0 otherwise. Thus its entropy is
\[ H[\xi]=-\int_a^b\Big(\frac12\ln\frac12+\Big(1-\frac12\Big)\ln\Big(1-\frac12\Big)\Big)dx=(b-a)\ln 2. \]

Example 2.57: Let ξ be a triangular fuzzy variable (a, b, c). Then its entropy is H[ξ] = (c − a)/2.

Example 2.58: Let ξ be a trapezoidal fuzzy variable (a, b, c, d). Then its entropy is H[ξ] = (d − a)/2 + (ln 2 − 0.5)(c − b).

Example 2.59: Let ξ be an exponentially distributed fuzzy variable with second moment m². Then its entropy is H[ξ] = πm/√6.

Example 2.60: Let ξ be a normally distributed fuzzy variable with expected value e and variance σ². Then its entropy is H[ξ] = √6 πσ/3.
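A hedged numerical check of Example 2.57: the continuous entropy (2.77) of a triangular variable (a, b, c) equals (c − a)/2. The parameters below are illustrative.

```python
import numpy as np

# Sketch: continuous entropy of a triangular fuzzy variable per (2.77).
a, b, c = 0.0, 1.0, 3.0

def mu(y):
    y = np.asarray(y, dtype=float)
    return np.clip(np.minimum((y - a) / (b - a), (c - y) / (c - b)), 0.0, 1.0)

def S(t):
    t = np.clip(t, 1e-12, 1.0 - 1e-12)
    return -t * np.log(t) - (1.0 - t) * np.log(1.0 - t)

xs = np.linspace(a, c, 100001)
print(np.trapz(S(mu(xs) / 2.0), xs))   # ~(c - a)/2 = 1.5
```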
Theorem 2.50 Let ξ be a continuous fuzzy variable. Then H[ξ] > 0.

Proof: The positivity is clear. In addition, when a continuous fuzzy variable tends to a crisp number, its entropy tends to the minimum 0. However, a crisp number is not a continuous fuzzy variable.

Theorem 2.51 Let ξ be a continuous fuzzy variable taking values on the interval [a, b]. Then
\[ H[\xi]\le(b-a)\ln 2 \tag{2.78} \]
and equality holds if and only if ξ is an equipossible fuzzy variable (a, b).

Proof: The theorem follows from the fact that the function S(t) reaches its maximum ln 2 at t = 0.5.


Theorem 2.52 Let ξ and η be two continuous fuzzy variables with membership functions μ(x) and ν(x), respectively. If μ(x) ≤ ν(x) for any x ∈ ℜ, then we have H[ξ] ≤ H[η].

Proof: Since μ(x) ≤ ν(x), we have S(μ(x)/2) ≤ S(ν(x)/2) for any x ∈ ℜ. It follows that H[ξ] ≤ H[η].

Theorem 2.53 Let ξ be a continuous fuzzy variable. Then for any real numbers a and b, we have H[aξ + b] = |a|H[ξ].

Proof: It follows from the definition of entropy that
\[ H[a\xi+b]=\int_{-\infty}^{+\infty}S(\mathrm{Cr}\{a\xi+b=x\})\,dx=|a|\int_{-\infty}^{+\infty}S(\mathrm{Cr}\{\xi=y\})\,dy=|a|H[\xi]. \]

Maximum Entropy Principle

Given some constraints, for example, expected value and variance, there are usually multiple compatible membership functions. Which membership function shall we take? The maximum entropy principle attempts to select the membership function that maximizes the value of entropy and satisfies the prescribed constraints.

Theorem 2.25 (Li and Liu [102]) Let ξ be a continuous nonnegative fuzzy variable with finite second moment m². Then
\[ H[\xi]\le\frac{\pi m}{\sqrt{6}} \tag{2.79} \]
and the equality holds if ξ is an exponentially distributed fuzzy variable with second moment m².
second moment m2 .
Proof: Let μ be the membership function of ξ. Note that μ is a continuous function. The proof is based on the following two steps.

Step 1: Suppose that μ is a decreasing function on [0, +∞). For this case, we have Cr{ξ ≥ x} = μ(x)/2 for any x > 0. Thus the second moment is
\[ E[\xi^2]=\int_0^{+\infty}\mathrm{Cr}\{\xi^2\ge x\}\,dx=\int_0^{+\infty}2x\,\mathrm{Cr}\{\xi\ge x\}\,dx=\int_0^{+\infty}x\mu(x)\,dx. \]
The maximum entropy membership function μ should maximize the entropy
\[ -\int_0^{+\infty}\Big(\frac{\mu(x)}{2}\ln\frac{\mu(x)}{2}+\Big(1-\frac{\mu(x)}{2}\Big)\ln\Big(1-\frac{\mu(x)}{2}\Big)\Big)dx \]
subject to the moment constraint
\[ \int_0^{+\infty}x\mu(x)\,dx=m^2. \]
The Lagrangian is
\[ L=-\int_0^{+\infty}\Big(\frac{\mu(x)}{2}\ln\frac{\mu(x)}{2}+\Big(1-\frac{\mu(x)}{2}\Big)\ln\Big(1-\frac{\mu(x)}{2}\Big)\Big)dx-\lambda\Big(\int_0^{+\infty}x\mu(x)\,dx-m^2\Big). \]
The maximum entropy membership function meets the Euler-Lagrange equation
\[ \frac12\ln\frac{\mu(x)}{2}-\frac12\ln\Big(1-\frac{\mu(x)}{2}\Big)+\lambda x=0 \]
and has the form μ(x) = 2(1 + exp(2λx))⁻¹. Substituting it into the moment constraint, we get
\[ \mu^*(x)=2\Big(1+\exp\Big(\frac{\pi x}{\sqrt{6}\,m}\Big)\Big)^{-1}, \quad x\ge 0, \]
which is just the exponential membership function with second moment m², and the maximum entropy is H[ξ*] = πm/√6.

Step 2: Let ξ be a general fuzzy variable with second moment m². Now we define a fuzzy variable ξ̂ via the membership function
\[ \hat\mu(x)=\sup_{y\ge x}\mu(y), \quad x\ge 0. \]
Then μ̂ is a decreasing function on [0, +∞), and
\[ \mathrm{Cr}\{\hat\xi^2\ge x\}=\frac12\sup_{y\ge\sqrt{x}}\hat\mu(y)=\frac12\sup_{y\ge\sqrt{x}}\sup_{z\ge y}\mu(z)=\frac12\sup_{z\ge\sqrt{x}}\mu(z)\le\mathrm{Cr}\{\xi^2\ge x\} \]
for any x > 0. Thus we have
\[ E[\hat\xi^2]=\int_0^{+\infty}\mathrm{Cr}\{\hat\xi^2\ge x\}\,dx\le\int_0^{+\infty}\mathrm{Cr}\{\xi^2\ge x\}\,dx=E[\xi^2]=m^2. \]
It follows from μ(x) ≤ μ̂(x) and Step 1 that
\[ H[\xi]\le H[\hat\xi]\le\frac{\pi\sqrt{E[\hat\xi^2]}}{\sqrt{6}}\le\frac{\pi m}{\sqrt{6}}. \]
The theorem is thus proved.
Theorem 2.26 (Li and Liu [102]) Let ξ be a continuous fuzzy variable with finite expected value e and variance σ². Then
\[ H[\xi]\le\frac{\sqrt{6}\,\pi\sigma}{3} \tag{2.80} \]
and the equality holds if ξ is a normally distributed fuzzy variable with expected value e and variance σ².


Proof: Let μ be the continuous membership function of ξ. The proof is based on the following two steps.

Step 1: Let μ(x) be a unimodal and symmetric function about x = e. For this case, the variance is
\[ V[\xi]=\int_0^{+\infty}\mathrm{Cr}\{(\xi-e)^2\ge x\}\,dx=\int_0^{+\infty}\mathrm{Cr}\{\xi\ge e+\sqrt{x}\}\,dx=\int_e^{+\infty}2(x-e)\,\mathrm{Cr}\{\xi\ge x\}\,dx=\int_e^{+\infty}(x-e)\mu(x)\,dx \]
and the entropy is
\[ H[\xi]=-2\int_e^{+\infty}\Big(\frac{\mu(x)}{2}\ln\frac{\mu(x)}{2}+\Big(1-\frac{\mu(x)}{2}\Big)\ln\Big(1-\frac{\mu(x)}{2}\Big)\Big)dx. \]
The maximum entropy membership function μ should maximize the entropy subject to the variance constraint. The Lagrangian is
\[ L=-2\int_e^{+\infty}\Big(\frac{\mu(x)}{2}\ln\frac{\mu(x)}{2}+\Big(1-\frac{\mu(x)}{2}\Big)\ln\Big(1-\frac{\mu(x)}{2}\Big)\Big)dx-\lambda\Big(\int_e^{+\infty}(x-e)\mu(x)\,dx-\sigma^2\Big). \]
The maximum entropy membership function meets the Euler-Lagrange equation
\[ \ln\frac{\mu(x)}{2}-\ln\Big(1-\frac{\mu(x)}{2}\Big)+\lambda(x-e)=0 \]
and has the form μ(x) = 2(1 + exp(λ(x − e)))⁻¹. Substituting it into the variance constraint, we get
\[ \mu^*(x)=2\Big(1+\exp\Big(\frac{\pi|x-e|}{\sqrt{6}\,\sigma}\Big)\Big)^{-1}, \quad x\in\Re, \]
which is just the normal membership function with expected value e and variance σ², and the maximum entropy is H[ξ*] = √6 πσ/3.

Step 2: Let ξ be a general fuzzy variable with expected value e and variance σ². We define a fuzzy variable ξ̂ by the membership function
\[ \hat\mu(x)=\begin{cases} \sup_{y\le x}\big(\mu(y)\vee\mu(2e-y)\big), & \text{if } x\le e \\[4pt] \sup_{y\ge x}\big(\mu(y)\vee\mu(2e-y)\big), & \text{if } x>e. \end{cases} \]
It is easy to verify that μ̂(x) is a unimodal and symmetric function about x = e. Furthermore,
\[
\begin{aligned}
\mathrm{Cr}\{(\hat\xi-e)^2\ge r\} &= \frac12\sup_{x\ge e+\sqrt{r}}\hat\mu(x)=\frac12\sup_{x\ge e+\sqrt{r}}\ \sup_{y\ge x}\big(\mu(y)\vee\mu(2e-y)\big) \\
&= \frac12\sup_{y\ge e+\sqrt{r}}\big(\mu(y)\vee\mu(2e-y)\big)=\frac12\sup_{(y-e)^2\ge r}\mu(y) \\
&\le \mathrm{Cr}\big\{(\xi-e)^2\ge r\big\}
\end{aligned}
\]
for any r > 0. Thus
\[ V[\hat\xi]=\int_0^{+\infty}\mathrm{Cr}\{(\hat\xi-e)^2\ge r\}\,dr\le\int_0^{+\infty}\mathrm{Cr}\{(\xi-e)^2\ge r\}\,dr=\sigma^2. \]
It follows from μ(x) ≤ μ̂(x) and Step 1 that
\[ H[\xi]\le H[\hat\xi]\le\frac{\sqrt{6}\,\pi\sqrt{V[\hat\xi]}}{3}\le\frac{\sqrt{6}\,\pi\sigma}{3}. \]
The proof is complete.

2.12 Distance

Distance between fuzzy variables has been defined in many ways, for example, the Hausdorff distance (Puri and Ralescu [192], Klement et al. [81]) and the Hamming distance (Kacprzyk [72]). However, those definitions have no identification property. In order to overcome this shortcoming, Liu [129] proposed a definition of distance as follows.

Definition 2.27 (Liu [129]) The distance between fuzzy variables ξ and η is defined as
\[ d(\xi,\eta)=E[|\xi-\eta|]. \tag{2.81} \]

Example 2.61: Let ξ and η be equipossible fuzzy variables (a1, b1) and (a2, b2), respectively, and (a1, b1) ∩ (a2, b2) = ∅. Then |ξ − η| is an equipossible fuzzy variable on the interval with endpoints |a1 − b2| and |b1 − a2|. Thus the distance between ξ and η is the expected value of |ξ − η|, i.e.,
\[ d(\xi,\eta)=\frac12\big(|a_1-b_2|+|b_1-a_2|\big). \]

Example 2.62: Let ξ = (a1, b1, c1) and η = (a2, b2, c2) be triangular fuzzy variables such that (a1, c1) ∩ (a2, c2) = ∅. Then
\[ d(\xi,\eta)=\frac14\big(|a_1-c_2|+2|b_1-b_2|+|c_1-a_2|\big). \]

Example 2.63: Let ξ = (a1, b1, c1, d1) and η = (a2, b2, c2, d2) be trapezoidal fuzzy variables such that (a1, d1) ∩ (a2, d2) = ∅. Then
\[ d(\xi,\eta)=\frac14\big(|a_1-d_2|+|b_1-c_2|+|c_1-b_2|+|d_1-a_2|\big). \]
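A small hedged sketch of Example 2.62's closed form; the parameters are illustrative (the supports must be disjoint for the formula to apply).

```python
# Sketch: distance between disjoint triangular fuzzy variables (Example 2.62).
def d_tri(t1, t2):
    a1, b1, c1 = t1
    a2, b2, c2 = t2
    return 0.25 * (abs(a1 - c2) + 2 * abs(b1 - b2) + abs(c1 - a2))

print(d_tri((0, 1, 2), (4, 5, 6)))   # 0.25 * (6 + 8 + 2) = 4.0
```

As a cross-check under these assumptions, ξ − η is the triangular variable (−6, −4, −2), so |ξ − η| = (2, 4, 6) with expected value (2 + 8 + 6)/4 = 4, agreeing with the formula.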

Theorem 2.54 (Li and Liu [105]) Let ξ, η, τ be fuzzy variables, and let d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ) + 2d(η, τ).

Proof: Parts (a), (b) and (c) follow immediately from the definition. Now we prove part (d). It follows from the credibility subadditivity theorem that
\[
\begin{aligned}
d(\xi,\eta) &= \int_0^{+\infty}\mathrm{Cr}\{|\xi-\eta|\ge r\}\,dr \\
&\le \int_0^{+\infty}\mathrm{Cr}\{|\xi-\tau|+|\tau-\eta|\ge r\}\,dr \\
&\le \int_0^{+\infty}\mathrm{Cr}\big\{\{|\xi-\tau|\ge r/2\}\cup\{|\tau-\eta|\ge r/2\}\big\}\,dr \\
&\le \int_0^{+\infty}\big(\mathrm{Cr}\{|\xi-\tau|\ge r/2\}+\mathrm{Cr}\{|\tau-\eta|\ge r/2\}\big)\,dr \\
&= \int_0^{+\infty}\mathrm{Cr}\{|\xi-\tau|\ge r/2\}\,dr+\int_0^{+\infty}\mathrm{Cr}\{|\tau-\eta|\ge r/2\}\,dr \\
&= 2E[|\xi-\tau|]+2E[|\tau-\eta|]=2d(\xi,\tau)+2d(\eta,\tau).
\end{aligned}
\]


Example 2.64: Let Θ = {θ1, θ2, θ3} and Cr{θi} = 1/2 for i = 1, 2, 3. We define fuzzy variables ξ, η and τ as follows,
\[ \xi(\theta)=\begin{cases} 1, & \text{if } \theta\ne\theta_3 \\ 0, & \text{otherwise,} \end{cases} \qquad \eta(\theta)=\begin{cases} -1, & \text{if } \theta\ne\theta_1 \\ 0, & \text{otherwise,} \end{cases} \qquad \tau(\theta)\equiv 0. \]
It is easy to verify that d(ξ, τ) = d(τ, η) = 1/2 and d(ξ, η) = 3/2. Thus
\[ d(\xi,\eta)=\frac32\big(d(\xi,\tau)+d(\tau,\eta)\big). \]

2.13 Inequalities

There are several useful inequalities for random variables, such as the Markov inequality, Chebyshev inequality, Hölder's inequality, Minkowski inequality, and Jensen's inequality. This section introduces the analogous inequalities for fuzzy variables.
for fuzzy variable.
Theorem 2.55 (Liu [128]) Let ξ be a fuzzy variable, and f a nonnegative function. If f is even and increasing on [0, ∞), then for any given number t > 0, we have
\[ \mathrm{Cr}\{|\xi|\ge t\}\le\frac{E[f(\xi)]}{f(t)}. \tag{2.82} \]

Proof: It is clear that Cr{|ξ| ≥ f⁻¹(r)} is a monotone decreasing function of r on [0, ∞). It follows from the nonnegativity of f(ξ) that
\[
\begin{aligned}
E[f(\xi)] &= \int_0^{+\infty}\mathrm{Cr}\{f(\xi)\ge r\}\,dr=\int_0^{+\infty}\mathrm{Cr}\{|\xi|\ge f^{-1}(r)\}\,dr \\
&\ge \int_0^{f(t)}\mathrm{Cr}\{|\xi|\ge f^{-1}(r)\}\,dr\ge\int_0^{f(t)}dr\cdot\mathrm{Cr}\{|\xi|\ge f^{-1}(f(t))\} \\
&= f(t)\,\mathrm{Cr}\{|\xi|\ge t\}
\end{aligned}
\]
which proves the inequality.
Theorem 2.56 (Liu [128], Markov Inequality) Let ξ be a fuzzy variable. Then for any given numbers t > 0 and p > 0, we have
\[ \mathrm{Cr}\{|\xi|\ge t\}\le\frac{E[|\xi|^p]}{t^p}. \tag{2.83} \]
Proof: It is a special case of Theorem 2.55 when f(x) = |x|ᵖ.

Theorem 2.57 (Liu [128], Chebyshev Inequality) Let ξ be a fuzzy variable whose variance V[ξ] exists. Then for any given number t > 0, we have
\[ \mathrm{Cr}\{|\xi-E[\xi]|\ge t\}\le\frac{V[\xi]}{t^2}. \tag{2.84} \]
Proof: It is a special case of Theorem 2.55 when the fuzzy variable ξ is replaced with ξ − E[ξ], and f(x) = x².
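A hedged numerical check of the Markov inequality (2.83): for an illustrative triangular variable (0, 1, 3) with t = 2 and p = 1, the left side is computed by the inversion theorem and the right side from the closed-form expected value of Example 2.30.

```python
import numpy as np

# Sketch: Cr{|xi| >= t} <= E[|xi|^p] / t^p for a triangular variable.
a, b, c = 0.0, 1.0, 3.0
grid = np.linspace(a - 1.0, c + 1.0, 20001)
vals = np.clip(np.minimum((grid - a) / (b - a), (c - grid) / (c - b)), 0, 1)

def cr(mask):   # Cr of an event by the credibility inversion theorem
    return 0.5 * (vals[mask].max(initial=0.0)
                  + 1.0 - vals[~mask].max(initial=0.0))

t, p = 2.0, 1.0
lhs = cr(np.abs(grid) >= t)             # Cr{|xi| >= 2} = 0.25
rhs = (a + 2 * b + c) / 4.0 / t ** p    # E[|xi|] / t = 0.625 (xi >= 0 here)
print(lhs, rhs, lhs <= rhs)             # the inequality holds
```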
Theorem 2.58 (Liu [128], Hölder's Inequality) Let p and q be two positive real numbers with 1/p + 1/q = 1, and let ξ and η be independent fuzzy variables with E[|ξ|ᵖ] < ∞ and E[|η|^q] < ∞. Then we have
\[ E[|\xi\eta|]\le\sqrt[p]{E[|\xi|^p]}\ \sqrt[q]{E[|\eta|^q]}. \tag{2.85} \]

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|ᵖ] > 0 and E[|η|^q] > 0. It is easy to prove that the function f(x, y) = x^{1/p} y^{1/q} is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real numbers a and b such that
\[ f(x,y)-f(x_0,y_0)\le a(x-x_0)+b(y-y_0), \quad \forall(x,y)\in D. \]
Letting x0 = E[|ξ|ᵖ], y0 = E[|η|^q], x = |ξ|ᵖ and y = |η|^q, we have
\[ f(|\xi|^p,|\eta|^q)-f(E[|\xi|^p],E[|\eta|^q])\le a(|\xi|^p-E[|\xi|^p])+b(|\eta|^q-E[|\eta|^q]). \]
Taking the expected values on both sides, we obtain
\[ E[f(|\xi|^p,|\eta|^q)]\le f(E[|\xi|^p],E[|\eta|^q]). \]
Hence the inequality (2.85) holds.
Theorem 2.59 (Liu [128], Minkowski Inequality) Let p be a real number with p ≥ 1, and let ξ and η be independent fuzzy variables with E[|ξ|ᵖ] < ∞ and E[|η|ᵖ] < ∞. Then we have
\[ \sqrt[p]{E[|\xi+\eta|^p]}\le\sqrt[p]{E[|\xi|^p]}+\sqrt[p]{E[|\eta|^p]}. \tag{2.86} \]

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|ᵖ] > 0 and E[|η|ᵖ] > 0. It is easy to prove that the function f(x, y) = (x^{1/p} + y^{1/p})ᵖ is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x0, y0) with x0 > 0 and y0 > 0, there exist two real numbers a and b such that
\[ f(x,y)-f(x_0,y_0)\le a(x-x_0)+b(y-y_0), \quad \forall(x,y)\in D. \]
Letting x0 = E[|ξ|ᵖ], y0 = E[|η|ᵖ], x = |ξ|ᵖ and y = |η|ᵖ, we have
\[ f(|\xi|^p,|\eta|^p)-f(E[|\xi|^p],E[|\eta|^p])\le a(|\xi|^p-E[|\xi|^p])+b(|\eta|^p-E[|\eta|^p]). \]
Taking the expected values on both sides, we obtain
\[ E[f(|\xi|^p,|\eta|^p)]\le f(E[|\xi|^p],E[|\eta|^p]). \]
Hence the inequality (2.86) holds.
Theorem 2.60 (Liu [129], Jensen's Inequality) Let ξ be a fuzzy variable, and f : ℜ → ℜ a convex function. If E[ξ] and E[f(ξ)] are finite, then
\[ f(E[\xi])\le E[f(\xi)]. \tag{2.87} \]
Especially, when f(x) = |x|ᵖ and p ≥ 1, we have |E[ξ]|ᵖ ≤ E[|ξ|ᵖ].

Proof: Since f is a convex function, for each y, there exists a number k such that f(x) − f(y) ≥ k·(x − y). Replacing x with ξ and y with E[ξ], we obtain
\[ f(\xi)-f(E[\xi])\ge k\cdot(\xi-E[\xi]). \]
Taking the expected values on both sides, we have
\[ E[f(\xi)]-f(E[\xi])\ge k\cdot(E[\xi]-E[\xi])=0 \]
which proves the inequality.

2.14 Convergence Concepts

This section discusses some convergence concepts of fuzzy sequences: convergence almost surely (a.s.), convergence in credibility, convergence in mean, and convergence in distribution.

Table 2.1: Relations among Convergence Concepts

    Convergence in Mean => Convergence in Credibility => Convergence Almost Surely
                                                      => Convergence in Distribution

Definition 2.28 (Liu [128]) Suppose that ξ, ξ1, ξ2, ... are fuzzy variables defined on the credibility space (Θ, P(Θ), Cr). The sequence {ξi} is said to be convergent a.s. to ξ if and only if there exists an event A with Cr{A} = 1 such that
\[ \lim_{i\to\infty}|\xi_i(\theta)-\xi(\theta)|=0 \tag{2.88} \]
for every θ ∈ A. In that case we write ξi → ξ, a.s.

Definition 2.29 (Liu [128]) Suppose that ξ, ξ1, ξ2, ... are fuzzy variables defined on the credibility space (Θ, P(Θ), Cr). We say that the sequence {ξi} converges in credibility to ξ if
\[ \lim_{i\to\infty}\mathrm{Cr}\{|\xi_i-\xi|\ge\varepsilon\}=0 \tag{2.89} \]
for every ε > 0.


Definition 2.30 (Liu [128]) Suppose that ξ, ξ1, ξ2, ... are fuzzy variables with finite expected values defined on the credibility space (Θ, P(Θ), Cr). We say that the sequence {ξi} converges in mean to ξ if
\[ \lim_{i\to\infty}E[|\xi_i-\xi|]=0. \tag{2.90} \]
In addition, the sequence {ξi} is said to converge in mean square to ξ if
\[ \lim_{i\to\infty}E[|\xi_i-\xi|^2]=0. \tag{2.91} \]

Definition 2.31 (Liu [128]) Suppose that Φ, Φ1, Φ2, ... are the credibility distributions of fuzzy variables ξ, ξ1, ξ2, ..., respectively. We say that {ξi} converges in distribution to ξ if Φi → Φ at any continuity point of Φ.
Convergence in Mean vs. Convergence in Credibility

Theorem 2.61 (Liu [128]) Suppose that ξ, ξ1, ξ2, ... are fuzzy variables defined on the credibility space (Θ, P(Θ), Cr). If the sequence {ξi} converges in mean to ξ, then {ξi} converges in credibility to ξ.

Proof: It follows from Theorem 2.56 that, for any given number ε > 0,
\[ \mathrm{Cr}\{|\xi_i-\xi|\ge\varepsilon\}\le\frac{E[|\xi_i-\xi|]}{\varepsilon}\to 0 \]
as i → ∞. Thus {ξi} converges in credibility to ξ.

Example 2.65: Convergence in credibility does not imply convergence in mean. For example, take (Θ, P(Θ), Cr) to be {θ1, θ2, ...} with Cr{θ1} = 1/2 and Cr{θj} = 1/j for j = 2, 3, ... The fuzzy variables are defined by
\[ \xi_i(\theta_j)=\begin{cases} i, & \text{if } j=i \\ 0, & \text{otherwise} \end{cases} \]
for i = 1, 2, ... and ξ = 0. For any small number ε > 0, we have
\[ \mathrm{Cr}\{|\xi_i-\xi|\ge\varepsilon\}=\frac{1}{i}\to 0. \]
That is, the sequence {ξi} converges in credibility to ξ. However,
\[ E[|\xi_i-\xi|]\equiv 1\not\to 0. \]
That is, the sequence {ξi} does not converge in mean to ξ.
Convergence Almost Surely vs. Convergence in Credibility

Example 2.66: Convergence a.s. does not imply convergence in credibility. For example, take (Θ, P(Θ), Cr) to be {θ1, θ2, ...} with Cr{θj} = j/(2j + 1) for j = 1, 2, ... The fuzzy variables are defined by
\[ \xi_i(\theta_j)=\begin{cases} i, & \text{if } j=i \\ 0, & \text{otherwise} \end{cases} \]
for i = 1, 2, ... and ξ = 0. Then the sequence {ξi} converges a.s. to ξ. However, for any small number ε > 0, we have
\[ \mathrm{Cr}\{|\xi_i-\xi|\ge\varepsilon\}=\frac{i}{2i+1}\to\frac12. \]
That is, the sequence {ξi} does not converge in credibility to ξ.


Theorem 2.62 (Wang and Liu [221]) Suppose that ξ, ξ1, ξ2, ... are fuzzy variables defined on the credibility space (Θ, P(Θ), Cr). If the sequence {ξi} converges in credibility to ξ, then {ξi} converges a.s. to ξ.

Proof: If {ξi} does not converge a.s. to ξ, then there exists an element θ* ∈ Θ with Cr{θ*} > 0 such that ξi(θ*) does not converge to ξ(θ*) as i → ∞. In other words, there exists a small number ε > 0 and a subsequence {ξ_{i_k}(θ*)} such that |ξ_{i_k}(θ*) − ξ(θ*)| ≥ ε for any k. Since credibility measure is an increasing set function, we have
\[ \mathrm{Cr}\{|\xi_{i_k}-\xi|\ge\varepsilon\}\ge\mathrm{Cr}\{\theta^*\}>0 \]
for any k. It follows that {ξi} does not converge in credibility to ξ. A contradiction proves the theorem.
Convergence in Credibility vs. Convergence in Distribution

Theorem 2.63 (Wang and Liu [221]) Suppose that ξ, ξ1, ξ2, ... are fuzzy variables. If the sequence {ξi} converges in credibility to ξ, then {ξi} converges in distribution to ξ.

Proof: Let x be any given continuity point of the distribution Φ. On the one hand, for any y > x, we have
\[ \{\xi_i\le x\}=\{\xi_i\le x,\xi\le y\}\cup\{\xi_i\le x,\xi>y\}\subset\{\xi\le y\}\cup\{|\xi_i-\xi|\ge y-x\}. \]
It follows from the credibility subadditivity theorem that
\[ \Phi_i(x)\le\Phi(y)+\mathrm{Cr}\{|\xi_i-\xi|\ge y-x\}. \]
Since {ξi} converges in credibility to ξ, we have Cr{|ξi − ξ| ≥ y − x} → 0. Thus we obtain lim sup_i Φi(x) ≤ Φ(y) for any y > x. Letting y → x, we get
\[ \limsup_{i\to\infty}\Phi_i(x)\le\Phi(x). \tag{2.92} \]
On the other hand, for any z < x, we have
\[ \{\xi\le z\}=\{\xi\le z,\xi_i\le x\}\cup\{\xi\le z,\xi_i>x\}\subset\{\xi_i\le x\}\cup\{|\xi_i-\xi|\ge x-z\} \]
which implies that
\[ \Phi(z)\le\Phi_i(x)+\mathrm{Cr}\{|\xi_i-\xi|\ge x-z\}. \]
Since Cr{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf_i Φi(x) for any z < x. Letting z → x, we get
\[ \Phi(x)\le\liminf_{i\to\infty}\Phi_i(x). \tag{2.93} \]
It follows from (2.92) and (2.93) that Φi(x) → Φ(x). The theorem is proved.
Example 2.67: Convergence in distribution does not imply convergence in credibility. For example, take $(\Theta, \mathcal{P}, \mathrm{Cr})$ to be $\{\theta_1, \theta_2\}$ with $\mathrm{Cr}\{\theta_1\} = \mathrm{Cr}\{\theta_2\} = 1/2$, and define
$$\xi(\theta) = \begin{cases} -1, & \text{if } \theta = \theta_1 \\ 1, & \text{if } \theta = \theta_2. \end{cases}$$
We also define $\xi_i = -\xi$ for $i = 1, 2, \cdots$ Then $\xi_i$ and $\xi$ are identically distributed. Thus $\{\xi_i\}$ converges in distribution to $\xi$. But, for any small number $\varepsilon > 0$, we have $\mathrm{Cr}\{|\xi_i - \xi| > \varepsilon\} = \mathrm{Cr}\{\Theta\} = 1$. That is, the sequence $\{\xi_i\}$ does not converge in credibility to $\xi$.

Convergence Almost Surely vs. Convergence in Distribution

Example 2.68: Convergence in distribution does not imply convergence a.s. For example, take $(\Theta, \mathcal{P}, \mathrm{Cr})$ to be $\{\theta_1, \theta_2\}$ with $\mathrm{Cr}\{\theta_1\} = \mathrm{Cr}\{\theta_2\} = 1/2$, and define
$$\xi(\theta) = \begin{cases} -1, & \text{if } \theta = \theta_1 \\ 1, & \text{if } \theta = \theta_2. \end{cases}$$
We also define $\xi_i = -\xi$ for $i = 1, 2, \cdots$ Then $\{\xi_i\}$ converges in distribution to $\xi$. However, $\{\xi_i\}$ does not converge a.s. to $\xi$.

Example 2.69: Convergence a.s. does not imply convergence in distribution. For example, take $(\Theta, \mathcal{P}, \mathrm{Cr})$ to be $\{\theta_1, \theta_2, \cdots\}$ with $\mathrm{Cr}\{\theta_j\} = j/(2j+1)$ for $j = 1, 2, \cdots$ The fuzzy variables are defined by
$$\xi_i(\theta_j) = \begin{cases} i, & \text{if } j = i \\ 0, & \text{otherwise} \end{cases}$$
for $i = 1, 2, \cdots$ and $\xi = 0$. Then the sequence $\{\xi_i\}$ converges a.s. to $\xi$. However, the credibility distributions of $\xi_i$ are
$$\Phi_i(x) = \begin{cases} 0, & \text{if } x < 0 \\ (i+1)/(2i+1), & \text{if } 0 \le x < i \\ 1, & \text{if } x \ge i, \end{cases}$$
$i = 1, 2, \cdots$, respectively. The credibility distribution of $\xi$ is
$$\Phi(x) = \begin{cases} 0, & \text{if } x < 0 \\ 1, & \text{if } x \ge 0. \end{cases}$$
It is clear that $\Phi_i(x) \not\to \Phi(x)$ at any $x > 0$. That is, the sequence $\{\xi_i\}$ does not converge in distribution to $\xi$.

2.15 Conditional Credibility

We now consider the credibility of an event A after it has been learned that some other event B has occurred. This new credibility of A is called the conditional credibility of A given B.

The first problem is whether the conditional credibility is determined uniquely and completely. The answer is negative: any attempt to define an unalterable and universally accepted conditional credibility is doomed to failure. For this reason, it is appropriate to speak of a certain person's subjective conditional credibility rather than of the true conditional credibility.

In order to define a conditional credibility measure $\mathrm{Cr}\{A|B\}$, we first have to enlarge $\mathrm{Cr}\{A \cap B\}$, because $\mathrm{Cr}\{A \cap B\} < 1$ for all events whenever $\mathrm{Cr}\{B\} < 1$. It seems that we have no alternative but to divide $\mathrm{Cr}\{A \cap B\}$ by $\mathrm{Cr}\{B\}$. Unfortunately, $\mathrm{Cr}\{A \cap B\}/\mathrm{Cr}\{B\}$ is not always a credibility measure. However, the value $\mathrm{Cr}\{A|B\}$ should not be greater than $\mathrm{Cr}\{A \cap B\}/\mathrm{Cr}\{B\}$ (otherwise normality would be lost), i.e.,
$$\mathrm{Cr}\{A|B\} \le \frac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}}. \tag{2.94}$$
On the other hand, in order to preserve self-duality, we should have
$$\mathrm{Cr}\{A|B\} = 1 - \mathrm{Cr}\{A^c|B\} \ge 1 - \frac{\mathrm{Cr}\{A^c \cap B\}}{\mathrm{Cr}\{B\}}. \tag{2.95}$$
Furthermore, since $(A \cap B) \cup (A^c \cap B) = B$, we have $\mathrm{Cr}\{B\} \le \mathrm{Cr}\{A \cap B\} + \mathrm{Cr}\{A^c \cap B\}$ by the credibility subadditivity theorem. Thus
$$0 \le 1 - \frac{\mathrm{Cr}\{A^c \cap B\}}{\mathrm{Cr}\{B\}} \le \frac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}} \le 1. \tag{2.96}$$
Hence any number between $1 - \mathrm{Cr}\{A^c \cap B\}/\mathrm{Cr}\{B\}$ and $\mathrm{Cr}\{A \cap B\}/\mathrm{Cr}\{B\}$ is a reasonable value for the conditional credibility. Based on the maximum uncertainty principle, we adopt the following conditional credibility measure.

Definition 2.32 (Liu [132]) Let $(\Theta, \mathcal{P}, \mathrm{Cr})$ be a credibility space, and $A, B \in \mathcal{P}$. Then the conditional credibility measure of A given B is defined by
$$\mathrm{Cr}\{A|B\} = \begin{cases} \dfrac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}}, & \text{if } \dfrac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}} < 0.5 \\[2mm] 1 - \dfrac{\mathrm{Cr}\{A^c \cap B\}}{\mathrm{Cr}\{B\}}, & \text{if } \dfrac{\mathrm{Cr}\{A^c \cap B\}}{\mathrm{Cr}\{B\}} < 0.5 \\[2mm] 0.5, & \text{otherwise} \end{cases} \tag{2.97}$$
provided that $\mathrm{Cr}\{B\} > 0$.

It follows immediately from the definition of conditional credibility that
$$1 - \frac{\mathrm{Cr}\{A^c \cap B\}}{\mathrm{Cr}\{B\}} \le \mathrm{Cr}\{A|B\} \le \frac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}}. \tag{2.98}$$
Furthermore, the conditional credibility measure takes a value as close to 0.5 as possible within this interval. In other words, it accords with the maximum uncertainty principle.
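As a computational illustration, the following sketch (mine, not from the book) implements (2.97) on a finite credibility space where the credibility of an event is induced from singleton credibilities via maximality and self-duality. The three-point space and its values are hypothetical:

```python
# Conditional credibility Cr{A|B} by equation (2.97) on a finite space.
# Events are Python sets of atoms; `singles` maps atoms to Cr of singletons.

def cr(event, singles):
    """Credibility of an event: sup over atoms if < 0.5, else dual form."""
    if not event:
        return 0.0
    s = max(singles[t] for t in event)
    if s < 0.5:
        return s
    comp = set(singles) - set(event)
    return 1.0 - (max(singles[t] for t in comp) if comp else 0.0)

def cr_cond(A, B, singles):
    """Conditional credibility Cr{A|B} as defined by (2.97)."""
    crB = cr(B, singles)
    assert crB > 0, "requires Cr{B} > 0"
    r1 = cr(set(A) & set(B), singles) / crB       # Cr{A ∩ B}/Cr{B}
    if r1 < 0.5:
        return r1
    r2 = cr(set(B) - set(A), singles) / crB       # Cr{A^c ∩ B}/Cr{B}
    return 1.0 - r2 if r2 < 0.5 else 0.5

# Usage on a hypothetical three-point space:
singles = {"t1": 0.7, "t2": 0.3, "t3": 0.2}
print(cr_cond({"t1"}, {"t1", "t2"}, singles))     # 0.625
```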
Theorem 2.64 (Liu [132]) Let $(\Theta, \mathcal{P}, \mathrm{Cr})$ be a credibility space, and B an event with $\mathrm{Cr}\{B\} > 0$. Then $\mathrm{Cr}\{\cdot|B\}$ defined by (2.97) is a credibility measure, and $(\Theta, \mathcal{P}, \mathrm{Cr}\{\cdot|B\})$ is a credibility space.

Proof: It is sufficient to prove that $\mathrm{Cr}\{\cdot|B\}$ satisfies the normality, monotonicity, self-duality and maximality axioms. First, it satisfies the normality axiom, i.e.,
$$\mathrm{Cr}\{\Theta|B\} = 1 - \frac{\mathrm{Cr}\{\Theta^c \cap B\}}{\mathrm{Cr}\{B\}} = 1 - \frac{\mathrm{Cr}\{\emptyset\}}{\mathrm{Cr}\{B\}} = 1.$$
For any events $A_1$ and $A_2$ with $A_1 \subset A_2$, if
$$\frac{\mathrm{Cr}\{A_1 \cap B\}}{\mathrm{Cr}\{B\}} \le \frac{\mathrm{Cr}\{A_2 \cap B\}}{\mathrm{Cr}\{B\}} < 0.5,$$
then
$$\mathrm{Cr}\{A_1|B\} = \frac{\mathrm{Cr}\{A_1 \cap B\}}{\mathrm{Cr}\{B\}} \le \frac{\mathrm{Cr}\{A_2 \cap B\}}{\mathrm{Cr}\{B\}} = \mathrm{Cr}\{A_2|B\}.$$
If
$$\frac{\mathrm{Cr}\{A_1 \cap B\}}{\mathrm{Cr}\{B\}} \le 0.5 \le \frac{\mathrm{Cr}\{A_2 \cap B\}}{\mathrm{Cr}\{B\}},$$
then $\mathrm{Cr}\{A_1|B\} \le 0.5 \le \mathrm{Cr}\{A_2|B\}$. If
$$0.5 < \frac{\mathrm{Cr}\{A_1 \cap B\}}{\mathrm{Cr}\{B\}} \le \frac{\mathrm{Cr}\{A_2 \cap B\}}{\mathrm{Cr}\{B\}},$$
then we have
$$\mathrm{Cr}\{A_1|B\} = \left(1 - \frac{\mathrm{Cr}\{A_1^c \cap B\}}{\mathrm{Cr}\{B\}}\right) \vee 0.5 \le \left(1 - \frac{\mathrm{Cr}\{A_2^c \cap B\}}{\mathrm{Cr}\{B\}}\right) \vee 0.5 = \mathrm{Cr}\{A_2|B\}.$$
This means that $\mathrm{Cr}\{\cdot|B\}$ satisfies the monotonicity axiom. For any event A, if
$$\frac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}} \ge 0.5, \qquad \frac{\mathrm{Cr}\{A^c \cap B\}}{\mathrm{Cr}\{B\}} \ge 0.5,$$
then $\mathrm{Cr}\{A|B\} + \mathrm{Cr}\{A^c|B\} = 0.5 + 0.5 = 1$ immediately. Otherwise, without loss of generality, suppose
$$\frac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}} < 0.5 < \frac{\mathrm{Cr}\{A^c \cap B\}}{\mathrm{Cr}\{B\}};$$
then we have
$$\mathrm{Cr}\{A|B\} + \mathrm{Cr}\{A^c|B\} = \frac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}} + \left(1 - \frac{\mathrm{Cr}\{A \cap B\}}{\mathrm{Cr}\{B\}}\right) = 1.$$
That is, $\mathrm{Cr}\{\cdot|B\}$ satisfies the self-duality axiom. Finally, for any events $\{A_i\}$ with $\sup_i \mathrm{Cr}\{A_i|B\} < 0.5$, we have $\sup_i \mathrm{Cr}\{A_i \cap B\} < 0.5$ and
$$\sup_i \mathrm{Cr}\{A_i|B\} = \frac{\sup_i \mathrm{Cr}\{A_i \cap B\}}{\mathrm{Cr}\{B\}} = \frac{\mathrm{Cr}\{\cup_i A_i \cap B\}}{\mathrm{Cr}\{B\}} = \mathrm{Cr}\{\cup_i A_i | B\}.$$
Thus $\mathrm{Cr}\{\cdot|B\}$ satisfies the maximality axiom. Hence $\mathrm{Cr}\{\cdot|B\}$ is a credibility measure. Furthermore, $(\Theta, \mathcal{P}, \mathrm{Cr}\{\cdot|B\})$ is a credibility space.
Example 2.70: Let $\xi$ be a fuzzy variable, and X a set of real numbers such that $\mathrm{Cr}\{\xi \in X\} > 0$. Then for any $x \in X$, the conditional credibility of $\xi = x$ given $\xi \in X$ is
$$\mathrm{Cr}\{\xi = x \mid \xi \in X\} = \begin{cases} \dfrac{\mathrm{Cr}\{\xi = x\}}{\mathrm{Cr}\{\xi \in X\}}, & \text{if } \dfrac{\mathrm{Cr}\{\xi = x\}}{\mathrm{Cr}\{\xi \in X\}} < 0.5 \\[2mm] 1 - \dfrac{\mathrm{Cr}\{\xi \ne x, \xi \in X\}}{\mathrm{Cr}\{\xi \in X\}}, & \text{if } \dfrac{\mathrm{Cr}\{\xi \ne x, \xi \in X\}}{\mathrm{Cr}\{\xi \in X\}} < 0.5 \\[2mm] 0.5, & \text{otherwise.} \end{cases}$$

Example 2.71: Let $\xi$ and $\eta$ be two fuzzy variables, and Y a set of real numbers such that $\mathrm{Cr}\{\eta \in Y\} > 0$. Then we have
$$\mathrm{Cr}\{\xi = x \mid \eta \in Y\} = \begin{cases} \dfrac{\mathrm{Cr}\{\xi = x, \eta \in Y\}}{\mathrm{Cr}\{\eta \in Y\}}, & \text{if } \dfrac{\mathrm{Cr}\{\xi = x, \eta \in Y\}}{\mathrm{Cr}\{\eta \in Y\}} < 0.5 \\[2mm] 1 - \dfrac{\mathrm{Cr}\{\xi \ne x, \eta \in Y\}}{\mathrm{Cr}\{\eta \in Y\}}, & \text{if } \dfrac{\mathrm{Cr}\{\xi \ne x, \eta \in Y\}}{\mathrm{Cr}\{\eta \in Y\}} < 0.5 \\[2mm] 0.5, & \text{otherwise.} \end{cases}$$

Definition 2.33 (Liu [132]) The conditional membership function of a fuzzy variable $\xi$ given B is defined by
$$\mu(x|B) = \left(2\,\mathrm{Cr}\{\xi = x \mid B\}\right) \wedge 1, \qquad x \in \Re, \tag{2.99}$$
provided that $\mathrm{Cr}\{B\} > 0$.

Example 2.72: Let $\xi$ be a fuzzy variable with membership function $\mu(x)$, and X a set of real numbers such that $\mu(x) > 0$ for some $x \in X$. Then the conditional membership function of $\xi$ given $\xi \in X$ is
$$\mu(x|X) = \begin{cases} \dfrac{2\mu(x)}{\sup_{x \in X} \mu(x)} \wedge 1, & \text{if } \displaystyle\sup_{x \in X} \mu(x) < 1 \\[3mm] \dfrac{2\mu(x)}{2 - \sup_{x \in X^c} \mu(x)} \wedge 1, & \text{if } \displaystyle\sup_{x \in X} \mu(x) = 1 \end{cases} \tag{2.100}$$
for $x \in X$. Please note that $\mu(x|X) \equiv 0$ if $x \notin X$.
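The following sketch (assumptions mine: a triangular membership function and an interval conditioning set, with the suprema in (2.100) taken on a numerical grid) evaluates the conditional membership function:

```python
# Conditional membership mu(x | xi in [lo, hi]) by equation (2.100)
# for a triangular fuzzy variable (a, b, c); suprema taken on a grid.

import numpy as np

def mu_tri(x, a, b, c):
    """Triangular membership function (a, b, c)."""
    x = np.asarray(x, dtype=float)
    return np.minimum(np.clip((x - a) / (b - a), 0, 1),
                      np.clip((c - x) / (c - b), 0, 1))

def mu_cond(x, a, b, c, lo, hi, n=100001):
    grid = np.linspace(a, c, n)
    inside = (grid >= lo) & (grid <= hi)
    sup_in = mu_tri(grid[inside], a, b, c).max()
    if sup_in < 1.0:
        val = 2 * mu_tri(x, a, b, c) / sup_in
    else:
        sup_out = mu_tri(grid[~inside], a, b, c).max() if (~inside).any() else 0.0
        val = 2 * mu_tri(x, a, b, c) / (2 - sup_out)
    # zero outside the conditioning set, capped at 1 inside it
    x = np.asarray(x, dtype=float)
    return np.where((x >= lo) & (x <= hi), np.minimum(val, 1.0), 0.0)

print(mu_cond(np.array([1.5, 2.5]), 0, 2, 4, lo=1, hi=3))
```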


[Figure 2.6: Conditional Membership Function μ(x|X)]


Example 2.73: Let and be two fuzzy variables with joint membership function (x, y), and Y a set of real numbers. Then the conditional
membership function of given Y is

2 sup (x, y)

yY

1,
if sup (x, y) < 1

sup
(x, y)

x<,yY

x<,yY
(x|Y ) =
(2.101)

2 sup (x, y)

yY

sup (x, y) = 1

2 sup (x, y) 1, if x<,yY

x<,yY c

provided that (x, y) > 0 for some x < and y Y . Especially, the
conditional membership function of given = y is

2(x, y)

1,
if sup (x, y) < 1

sup
(x, y)
x<

x<
(x|y) =

2(x, y)

1, if sup (x, y) = 1

2 sup (x, z)
x<
x<,z6=y

118

Chapter 2 - Credibility Theory

provided that (x, y) > 0 for some x <.


Definition 2.34 (Liu [132]) The conditional credibility distribution $\Phi: \Re \to [0, 1]$ of a fuzzy variable $\xi$ given B is defined by
$$\Phi(x|B) = \mathrm{Cr}\{\xi \le x \mid B\} \tag{2.102}$$
provided that $\mathrm{Cr}\{B\} > 0$.

Example 2.74: Let $\xi$ and $\eta$ be fuzzy variables. Then the conditional credibility distribution of $\xi$ given $\eta = y$ is
$$\Phi(x \mid \eta = y) = \begin{cases} \dfrac{\mathrm{Cr}\{\xi \le x, \eta = y\}}{\mathrm{Cr}\{\eta = y\}}, & \text{if } \dfrac{\mathrm{Cr}\{\xi \le x, \eta = y\}}{\mathrm{Cr}\{\eta = y\}} < 0.5 \\[2mm] 1 - \dfrac{\mathrm{Cr}\{\xi > x, \eta = y\}}{\mathrm{Cr}\{\eta = y\}}, & \text{if } \dfrac{\mathrm{Cr}\{\xi > x, \eta = y\}}{\mathrm{Cr}\{\eta = y\}} < 0.5 \\[2mm] 0.5, & \text{otherwise} \end{cases}$$
provided that $\mathrm{Cr}\{\eta = y\} > 0$.

Definition 2.35 (Liu [132]) The conditional credibility density function $\phi$ of a fuzzy variable $\xi$ given B is a nonnegative function such that
$$\Phi(x|B) = \int_{-\infty}^{x} \phi(y|B)\,dy, \qquad x \in \Re, \tag{2.103}$$
$$\int_{-\infty}^{+\infty} \phi(y|B)\,dy = 1, \tag{2.104}$$
where $\Phi(x|B)$ is the conditional credibility distribution of $\xi$ given B.

Definition 2.36 (Liu [132]) Let $\xi$ be a fuzzy variable. Then the conditional expected value of $\xi$ given B is defined by
$$E[\xi|B] = \int_0^{+\infty} \mathrm{Cr}\{\xi \ge r \mid B\}\,dr - \int_{-\infty}^{0} \mathrm{Cr}\{\xi \le r \mid B\}\,dr \tag{2.105}$$
provided that at least one of the two integrals is finite.

Following conditional credibility and conditional expected value, we also have conditional variance, conditional moments, conditional critical values, and conditional entropy, as well as conditional convergence.

Definition 2.37 Let $\xi$ be a nonnegative fuzzy variable representing lifetime. Then the hazard rate (or failure rate) is
$$h(x) = \lim_{\Delta \downarrow 0} \mathrm{Cr}\{\xi \le x + \Delta \mid \xi > x\}. \tag{2.106}$$
The hazard rate tells us the credibility of a failure just after time x, given that the item is functioning at time x.
Example 2.75: Let $\xi$ be an exponentially distributed fuzzy variable. Then its hazard rate $h(x) \equiv 0.5$. In fact, the hazard rate is always 0.5 if the membership function is positive and decreasing.

Example 2.76: Let $\xi$ be a triangular fuzzy variable $(a, b, c)$ with $a \ge 0$. Then its hazard rate is
$$h(x) = \begin{cases} 0, & \text{if } x \le a \\[1mm] \dfrac{x-a}{b-a}, & \text{if } a \le x \le (a+b)/2 \\[1mm] 0.5, & \text{if } (a+b)/2 \le x < c \\[1mm] 0, & \text{if } x \ge c. \end{cases}$$
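For concreteness, the hazard rate of Example 2.76 transcribes directly into code (a sketch of mine, using a hypothetical triangular variable $(1, 3, 5)$):

```python
# Hazard rate h(x) of a triangular fuzzy variable (a, b, c), a >= 0,
# transcribed from Example 2.76.

def hazard_tri(x, a, b, c):
    if x <= a:
        return 0.0
    if x <= (a + b) / 2:
        return (x - a) / (b - a)   # rises linearly from 0 to 0.5
    if x < c:
        return 0.5                 # flat at the maximum-uncertainty level
    return 0.0                     # failure is certain beyond c

# For (a, b, c) = (1, 3, 5): h rises to 0.5 at x = 2, stays flat, drops at 5.
print([hazard_tri(x, 1, 3, 5) for x in (1, 1.5, 2, 4, 5)])
```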

2.16 Fuzzy Process

Definition 2.38 Let T be an index set and let $(\Theta, \mathcal{P}, \mathrm{Cr})$ be a credibility space. A fuzzy process is a function from $T \times (\Theta, \mathcal{P}, \mathrm{Cr})$ to the set of real numbers.

That is, a fuzzy process $X_t(\theta)$ is a function of two variables such that $X_t(\theta)$ is a fuzzy variable for each t. For each fixed $\theta^*$, the function $X_t(\theta^*)$ is called a sample path of the fuzzy process. A fuzzy process $X_t(\theta)$ is said to be sample-continuous if the sample path is continuous for almost all $\theta$.

Definition 2.39 A fuzzy process $X_t$ is said to have independent increments if
$$X_{t_1} - X_{t_0},\ X_{t_2} - X_{t_1},\ \cdots,\ X_{t_k} - X_{t_{k-1}} \tag{2.107}$$
are independent fuzzy variables for any times $t_0 < t_1 < \cdots < t_k$. A fuzzy process $X_t$ is said to have stationary increments if, for any given $t > 0$, the increments $X_{s+t} - X_s$ are identically distributed fuzzy variables for all $s > 0$.

Fuzzy Renewal Process

Definition 2.40 (Zhao and Liu [251]) Let $\xi_1, \xi_2, \cdots$ be iid positive fuzzy variables. Define $S_0 = 0$ and $S_n = \xi_1 + \xi_2 + \cdots + \xi_n$ for $n \ge 1$. Then the fuzzy process
$$N_t = \max_{n \ge 0}\left\{n \mid S_n \le t\right\} \tag{2.108}$$
is called a fuzzy renewal process.

[Figure 2.7: A Sample Path of a Fuzzy Renewal Process]


Let $\xi_1, \xi_2, \cdots$ denote the interarrival times of successive events. Then $S_n$ can be regarded as the waiting time until the occurrence of the nth event, and $N_t$ is the number of renewals in $(0, t]$. Each sample path of $N_t$ is a right-continuous, increasing step function taking only nonnegative integer values. Furthermore, the size of each jump of $N_t$ is always 1; in other words, $N_t$ has at most one renewal at each time. In particular, $N_t$ does not jump at time 0. Since $N_t \ge n$ if and only if $S_n \le t$, we have
$$\mathrm{Cr}\{N_t \ge n\} = \mathrm{Cr}\{S_n \le t\} = \mathrm{Cr}\left\{\xi_1 \le \frac{t}{n}\right\}. \tag{2.109}$$

Theorem 2.65 Let $N_t$ be a fuzzy renewal process with interarrival times $\xi_1, \xi_2, \cdots$ Then we have
$$E[N_t] = \sum_{n=1}^{\infty} \mathrm{Cr}\{S_n \le t\} = \sum_{n=1}^{\infty} \mathrm{Cr}\left\{\xi_1 \le \frac{t}{n}\right\}. \tag{2.110}$$

Proof: Since $N_t$ takes only nonnegative integer values, we have
$$E[N_t] = \int_0^{\infty} \mathrm{Cr}\{N_t \ge r\}\,dr = \sum_{n=1}^{\infty} \int_{n-1}^{n} \mathrm{Cr}\{N_t \ge r\}\,dr = \sum_{n=1}^{\infty} \mathrm{Cr}\{N_t \ge n\} = \sum_{n=1}^{\infty} \mathrm{Cr}\{S_n \le t\} = \sum_{n=1}^{\infty} \mathrm{Cr}\left\{\xi_1 \le \frac{t}{n}\right\}.$$
The theorem is proved.
Example 2.77: A renewal process $N_t$ is called an equipossible renewal process if $\xi_1, \xi_2, \cdots$ are iid equipossible fuzzy variables $(a, b)$ with $a > 0$. Then for each nonnegative integer n, we have
$$\mathrm{Cr}\{N_t \ge n\} = \begin{cases} 0, & \text{if } t < na \\ 0.5, & \text{if } na \le t < nb \\ 1, & \text{if } t \ge nb, \end{cases} \tag{2.111}$$
$$E[N_t] = \frac{1}{2}\left(\left\lfloor \frac{t}{a} \right\rfloor + \left\lfloor \frac{t}{b} \right\rfloor\right) \tag{2.112}$$
where $\lfloor x \rfloor$ represents the maximal integer less than or equal to x.

Example 2.78: A renewal process $N_t$ is called a triangular renewal process if $\xi_1, \xi_2, \cdots$ are iid triangular fuzzy variables $(a, b, c)$ with $a > 0$. Then for each nonnegative integer n, we have
$$\mathrm{Cr}\{N_t \ge n\} = \begin{cases} 0, & \text{if } t \le na \\[1mm] \dfrac{t - na}{2n(b-a)}, & \text{if } na \le t \le nb \\[1mm] \dfrac{nc - 2nb + t}{2n(c-b)}, & \text{if } nb \le t \le nc \\[1mm] 1, & \text{if } t \ge nc. \end{cases} \tag{2.113}$$
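Theorem 2.65 makes $E[N_t]$ directly computable, since the series (2.110) has only finitely many nonzero terms (those with $na < t$). The following sketch (mine, not from the book) evaluates it for the triangular renewal process of Example 2.78:

```python
# E[N_t] = sum over n of Cr{N_t >= n}, using (2.113) for iid triangular
# interarrival times (a, b, c); terms vanish once n*a >= t.

import math

def cr_nt_ge(n, t, a, b, c):
    """Cr{N_t >= n} from equation (2.113)."""
    if t <= n * a:
        return 0.0
    if t <= n * b:
        return (t - n * a) / (2 * n * (b - a))
    if t <= n * c:
        return (n * c - 2 * n * b + t) / (2 * n * (c - b))
    return 1.0

def expected_renewals(t, a, b, c):
    """E[N_t] by Theorem 2.65, truncated at the last nonzero term."""
    n_max = math.ceil(t / a)
    return sum(cr_nt_ge(n, t, a, b, c) for n in range(1, n_max + 1))

print(expected_renewals(10.0, a=1.0, b=2.0, c=3.0))
```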

Theorem 2.66 (Zhao and Liu [251], Renewal Theorem) Let $N_t$ be a fuzzy renewal process with interarrival times $\xi_1, \xi_2, \cdots$ Then
$$\lim_{t\to\infty} \frac{E[N_t]}{t} = E\left[\frac{1}{\xi_1}\right]. \tag{2.114}$$

Proof: Since $\xi_1$ is a positive fuzzy variable and $N_t$ is a nonnegative fuzzy variable, we have
$$\frac{E[N_t]}{t} = \int_0^{\infty} \mathrm{Cr}\left\{\frac{N_t}{t} \ge r\right\} dr, \qquad E\left[\frac{1}{\xi_1}\right] = \int_0^{\infty} \mathrm{Cr}\left\{\frac{1}{\xi_1} \ge r\right\} dr.$$
It is easy to verify that
$$\mathrm{Cr}\left\{\frac{N_t}{t} \ge r\right\} \le \mathrm{Cr}\left\{\frac{1}{\xi_1} \ge r\right\} \qquad \text{and} \qquad \lim_{t\to\infty} \mathrm{Cr}\left\{\frac{N_t}{t} \ge r\right\} = \mathrm{Cr}\left\{\frac{1}{\xi_1} \ge r\right\}$$
for any real numbers $t > 0$ and $r > 0$. It follows from the Lebesgue dominated convergence theorem that
$$\lim_{t\to\infty} \int_0^{\infty} \mathrm{Cr}\left\{\frac{N_t}{t} \ge r\right\} dr = \int_0^{\infty} \mathrm{Cr}\left\{\frac{1}{\xi_1} \ge r\right\} dr.$$
Hence (2.114) holds.

C Process

This subsection introduces a fuzzy counterpart of Brownian motion and provides some basic mathematical properties.

Definition 2.41 (Liu [133]) A fuzzy process $C_t$ is said to be a C process if
(i) $C_0 = 0$,
(ii) $C_t$ has stationary and independent increments,
(iii) every increment $C_{s+t} - C_s$ is a normally distributed fuzzy variable with expected value $et$ and variance $\sigma^2 t^2$.

The parameters $e$ and $\sigma$ are called the drift and diffusion coefficients, respectively. The C process is said to be standard if $e = 0$ and $\sigma = 1$. Any C process may be represented by $et + \sigma C_t$, where $C_t$ is a standard C process.
[Figure 2.8: A Sample Path of a Standard C Process]


Perhaps the reader would like to know why the increment is a normally distributed fuzzy variable. The reason is that a normally distributed fuzzy variable has maximum entropy when its expected value and variance are given, just like a normally distributed random variable.

Theorem 2.67 (Liu [133], Existence Theorem) There is a C process. Furthermore, each version of a C process is sample-continuous.

Proof: Without loss of generality, we only prove that there is a standard C process on the range $t \in [0, 1]$. Let
$$\left\{\xi(r) \mid r \text{ represents rational numbers in } [0, 1]\right\}$$
be a countable sequence of independently and normally distributed fuzzy variables with expected value zero and variance one. For each integer n, we define a fuzzy process
$$X_n(t) = \begin{cases} \dfrac{1}{n}\displaystyle\sum_{i=1}^{k} \xi\!\left(\frac{i}{n}\right), & \text{if } t = \dfrac{k}{n}\ (k = 0, 1, \cdots, n) \\[3mm] \text{linear}, & \text{otherwise.} \end{cases}$$
Since the limit
$$\lim_{n\to\infty} X_n(t)$$
exists almost surely, we may verify that the limit meets the conditions of a C process. Hence there is a standard C process.

Remark 2.9: Suppose that $C_t$ is a standard C process. It has been proved that
$$X_1(t) = -C_t, \tag{2.115}$$
$$X_2(t) = aC_{t/a}, \tag{2.116}$$
$$X_3(t) = C_{t+s} - C_s \tag{2.117}$$
are each a version of a standard C process.

Dai [23] proved that almost all C paths are Lipschitz continuous and have finite variation. Thus almost all C paths are differentiable almost everywhere and have zero squared variation. However, for any number $\ell \in (0, 0.5)$, there is a sample $\theta$ with $\mathrm{Cr}\{\theta\} = \ell$ such that $C_t(\theta)$ is not differentiable on a dense set of t.

As a byproduct, the C path is an example of a Lipschitz continuous function whose nondifferentiable points are dense, while the Brownian path is an example of a continuous function that is nowhere differentiable. In other words, the C path is the worst Lipschitz continuous function, and the Brownian path is the worst continuous function, in the sense of differentiability.
Example 2.79: Let $C_t$ be a C process with drift 0. Then for any level $x > 0$ and any time $t > 0$, Dai [23] proved the following reflection principle:
$$\mathrm{Cr}\left\{\max_{0 \le s \le t} C_s \ge x\right\} = \mathrm{Cr}\{C_t \ge x\}. \tag{2.118}$$
In addition, for any level $x < 0$ and any time $t > 0$, we have
$$\mathrm{Cr}\left\{\min_{0 \le s \le t} C_s \le x\right\} = \mathrm{Cr}\{C_t \le x\}. \tag{2.119}$$

Example 2.80: Let $C_t$ be a C process with drift $e > 0$ and diffusion coefficient $\sigma$. Then the first passage time $\tau$ at which the C process reaches the barrier $x > 0$ has the membership function
$$\mu(t) = 2\left(1 + \exp\left(\frac{\pi|x - et|}{\sqrt{6}\,\sigma t}\right)\right)^{-1}, \qquad t > 0, \tag{2.120}$$
whose expected value is $E[\tau] = +\infty$ (Dai [23]).

Definition 2.42 (Liu [133]) Let $C_t$ be a standard C process. Then $et + \sigma C_t$ is a C process, and the fuzzy process
$$G_t = \exp(et + \sigma C_t) \tag{2.121}$$
is called a geometric C process.

The geometric C process $G_t$ is employed to model stock prices. For each $t > 0$, Li [108] proved that $G_t$ has the lognormal membership function
$$\mu(z) = 2\left(1 + \exp\left(\frac{\pi|\ln z - et|}{\sqrt{6}\,\sigma t}\right)\right)^{-1}, \qquad z \ge 0, \tag{2.122}$$
whose expected value is
$$E[G_t] = \exp(et)\,\sqrt{6}\,\sigma t\,\csc\left(\sqrt{6}\,\sigma t\right), \qquad t < \pi/(\sqrt{6}\,\sigma). \tag{2.123}$$
In addition, the first passage time at which a geometric C process $G_t$ reaches the barrier $x > 1$ is just the time at which the C process with drift $e$ and diffusion $\sigma$ reaches $\ln x$.
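Both (2.122) and (2.123) are straightforward to evaluate numerically. The following sketch (mine; the drift and diffusion values are hypothetical) does so, using $\csc(x) = 1/\sin(x)$:

```python
# Lognormal membership function (2.122) and expected value (2.123) of a
# geometric C process G_t = exp(e*t + s*C_t).

import math

def mu_lognormal(z, t, e, s):
    """Membership function of G_t at level z > 0, by (2.122)."""
    if z <= 0:
        return 0.0
    return 2.0 / (1.0 + math.exp(math.pi * abs(math.log(z) - e * t)
                                 / (math.sqrt(6) * s * t)))

def expected_Gt(t, e, s):
    """E[G_t] by (2.123); valid only for t < pi / (sqrt(6)*s)."""
    x = math.sqrt(6) * s * t
    assert x < math.pi, "formula requires t < pi / (sqrt(6)*sigma)"
    return math.exp(e * t) * x / math.sin(x)   # csc(x) = 1/sin(x)

print(expected_Gt(0.5, e=0.1, s=0.2))
print(mu_lognormal(1.2, 0.5, 0.1, 0.2))
```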
A Basic Stock Model

It was traditionally assumed that stock prices follow geometric Brownian motion, and stochastic financial mathematics was founded on this assumption. Liu [133] presented an alternative assumption: that stock prices follow a geometric C process. Based on this assumption, we obtain a basic stock model for a fuzzy financial market in which the bond price $X_t$ and the stock price $Y_t$ follow
$$\begin{cases} X_t = X_0 \exp(rt) \\ Y_t = Y_0 \exp(et + \sigma C_t) \end{cases} \tag{2.124}$$
where $r$ is the riskless interest rate, $e$ is the stock drift, $\sigma$ is the stock diffusion, and $C_t$ is a standard C process. This is just a fuzzy counterpart of the Black-Scholes stock model [8]. For further developments of fuzzy stock models, the interested reader may consult Gao [50], Peng [191], Qin and Li [194], and Zhu [267].

2.17 Fuzzy Calculus

Let $C_t$ be a standard C process, and $dt$ an infinitesimal time interval. Then
$$dC_t = C_{t+dt} - C_t$$
is a fuzzy process such that, for each t, the increment $dC_t$ is a normally distributed fuzzy variable with
$$E[dC_t] = 0, \quad V[dC_t] = dt^2, \quad E[dC_t^2] = dt^2, \quad V[dC_t^2] \le 7dt^4.$$

Definition 2.43 (Liu [133]) Let $X_t$ be a fuzzy process and let $C_t$ be a standard C process. For any partition of the closed interval $[a, b]$ with $a = t_1 < t_2 < \cdots < t_{k+1} = b$, the mesh is written as
$$\Delta = \max_{1 \le i \le k} |t_{i+1} - t_i|.$$
Then the fuzzy integral of $X_t$ with respect to $C_t$ is
$$\int_a^b X_t\,dC_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right) \tag{2.125}$$
provided that the limit exists almost surely and is a fuzzy variable.
Example 2.81: Let $C_t$ be a standard C process. Then for any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, we have
$$\int_0^s dC_t = \lim_{\Delta\to 0} \sum_{i=1}^{k} \left(C_{t_{i+1}} - C_{t_i}\right) \equiv C_s - C_0 = C_s.$$

Example 2.82: Let $C_t$ be a standard C process. Then for any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, we have
$$sC_s = \sum_{i=1}^{k} \left(t_{i+1}C_{t_{i+1}} - t_i C_{t_i}\right) = \sum_{i=1}^{k} t_i\left(C_{t_{i+1}} - C_{t_i}\right) + \sum_{i=1}^{k} C_{t_{i+1}}\left(t_{i+1} - t_i\right) \to \int_0^s t\,dC_t + \int_0^s C_t\,dt$$
as $\Delta \to 0$. It follows that
$$\int_0^s t\,dC_t = sC_s - \int_0^s C_t\,dt.$$

Example 2.83: Let $C_t$ be a standard C process. Then for any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, we have
$$C_s^2 = \sum_{i=1}^{k} \left(C_{t_{i+1}}^2 - C_{t_i}^2\right) = \sum_{i=1}^{k} \left(C_{t_{i+1}} - C_{t_i}\right)^2 + 2\sum_{i=1}^{k} C_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right) \to 0 + 2\int_0^s C_t\,dC_t$$
as $\Delta \to 0$. That is,
$$\int_0^s C_t\,dC_t = \frac{1}{2}C_s^2.$$
This equation shows that the fuzzy integral does not behave like the Ito integral. In fact, the fuzzy integral behaves like an ordinary integral.
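Because almost all C paths are Lipschitz continuous, the fuzzy integral along a fixed sample path is an ordinary Riemann-Stieltjes integral. The following numerical sketch (mine; the stand-in path below is a hypothetical smooth function, not an actual C path) checks Example 2.83 pathwise:

```python
# Left-endpoint Riemann-Stieltjes sum of c(t) dc(t) over [0, s],
# checked against c(s)^2 / 2 as in Example 2.83 (here c(0) = 0).

import math

def c(t):
    """A stand-in Lipschitz sample path (hypothetical, not a real C path)."""
    return 0.3 * t + 0.2 * math.sin(5 * t)

def left_sum(f, path, s, k):
    """Sum of f(t_i) * (path(t_{i+1}) - path(t_i)) over a uniform partition."""
    ts = [s * i / k for i in range(k + 1)]
    return sum(f(ts[i]) * (path(ts[i + 1]) - path(ts[i])) for i in range(k))

s = 2.0
approx = left_sum(c, c, s, k=200000)
print(approx, c(s) ** 2 / 2)   # the two values agree to several digits
```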
Theorem 2.68 (Liu [133]) Let $C_t$ be a standard C process, and let $h(t, c)$ be a continuously differentiable function. Define $X_t = h(t, C_t)$. Then we have the following chain rule:
$$dX_t = \frac{\partial h}{\partial t}(t, C_t)\,dt + \frac{\partial h}{\partial c}(t, C_t)\,dC_t. \tag{2.126}$$

Proof: Since the function h is continuously differentiable, by using a Taylor series expansion, the infinitesimal increment of $X_t$ has the first-order approximation
$$\Delta X_t = \frac{\partial h}{\partial t}(t, C_t)\Delta t + \frac{\partial h}{\partial c}(t, C_t)\Delta C_t.$$
Hence we obtain the chain rule, because it yields
$$X_s = X_0 + \int_0^s \frac{\partial h}{\partial t}(t, C_t)\,dt + \int_0^s \frac{\partial h}{\partial c}(t, C_t)\,dC_t$$
for any $s \ge 0$.

Remark 2.10: The infinitesimal increment $dC_t$ in (2.126) may be replaced with the derived C process
$$dY_t = u_t\,dt + v_t\,dC_t \tag{2.127}$$
where $u_t$ and $v_t$ are absolutely integrable fuzzy processes, thus producing
$$dh(t, Y_t) = \frac{\partial h}{\partial t}(t, Y_t)\,dt + \frac{\partial h}{\partial c}(t, Y_t)\,dY_t. \tag{2.128}$$

Remark 2.11: Assume that $C_{1t}, C_{2t}, \cdots, C_{mt}$ are standard C processes, and $h(t, c_1, c_2, \cdots, c_m)$ is a continuously differentiable function. Define
$$X_t = h(t, C_{1t}, C_{2t}, \cdots, C_{mt}).$$
You [243] proved the following chain rule:
$$dX_t = \frac{\partial h}{\partial t}\,dt + \sum_{i=1}^{m} \frac{\partial h}{\partial c_i}\,dC_{it}. \tag{2.129}$$

Example 2.84: Applying the chain rule, we obtain the formula
$$d(tC_t) = C_t\,dt + t\,dC_t.$$

Hence we have
$$sC_s = \int_0^s d(tC_t) = \int_0^s C_t\,dt + \int_0^s t\,dC_t.$$
That is,
$$\int_0^s t\,dC_t = sC_s - \int_0^s C_t\,dt.$$

Example 2.85: Let $C_t$ be a standard C process. By using the chain rule
$$d(C_t^2) = 2C_t\,dC_t,$$
we get
$$C_s^2 = \int_0^s d(C_t^2) = 2\int_0^s C_t\,dC_t.$$
It follows that
$$\int_0^s C_t\,dC_t = \frac{1}{2}C_s^2.$$

Example 2.86: Let $C_t$ be a standard C process. Then we have the chain rule
$$d(C_t^3) = 3C_t^2\,dC_t.$$
Thus we obtain
$$C_s^3 = \int_0^s d(C_t^3) = 3\int_0^s C_t^2\,dC_t.$$
That is,
$$\int_0^s C_t^2\,dC_t = \frac{1}{3}C_s^3.$$

Theorem 2.69 (Liu [133], Integration by Parts) Suppose that $C_t$ is a standard C process and $F(t)$ is an absolutely continuous function. Then
$$\int_0^s F(t)\,dC_t = F(s)C_s - \int_0^s C_t\,dF(t). \tag{2.130}$$

Proof: By defining $h(t, C_t) = F(t)C_t$ and using the chain rule, we get
$$d(F(t)C_t) = C_t\,dF(t) + F(t)\,dC_t.$$
Thus
$$F(s)C_s = \int_0^s d(F(t)C_t) = \int_0^s C_t\,dF(t) + \int_0^s F(t)\,dC_t,$$
which is just (2.130).

2.18 Fuzzy Differential Equation

A fuzzy differential equation is a type of differential equation driven by a C process.

Definition 2.44 (Liu [133]) Suppose $C_t$ is a standard C process, and f and g are some given functions. Then
$$dX_t = f(t, X_t)\,dt + g(t, X_t)\,dC_t \tag{2.131}$$
is called a fuzzy differential equation. A solution is a fuzzy process $X_t$ that satisfies (2.131) identically in t.

Remark 2.12: Note that there is no precise definition for the terms $dX_t$, $dt$ and $dC_t$ in the fuzzy differential equation (2.131). The mathematically meaningful form is the fuzzy integral equation
$$X_s = X_0 + \int_0^s f(t, X_t)\,dt + \int_0^s g(t, X_t)\,dC_t. \tag{2.132}$$
However, the differential form is more convenient for us. This is the main reason why we accept the differential form.

Example 2.87: Let $C_t$ be a standard C process. Then the fuzzy differential equation
$$dX_t = a\,dt + b\,dC_t$$
has the solution
$$X_t = at + bC_t,$$
which is just a C process with drift coefficient a and diffusion coefficient b.

Example 2.88: Let $C_t$ be a standard C process. Then the fuzzy differential equation
$$dX_t = aX_t\,dt + bX_t\,dC_t$$
has the solution
$$X_t = \exp(at + bC_t),$$
which is just a geometric C process.
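Since solutions are verified through the chain rule (2.126), Example 2.88 can be checked symbolically. The following sketch (mine) confirms that $h(t, c) = \exp(at + bc)$ has $\partial h/\partial t = a\,h$ and $\partial h/\partial c = b\,h$, which is exactly the asserted equation:

```python
# Symbolic verification of Example 2.88 via the chain rule (2.126):
# for X_t = h(t, C_t) = exp(a*t + b*c), dX_t = a*X_t dt + b*X_t dC_t.

import sympy as sp

t, c, a, b = sp.symbols('t c a b')
X = sp.exp(a * t + b * c)          # c stands for C_t

dh_dt = sp.diff(X, t)              # coefficient of dt in the chain rule
dh_dc = sp.diff(X, c)              # coefficient of dC_t in the chain rule

print(sp.simplify(dh_dt - a * X))  # 0: dt-coefficient equals a*X_t
print(sp.simplify(dh_dc - b * X))  # 0: dC_t-coefficient equals b*X_t
```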
Example 2.89: Let $C_t$ be a standard C process. Then the fuzzy differential equations
$$\begin{cases} dX_t = -Y_t\,dC_t \\ dY_t = X_t\,dC_t \end{cases}$$
have the solution
$$(X_t, Y_t) = (\cos C_t, \sin C_t),$$
which is called a C process on the unit circle since $X_t^2 + Y_t^2 \equiv 1$.

Chapter 3

Chance Theory

Fuzziness and randomness are two basic types of uncertainty. In many cases, fuzziness and randomness appear simultaneously in a system. In order to describe these phenomena, a fuzzy random variable was introduced by Kwakernaak [87][88] as a random element taking fuzzy variable values. In addition, a random fuzzy variable was proposed by Liu [124] as a fuzzy element taking random variable values. For example, it might be known that the lifetime of a modern engine is an exponentially distributed random variable with an unknown parameter. If the parameter is provided as a fuzzy variable, then the lifetime is a random fuzzy variable.

More generally, a hybrid variable was introduced by Liu [130] as a tool to describe quantities with both fuzziness and randomness. Fuzzy random variables and random fuzzy variables are instances of hybrid variables. In order to measure hybrid events, a concept of chance measure was introduced by Li and Liu [103]. Chance theory is a hybrid of probability theory and credibility theory. Perhaps the reader would like to know what axioms we should assume for chance theory. In fact, chance theory is based on the three axioms of probability and the four axioms of credibility.

The emphasis in this chapter is mainly on chance space, hybrid variable, chance measure, chance distribution, independence, identical distribution, expected value, variance, moments, critical values, entropy, distance, convergence almost surely, convergence in chance, convergence in mean, convergence in distribution, conditional chance, hybrid process, hybrid calculus, and hybrid differential equation.

3.1 Chance Space

Chance theory begins with the concept of chance space, which inherits the mathematical foundations of both probability theory and credibility theory.

Definition 3.1 (Liu [130]) Suppose that $(\Theta, \mathcal{P}, \mathrm{Cr})$ is a credibility space and $(\Omega, \mathcal{A}, \Pr)$ is a probability space. The product $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ is called a chance space.

The universal set $\Theta \times \Omega$ is clearly the set of all ordered pairs of the form $(\theta, \omega)$, where $\theta \in \Theta$ and $\omega \in \Omega$. What is the product $\sigma$-algebra $\mathcal{P} \times \mathcal{A}$? What is the product measure $\mathrm{Cr} \times \Pr$? Let us discuss these two basic problems.

What is the product $\sigma$-algebra $\mathcal{P} \times \mathcal{A}$?

Generally speaking, it is not true that all subsets of $\Theta \times \Omega$ are measurable. Let $\Lambda$ be a subset of $\Theta \times \Omega$. Write
$$\Lambda(\theta) = \{\omega \in \Omega \mid (\theta, \omega) \in \Lambda\}. \tag{3.1}$$
It is clear that $\Lambda(\theta)$ is a subset of $\Omega$. If $\Lambda(\theta) \in \mathcal{A}$ holds for each $\theta \in \Theta$, then $\Lambda$ may be regarded as a measurable set.
Definition 3.2 (Liu [132]) Let $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ be a chance space. A subset $\Lambda \subset \Theta \times \Omega$ is called an event if $\Lambda(\theta) \in \mathcal{A}$ for each $\theta \in \Theta$.

Example 3.1: The empty set $\emptyset$ and the universal set $\Theta \times \Omega$ are clearly events.

Example 3.2: Let $X \in \mathcal{P}$ and $Y \in \mathcal{A}$. Then $X \times Y$ is a subset of $\Theta \times \Omega$, and
$$(X \times Y)(\theta) = \begin{cases} Y, & \text{if } \theta \in X \\ \emptyset, & \text{if } \theta \in X^c. \end{cases}$$
Since the set $(X \times Y)(\theta) \in \mathcal{A}$ for each $\theta$, the rectangle $X \times Y$ is an event.


Theorem 3.1 (Liu [132]) Let $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ be a chance space. The class of all events is a $\sigma$-algebra over $\Theta \times \Omega$, denoted by $\mathcal{P} \times \mathcal{A}$.

Proof: First, it is obvious that $\Theta \times \Omega \in \mathcal{P} \times \mathcal{A}$. For any event $\Lambda$, we always have
$$\Lambda(\theta) \in \mathcal{A}, \qquad \forall \theta \in \Theta.$$
Thus for each $\theta \in \Theta$, the set
$$\Lambda^c(\theta) = \{\omega \in \Omega \mid (\theta, \omega) \in \Lambda^c\} = \left(\Lambda(\theta)\right)^c \in \mathcal{A},$$
which implies that $\Lambda^c \in \mathcal{P} \times \mathcal{A}$. Finally, let $\Lambda_1, \Lambda_2, \cdots$ be events. Then for each $\theta \in \Theta$, we have
$$\left(\bigcup_{i=1}^{\infty} \Lambda_i\right)(\theta) = \left\{\omega \in \Omega \,\middle|\, (\theta, \omega) \in \bigcup_{i=1}^{\infty} \Lambda_i\right\} = \bigcup_{i=1}^{\infty} \{\omega \in \Omega \mid (\theta, \omega) \in \Lambda_i\} \in \mathcal{A}.$$
That is, the countable union $\cup_i \Lambda_i \in \mathcal{P} \times \mathcal{A}$. Hence $\mathcal{P} \times \mathcal{A}$ is a $\sigma$-algebra.

Example 3.3: When $\Theta \times \Omega$ is countable, $\mathcal{P} \times \mathcal{A}$ is usually the power set.

Example 3.4: When $\Theta \times \Omega$ is uncountable, for example $\Theta \times \Omega = \Re^2$, $\mathcal{P} \times \mathcal{A}$ is usually a $\sigma$-algebra between the Borel algebra and the power set of $\Re^2$. Let X be a nonempty Borel set and let Y be a non-Borel set of real numbers. It follows from $X \times Y \notin \mathcal{P} \times \mathcal{A}$ that $\mathcal{P} \times \mathcal{A}$ is smaller than the power set. It is also clear that $Y \times X \in \mathcal{P} \times \mathcal{A}$ but $Y \times X$ is not a Borel set. Hence $\mathcal{P} \times \mathcal{A}$ is larger than the Borel algebra.
What is the product measure $\mathrm{Cr} \times \Pr$?

Product probability is a probability measure, and product credibility is a credibility measure. What is the product measure $\mathrm{Cr} \times \Pr$? We will call it the chance measure and define it as follows.

Definition 3.3 (Li and Liu [103]) Let $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ be a chance space. Then the chance measure of an event $\Lambda$ is defined as
$$\mathrm{Ch}\{\Lambda\} = \begin{cases} \displaystyle\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right), & \text{if } \displaystyle\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) < 0.5 \\[3mm] 1 - \displaystyle\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right), & \text{if } \displaystyle\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) \ge 0.5. \end{cases} \tag{3.2}$$

Example 3.5: Take a credibility space $(\Theta, \mathcal{P}, \mathrm{Cr})$ to be $\{\theta_1, \theta_2\}$ with $\mathrm{Cr}\{\theta_1\} = 0.6$ and $\mathrm{Cr}\{\theta_2\} = 0.4$, and take a probability space $(\Omega, \mathcal{A}, \Pr)$ to be $\{\omega_1, \omega_2\}$ with $\Pr\{\omega_1\} = 0.7$ and $\Pr\{\omega_2\} = 0.3$. Then
$$\mathrm{Ch}\{(\theta_1, \omega_1)\} = 0.6, \qquad \mathrm{Ch}\{(\theta_2, \omega_2)\} = 0.3.$$
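Definition 3.3 is mechanical on finite spaces. The following sketch (mine, not library code) implements (3.2) and reproduces Example 3.5; the `cr` helper induces event credibility from the given singleton credibilities:

```python
# Chance measure Ch of an event on finite Theta x Omega, by equation (3.2).
# An event is a set of (theta, omega) pairs.

def cr(event, singles):
    """Credibility of a subset of Theta from singleton credibilities."""
    if not event:
        return 0.0
    s = max(singles[t] for t in event)
    if s < 0.5:
        return s
    comp = set(singles) - set(event)
    return 1.0 - (max(singles[t] for t in comp) if comp else 0.0)

def chance(event, cr_singles, pr_singles):
    def pr_section(theta, ev):
        """Pr of the theta-section of the event ev."""
        return sum(pr_singles[w] for w in pr_singles if (theta, w) in ev)
    sup = max(min(cr({t}, cr_singles), pr_section(t, event))
              for t in cr_singles)
    if sup < 0.5:
        return sup
    all_pairs = {(t, w) for t in cr_singles for w in pr_singles}
    comp = all_pairs - set(event)
    return 1.0 - max(min(cr({t}, cr_singles), pr_section(t, comp))
                     for t in cr_singles)

# Reproducing Example 3.5:
cr_s = {"t1": 0.6, "t2": 0.4}
pr_s = {"w1": 0.7, "w2": 0.3}
print(chance({("t1", "w1")}, cr_s, pr_s))   # 0.6
print(chance({("t2", "w2")}, cr_s, pr_s))   # 0.3
```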

Example 3.6: Take a credibility space $(\Theta, \mathcal{P}, \mathrm{Cr})$ to be $[0, 1]$ with $\mathrm{Cr}\{\theta\} \equiv 1/2$, and take a probability space $(\Omega, \mathcal{A}, \Pr)$ to be $[0, 1]$ with the Borel algebra and Lebesgue measure. Then for any real numbers $a, b \in [0, 1]$, we have
$$\mathrm{Ch}\{[0, a] \times [0, b]\} = \begin{cases} b, & \text{if } b < 0.5 \\ 0.5, & \text{if } 0.5 \le b < 1 \\ 1, & \text{if } a = b = 1. \end{cases}$$

Theorem 3.2 Let $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ be a chance space and Ch a chance measure. Then we have
$$\mathrm{Ch}\{\emptyset\} = 0, \tag{3.3}$$
$$\mathrm{Ch}\{\Theta \times \Omega\} = 1, \tag{3.4}$$
$$0 \le \mathrm{Ch}\{\Lambda\} \le 1 \tag{3.5}$$
for any event $\Lambda$.

Proof: It follows from the definition immediately.


Theorem 3.3 Let $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ be a chance space and Ch a chance measure. Then for any event $\Lambda$, we have
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) \vee \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right) \ge 0.5, \tag{3.6}$$
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) + \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right) \le 1, \tag{3.7}$$
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) \le \mathrm{Ch}\{\Lambda\} \le 1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right). \tag{3.8}$$

Proof: It follows from the basic properties of probability and credibility that
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) \vee \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right) \ge \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \left(\Pr\{\Lambda(\theta)\} \vee \Pr\{\Lambda^c(\theta)\}\right)\right) \ge \sup_{\theta} \mathrm{Cr}\{\theta\} \wedge 0.5 = 0.5$$
and
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) + \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right) = \sup_{\theta_1, \theta_2}\left(\mathrm{Cr}\{\theta_1\} \wedge \Pr\{\Lambda(\theta_1)\} + \mathrm{Cr}\{\theta_2\} \wedge \Pr\{\Lambda^c(\theta_2)\}\right)$$
$$\le \sup_{\theta_1 \ne \theta_2}\left(\mathrm{Cr}\{\theta_1\} + \mathrm{Cr}\{\theta_2\}\right) \vee \sup_{\theta}\left(\Pr\{\Lambda(\theta)\} + \Pr\{\Lambda^c(\theta)\}\right) \le 1 \vee 1 = 1.$$
The inequalities (3.8) follow immediately from the above inequalities and the definition of chance measure.
Theorem 3.4 (Li and Liu [103]) The chance measure is increasing; that is,
$$\mathrm{Ch}\{\Lambda_1\} \le \mathrm{Ch}\{\Lambda_2\} \tag{3.9}$$
for any events $\Lambda_1$ and $\Lambda_2$ with $\Lambda_1 \subset \Lambda_2$.

Proof: Since $\Lambda_1(\theta) \subset \Lambda_2(\theta)$ and $\Lambda_2^c(\theta) \subset \Lambda_1^c(\theta)$ for each $\theta \in \Theta$, we have
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\}\right) \le \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}\right),$$
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2^c(\theta)\}\right) \le \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1^c(\theta)\}\right).$$
The argument breaks into three cases.

Case 1: $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}) < 0.5$. In this case,
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\}\right) < 0.5,$$
$$\mathrm{Ch}\{\Lambda_2\} = \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}\right) \ge \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\}\right) = \mathrm{Ch}\{\Lambda_1\}.$$

Case 2: $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}) \ge 0.5$ and $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\}) < 0.5$. It follows from Theorem 3.3 that
$$\mathrm{Ch}\{\Lambda_2\} \ge \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}\right) \ge 0.5 > \mathrm{Ch}\{\Lambda_1\}.$$

Case 3: $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}) \ge 0.5$ and $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\}) \ge 0.5$. In this case,
$$\mathrm{Ch}\{\Lambda_2\} = 1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2^c(\theta)\}\right) \ge 1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1^c(\theta)\}\right) = \mathrm{Ch}\{\Lambda_1\}.$$
Thus Ch is an increasing measure.


Theorem 3.5 (Li and Liu [103]) The chance measure is self-dual; that is,
$$\mathrm{Ch}\{\Lambda\} + \mathrm{Ch}\{\Lambda^c\} = 1 \tag{3.10}$$
for any event $\Lambda$.

Proof: For any event $\Lambda$, note that
$$\mathrm{Ch}\{\Lambda^c\} = \begin{cases} \displaystyle\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right), & \text{if } \displaystyle\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right) < 0.5 \\[3mm] 1 - \displaystyle\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right), & \text{if } \displaystyle\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right) \ge 0.5. \end{cases}$$
The argument breaks into three cases.

Case 1: $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}) < 0.5$. In this case, Theorem 3.3 gives $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}) \ge 0.5$, so
$$\mathrm{Ch}\{\Lambda\} + \mathrm{Ch}\{\Lambda^c\} = \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) + 1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) = 1.$$

Case 2: $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}) \ge 0.5$ and $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}) < 0.5$. In this case,
$$\mathrm{Ch}\{\Lambda\} + \mathrm{Ch}\{\Lambda^c\} = \left(1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right)\right) + \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right) = 1.$$

Case 3: $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}) \ge 0.5$ and $\sup_{\theta}(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}) \ge 0.5$. In this case, it follows from Theorem 3.3 that
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) = \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda^c(\theta)\}\right) = 0.5.$$
Hence $\mathrm{Ch}\{\Lambda\} + \mathrm{Ch}\{\Lambda^c\} = 0.5 + 0.5 = 1$. The theorem is proved.

Theorem 3.6 (Li and Liu [103]) For any event $X \times Y$, we have
$$\mathrm{Ch}\{X \times Y\} = \mathrm{Cr}\{X\} \wedge \Pr\{Y\}. \tag{3.11}$$

Proof: The argument breaks into three cases.

Case 1: $\mathrm{Cr}\{X\} < 0.5$. In this case, $\sup_{\theta \in X} \mathrm{Cr}\{\theta\} = \mathrm{Cr}\{X\}$, so
$$\sup_{\theta \in X}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{Y\}\right) = \mathrm{Cr}\{X\} \wedge \Pr\{Y\} < 0.5,$$
$$\mathrm{Ch}\{X \times Y\} = \sup_{\theta \in X}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{Y\}\right) = \mathrm{Cr}\{X\} \wedge \Pr\{Y\}.$$

Case 2: $\mathrm{Cr}\{X\} \ge 0.5$ and $\Pr\{Y\} < 0.5$. Then we have
$$\sup_{\theta \in X} \mathrm{Cr}\{\theta\} \ge 0.5, \qquad \sup_{\theta \in X}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{Y\}\right) = \Pr\{Y\} < 0.5,$$
$$\mathrm{Ch}\{X \times Y\} = \sup_{\theta \in X}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{Y\}\right) = \Pr\{Y\} = \mathrm{Cr}\{X\} \wedge \Pr\{Y\}.$$

Case 3: $\mathrm{Cr}\{X\} \ge 0.5$ and $\Pr\{Y\} \ge 0.5$. Then we have
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{(X \times Y)(\theta)\}\right) \ge \sup_{\theta \in X} \mathrm{Cr}\{\theta\} \wedge \Pr\{Y\} \ge 0.5,$$
$$\mathrm{Ch}\{X \times Y\} = 1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{(X \times Y)^c(\theta)\}\right) = \mathrm{Cr}\{X\} \wedge \Pr\{Y\}.$$
The theorem is proved.

Example 3.7: It follows from Theorem 3.6 that for any events $X \times \Omega$ and $\Theta \times Y$, we have
$$\mathrm{Ch}\{X \times \Omega\} = \mathrm{Cr}\{X\}, \qquad \mathrm{Ch}\{\Theta \times Y\} = \Pr\{Y\}. \tag{3.12}$$

Theorem 3.7 (Li and Liu [103], Chance Subadditivity Theorem) The chance measure is subadditive; that is,
$$\mathrm{Ch}\{\Lambda_1 \cup \Lambda_2\} \le \mathrm{Ch}\{\Lambda_1\} + \mathrm{Ch}\{\Lambda_2\} \tag{3.13}$$
for any events $\Lambda_1$ and $\Lambda_2$. In fact, the chance measure is not only finitely subadditive but also countably subadditive.

Proof: The proof breaks into three cases.

Case 1: $\mathrm{Ch}\{\Lambda_1 \cup \Lambda_2\} < 0.5$. Then $\mathrm{Ch}\{\Lambda_1\} < 0.5$, $\mathrm{Ch}\{\Lambda_2\} < 0.5$ and
$$\begin{aligned} \mathrm{Ch}\{\Lambda_1 \cup \Lambda_2\} &= \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)(\theta)\}\right) \\ &\le \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \left(\Pr\{\Lambda_1(\theta)\} + \Pr\{\Lambda_2(\theta)\}\right)\right) \\ &\le \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\} + \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}\right) \\ &\le \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\}\right) + \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}\right) = \mathrm{Ch}\{\Lambda_1\} + \mathrm{Ch}\{\Lambda_2\}. \end{aligned}$$

Case 2: $\mathrm{Ch}\{\Lambda_1 \cup \Lambda_2\} \ge 0.5$ and $\mathrm{Ch}\{\Lambda_1\} \vee \mathrm{Ch}\{\Lambda_2\} < 0.5$. We first have
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)(\theta)\}\right) \ge 0.5.$$
For any sufficiently small number $\varepsilon > 0$, there exists a point $\theta$ such that
$$\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)(\theta)\} > 0.5 - \varepsilon > \mathrm{Ch}\{\Lambda_1\} \vee \mathrm{Ch}\{\Lambda_2\},$$
$$\mathrm{Cr}\{\theta\} > 0.5 - \varepsilon > \Pr\{\Lambda_1(\theta)\}, \qquad \mathrm{Cr}\{\theta\} > 0.5 - \varepsilon > \Pr\{\Lambda_2(\theta)\}.$$
Thus we have
$$\begin{aligned} &\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} + \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\} + \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\} \\ &= \mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} + \Pr\{\Lambda_1(\theta)\} + \Pr\{\Lambda_2(\theta)\} \\ &\ge \mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} + \Pr\{(\Lambda_1 \cup \Lambda_2)(\theta)\} \ge 1 - 2\varepsilon, \end{aligned}$$
because if $\mathrm{Cr}\{\theta\} \ge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\}$, then
$$\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} + \Pr\{(\Lambda_1 \cup \Lambda_2)(\theta)\} = \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} + \Pr\{(\Lambda_1 \cup \Lambda_2)(\theta)\} = 1 \ge 1 - 2\varepsilon,$$
and if $\mathrm{Cr}\{\theta\} < \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\}$, then
$$\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} + \Pr\{(\Lambda_1 \cup \Lambda_2)(\theta)\} = \mathrm{Cr}\{\theta\} + \Pr\{(\Lambda_1 \cup \Lambda_2)(\theta)\} \ge (0.5 - \varepsilon) + (0.5 - \varepsilon) = 1 - 2\varepsilon.$$
Taking the supremum on both sides and letting $\varepsilon \to 0$, we obtain
$$\mathrm{Ch}\{\Lambda_1 \cup \Lambda_2\} = 1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\}\right) \le \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1(\theta)\}\right) + \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}\right) = \mathrm{Ch}\{\Lambda_1\} + \mathrm{Ch}\{\Lambda_2\}.$$

Case 3: $\mathrm{Ch}\{\Lambda_1 \cup \Lambda_2\} \ge 0.5$ and $\mathrm{Ch}\{\Lambda_1\} \vee \mathrm{Ch}\{\Lambda_2\} \ge 0.5$. Without loss of generality, suppose $\mathrm{Ch}\{\Lambda_1\} \ge 0.5$. For each $\theta$, we first have
$$\begin{aligned} \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1^c(\theta)\} &= \mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1^c(\theta) \cap \Lambda_2^c(\theta)) \cup (\Lambda_1^c(\theta) \cap \Lambda_2(\theta))\} \\ &\le \mathrm{Cr}\{\theta\} \wedge \left(\Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} + \Pr\{\Lambda_2(\theta)\}\right) \\ &\le \mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} + \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}, \end{aligned}$$
i.e., $\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\} \ge \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1^c(\theta)\} - \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}$. It follows from Theorem 3.3 that
$$\begin{aligned} \mathrm{Ch}\{\Lambda_1 \cup \Lambda_2\} &= 1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{(\Lambda_1 \cup \Lambda_2)^c(\theta)\}\right) \\ &\le 1 - \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_1^c(\theta)\}\right) + \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_2(\theta)\}\right) \\ &\le \mathrm{Ch}\{\Lambda_1\} + \mathrm{Ch}\{\Lambda_2\}. \end{aligned}$$
The theorem is proved.
Remark 3.1: For any events $\Lambda_1$ and $\Lambda_2$, it follows from the chance subadditivity theorem that the chance measure is null-additive, i.e., $\mathrm{Ch}\{\Lambda_1 \cup \Lambda_2\} = \mathrm{Ch}\{\Lambda_1\} + \mathrm{Ch}\{\Lambda_2\}$ if either $\mathrm{Ch}\{\Lambda_1\} = 0$ or $\mathrm{Ch}\{\Lambda_2\} = 0$.

Theorem 3.8 Let $\{\Lambda_i\}$ be a decreasing sequence of events with $\mathrm{Ch}\{\Lambda_i\} \to 0$ as $i \to \infty$. Then for any event $\Lambda$, we have
$$\lim_{i\to\infty} \mathrm{Ch}\{\Lambda \cup \Lambda_i\} = \lim_{i\to\infty} \mathrm{Ch}\{\Lambda \setminus \Lambda_i\} = \mathrm{Ch}\{\Lambda\}. \tag{3.14}$$

Proof: Since the chance measure is increasing and subadditive, we immediately have
$$\mathrm{Ch}\{\Lambda\} \le \mathrm{Ch}\{\Lambda \cup \Lambda_i\} \le \mathrm{Ch}\{\Lambda\} + \mathrm{Ch}\{\Lambda_i\}$$
for each $i$. Thus we get $\mathrm{Ch}\{\Lambda \cup \Lambda_i\} \to \mathrm{Ch}\{\Lambda\}$ by using $\mathrm{Ch}\{\Lambda_i\} \to 0$. Since $(\Lambda \setminus \Lambda_i) \subset \Lambda \subset ((\Lambda \setminus \Lambda_i) \cup \Lambda_i)$, we have
$$\mathrm{Ch}\{\Lambda \setminus \Lambda_i\} \le \mathrm{Ch}\{\Lambda\} \le \mathrm{Ch}\{\Lambda \setminus \Lambda_i\} + \mathrm{Ch}\{\Lambda_i\}.$$
Hence $\mathrm{Ch}\{\Lambda \setminus \Lambda_i\} \to \mathrm{Ch}\{\Lambda\}$ by using $\mathrm{Ch}\{\Lambda_i\} \to 0$.
Theorem 3.9 (Li and Liu [103], Chance Semicontinuity Law) For events $\Lambda_1, \Lambda_2, \cdots$, we have
$$\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} = \mathrm{Ch}\left\{\lim_{i\to\infty} \Lambda_i\right\} \tag{3.15}$$
if one of the following conditions is satisfied:
(a) $\mathrm{Ch}\{\Lambda\} \le 0.5$ and $\Lambda_i \uparrow \Lambda$; (b) $\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} < 0.5$ and $\Lambda_i \uparrow \Lambda$;
(c) $\mathrm{Ch}\{\Lambda\} \ge 0.5$ and $\Lambda_i \downarrow \Lambda$; (d) $\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} > 0.5$ and $\Lambda_i \downarrow \Lambda$.

Proof: (a) Assume $\mathrm{Ch}\{\Lambda\} \le 0.5$ and $\Lambda_i \uparrow \Lambda$. We first have
$$\mathrm{Ch}\{\Lambda\} = \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right), \qquad \mathrm{Ch}\{\Lambda_i\} = \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_i(\theta)\}\right)$$
for $i = 1, 2, \cdots$ For each $\theta \in \Theta$, since $\Lambda_i(\theta) \uparrow \Lambda(\theta)$, it follows from the probability continuity theorem that
$$\lim_{i\to\infty} \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_i(\theta)\} = \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}.$$
Taking the supremum on both sides, we obtain
$$\lim_{i\to\infty} \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_i(\theta)\}\right) = \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right).$$
Part (a) is verified.

(b) Assume $\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} < 0.5$ and $\Lambda_i \uparrow \Lambda$. For each $\theta \in \Theta$, since
$$\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\} = \lim_{i\to\infty} \mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_i(\theta)\},$$
we have
$$\sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda(\theta)\}\right) \le \lim_{i\to\infty} \sup_{\theta}\left(\mathrm{Cr}\{\theta\} \wedge \Pr\{\Lambda_i(\theta)\}\right) < 0.5.$$
It follows that $\mathrm{Ch}\{\Lambda\} < 0.5$, and part (b) holds by (a).

(c) Assume $\mathrm{Ch}\{\Lambda\} \ge 0.5$ and $\Lambda_i \downarrow \Lambda$. Then $\mathrm{Ch}\{\Lambda^c\} \le 0.5$ and $\Lambda_i^c \uparrow \Lambda^c$. It follows from (a) that
$$\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} = 1 - \lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i^c\} = 1 - \mathrm{Ch}\{\Lambda^c\} = \mathrm{Ch}\{\Lambda\}.$$

(d) Assume $\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} > 0.5$ and $\Lambda_i \downarrow \Lambda$. Then $\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i^c\} < 0.5$ and $\Lambda_i^c \uparrow \Lambda^c$. It follows from (b) that
$$\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} = 1 - \lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i^c\} = 1 - \mathrm{Ch}\{\Lambda^c\} = \mathrm{Ch}\{\Lambda\}.$$
The theorem is proved.


Theorem 3.10 (Chance Asymptotic Theorem) For any events $\Lambda_1, \Lambda_2, \cdots$, we have
$$\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} \ge 0.5, \quad \text{if } \Lambda_i \uparrow \Theta \times \Omega, \tag{3.16}$$
$$\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} \le 0.5, \quad \text{if } \Lambda_i \downarrow \emptyset. \tag{3.17}$$

Proof: Assume $\Lambda_i \uparrow \Theta \times \Omega$. If $\lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} < 0.5$, it follows from the chance semicontinuity law that
$$\mathrm{Ch}\{\Theta \times \Omega\} = \lim_{i\to\infty} \mathrm{Ch}\{\Lambda_i\} < 0.5,$$
which contradicts $\mathrm{Ch}\{\Theta \times \Omega\} = 1$. The first inequality is proved. The second may be verified similarly.

3.2 Hybrid Variables

Recall that a random variable is a measurable function from a probability space to the set of real numbers, and a fuzzy variable is a function from a credibility space to the set of real numbers. In order to describe a quantity with both fuzziness and randomness, we introduce the concept of a hybrid variable as follows (see Figure 3.1).

[Figure 3.1: Graphical Representation of Hybrid Variable — a hybrid variable maps a credibility space and a probability space jointly to the set of real numbers, with fuzzy variables and random variables as its two degenerate cases.]


Definition 3.4 (Liu [130]) A hybrid variable is a measurable function from a chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ to the set of real numbers, i.e., for any Borel set B of real numbers, the set
$$\{\xi \in B\} = \{(\theta, \omega) \in \Theta \times \Omega \mid \xi(\theta, \omega) \in B\} \tag{3.18}$$
is an event.

Remark 3.2: A hybrid variable degenerates to a fuzzy variable if the value of $\xi(\theta, \omega)$ does not vary with $\omega$. For example,
$$\xi(\theta, \omega) = \theta, \qquad \xi(\theta, \omega) = \theta^2 + 1, \qquad \xi(\theta, \omega) = \sin\theta.$$

Remark 3.3: A hybrid variable degenerates to a random variable if the value of $\xi(\theta, \omega)$ does not vary with $\theta$. For example,
$$\xi(\theta, \omega) = \omega, \qquad \xi(\theta, \omega) = \omega^2 + 1, \qquad \xi(\theta, \omega) = \sin\omega.$$

Remark 3.4: For each fixed $\theta^*$, it is clear that $\xi(\theta^*, \omega)$ is a measurable function from the probability space $(\Omega, \mathcal{A}, \Pr)$ to the set of real numbers. Thus it is a random variable, which we denote by $\xi(\theta^*, \cdot)$. A hybrid variable $\xi(\theta, \omega)$ may therefore be regarded as a function from a credibility space $(\Theta, \mathcal{P}, \mathrm{Cr})$ to the set $\{\xi(\theta, \cdot) \mid \theta \in \Theta\}$ of random variables. Thus $\xi$ is a random fuzzy variable as defined by Liu [124].

Remark 3.5: For each fixed $\omega^*$, it is clear that $\xi(\theta, \omega^*)$ is a function from the credibility space $(\Theta, \mathcal{P}, \mathrm{Cr})$ to the set of real numbers. Thus it is a fuzzy variable, which we denote by $\xi(\cdot, \omega^*)$. A hybrid variable $\xi(\theta, \omega)$ may therefore be regarded as a function from a probability space $(\Omega, \mathcal{A}, \Pr)$ to the set $\{\xi(\cdot, \omega) \mid \omega \in \Omega\}$ of fuzzy variables. If $\mathrm{Cr}\{\xi(\cdot, \omega) \in B\}$ is a measurable function of $\omega$ for any Borel set B of real numbers, then $\xi$ is a fuzzy random variable in the sense of Liu and Liu [140].
Model I

If $\tilde{a}$ is a fuzzy variable and $\eta$ is a random variable, then the sum $\xi = \tilde{a} + \eta$ is a hybrid variable. The product $\xi = \tilde{a}\eta$ is also a hybrid variable. Generally speaking, if $f: \Re^2 \to \Re$ is a measurable function, then
$$\xi = f(\tilde{a}, \eta) \tag{3.19}$$
is a hybrid variable. Suppose that $\tilde{a}$ has a membership function $\mu$, and $\eta$ has a probability density function $\phi$. Then for any Borel set B of real numbers, Qin and Liu [195] proved the following formula:
$$\mathrm{Ch}\{f(\tilde{a}, \eta) \in B\} = \begin{cases} \displaystyle\sup_{x}\left(\frac{\mu(x)}{2} \wedge \int_{f(x,y)\in B} \phi(y)\,dy\right), & \text{if } \displaystyle\sup_{x}\left(\frac{\mu(x)}{2} \wedge \int_{f(x,y)\in B} \phi(y)\,dy\right) < 0.5 \\[4mm] 1 - \displaystyle\sup_{x}\left(\frac{\mu(x)}{2} \wedge \int_{f(x,y)\in B^c} \phi(y)\,dy\right), & \text{if } \displaystyle\sup_{x}\left(\frac{\mu(x)}{2} \wedge \int_{f(x,y)\in B} \phi(y)\,dy\right) \ge 0.5. \end{cases}$$
More generally, let $\tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m$ be fuzzy variables, and let $\eta_1, \eta_2, \cdots, \eta_n$ be random variables. If $f: \Re^{m+n} \to \Re$ is a measurable function, then
$$\xi = f(\tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m; \eta_1, \eta_2, \cdots, \eta_n) \tag{3.20}$$
is a hybrid variable. The chance $\mathrm{Ch}\{f(\tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m; \eta_1, \eta_2, \cdots, \eta_n) \in B\}$ may be calculated in a similar way, provided that $\mu$ is the joint membership function and $\phi$ is the joint probability density function.
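The Model I formula can be evaluated numerically once $\mu$, $\phi$, $f$ and B are fixed. The following sketch (assumptions mine: $f(x, y) = x + y$, a triangular fuzzy variable, a standard normal random variable, and $B = (-\infty, b]$; the supremum is approximated on a grid) illustrates the computation:

```python
# Ch{a~ + eta <= level} via the Qin-Liu formula for Model I, with
# a~ ~ triangular(a, b, c) and eta ~ N(0, 1). For fixed x,
# Pr{x + eta in B} = Phi(level - x), and the B^c integral is 1 minus that.

import numpy as np
from scipy.stats import norm

def mu_tri(x, a, b, c):
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def chance_sum_le(level, tri=(0.0, 1.0, 2.0), n=2001):
    a, b, c = tri
    xs = np.linspace(a, c, n)          # mu vanishes outside [a, c]
    in_B = max(min(mu_tri(x, a, b, c) / 2, norm.cdf(level - x)) for x in xs)
    if in_B < 0.5:
        return in_B
    in_Bc = max(min(mu_tri(x, a, b, c) / 2, 1 - norm.cdf(level - x)) for x in xs)
    return 1 - in_Bc

print(chance_sum_le(1.0))   # chance that the sum does not exceed 1
```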
Model II

Let $\tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m$ be fuzzy variables, and let $p_1, p_2, \cdots, p_m$ be nonnegative numbers with $p_1 + p_2 + \cdots + p_m = 1$. Then
$$\xi = \begin{cases} \tilde{a}_1 & \text{with probability } p_1 \\ \tilde{a}_2 & \text{with probability } p_2 \\ \;\vdots \\ \tilde{a}_m & \text{with probability } p_m \end{cases} \tag{3.21}$$
is clearly a hybrid variable. If $\tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m$ have membership functions $\mu_1, \mu_2, \cdots, \mu_m$, respectively, then for any Borel set B of real numbers, we have ([195])
$$\mathrm{Ch}\{\xi \in B\} = \begin{cases} \displaystyle\sup_{x_1, x_2, \cdots, x_m}\left(\min_{1\le i\le m}\frac{\mu_i(x_i)}{2} \wedge \sum_{i=1}^{m}\{p_i \mid x_i \in B\}\right), & \text{if this supremum} < 0.5 \\[4mm] 1 - \displaystyle\sup_{x_1, x_2, \cdots, x_m}\left(\min_{1\le i\le m}\frac{\mu_i(x_i)}{2} \wedge \sum_{i=1}^{m}\{p_i \mid x_i \in B^c\}\right), & \text{if the first supremum} \ge 0.5, \end{cases}$$
where $\sum_{i=1}^{m}\{p_i \mid x_i \in B\}$ denotes the sum of all $p_i$ with $x_i \in B$.
Model III

Let $\eta_1, \eta_2, \cdots, \eta_m$ be random variables, and let $u_1, u_2, \cdots, u_m$ be nonnegative numbers with $u_1 \vee u_2 \vee \cdots \vee u_m = 1$. Then
$$\xi = \begin{cases} \eta_1 & \text{with membership degree } u_1 \\ \eta_2 & \text{with membership degree } u_2 \\ \;\vdots \\ \eta_m & \text{with membership degree } u_m \end{cases} \tag{3.22}$$
is clearly a hybrid variable. If $\eta_1, \eta_2, \cdots, \eta_m$ have probability density functions $\phi_1, \phi_2, \cdots, \phi_m$, respectively, then for any Borel set B of real numbers, we have ([195])
$$\mathrm{Ch}\{\xi \in B\} = \begin{cases} \displaystyle\max_{1\le i\le m}\left(\frac{u_i}{2} \wedge \int_B \phi_i(x)\,dx\right), & \text{if } \displaystyle\max_{1\le i\le m}\left(\frac{u_i}{2} \wedge \int_B \phi_i(x)\,dx\right) < 0.5 \\[4mm] 1 - \displaystyle\max_{1\le i\le m}\left(\frac{u_i}{2} \wedge \int_{B^c} \phi_i(x)\,dx\right), & \text{if } \displaystyle\max_{1\le i\le m}\left(\frac{u_i}{2} \wedge \int_B \phi_i(x)\,dx\right) \ge 0.5. \end{cases}$$
Model IV

In many statistics problems, the probability density function is completely known except for the values of one or more parameters. For example, it might be known that the lifetime $\xi$ of a modern engine is an exponentially distributed random variable with an unknown expected value $\beta$. Usually, there is some relevant information in practice. It is thus possible to specify an interval in which the value of $\beta$ is likely to lie, or to give an approximate estimate of the value of $\beta$. It is typically not possible to determine the value of $\beta$ exactly. If the value of $\beta$ is provided as a fuzzy variable, then $\xi$ is a hybrid variable. More generally, suppose that $\xi$ has a probability density function
$$\phi(x; \tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m), \qquad x \in \Re, \tag{3.23}$$
in which the parameters $\tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m$ are fuzzy variables rather than crisp numbers. Then $\xi$ is a hybrid variable provided that $\phi(x; y_1, y_2, \cdots, y_m)$ is a probability density function for any $(y_1, y_2, \cdots, y_m)$ that $(\tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m)$ may take. If $\tilde{a}_1, \tilde{a}_2, \cdots, \tilde{a}_m$ have membership functions $\mu_1, \mu_2, \cdots, \mu_m$, respectively, then for any Borel set B of real numbers, the chance $\mathrm{Ch}\{\xi \in B\}$ is ([195])
$$\mathrm{Ch}\{\xi \in B\} = \begin{cases} \displaystyle\sup_{y_1, y_2, \cdots, y_m}\left(\min_{1\le i\le m}\frac{\mu_i(y_i)}{2} \wedge \int_B \phi(x; y_1, y_2, \cdots, y_m)\,dx\right), & \text{if this supremum} < 0.5 \\[4mm] 1 - \displaystyle\sup_{y_1, y_2, \cdots, y_m}\left(\min_{1\le i\le m}\frac{\mu_i(y_i)}{2} \wedge \int_{B^c} \phi(x; y_1, y_2, \cdots, y_m)\,dx\right), & \text{if the first supremum} \ge 0.5. \end{cases}$$
Model V

Suppose that a fuzzy variable $\xi$ has a normal membership function with unknown expected value $e$ and variance $\sigma$. If $e$ and $\sigma$ are provided as random variables, then $\xi$ is a hybrid variable. More generally, suppose that $\xi$ has a membership function
$$\mu(x; \eta_1, \eta_2, \cdots, \eta_m), \qquad x \in \Re, \tag{3.24}$$
in which the parameters $\eta_1, \eta_2, \cdots, \eta_m$ are random variables rather than deterministic numbers. Then $\xi$ is a hybrid variable if $\mu(x; y_1, y_2, \cdots, y_m)$ is a membership function for any $(y_1, y_2, \cdots, y_m)$ that $(\eta_1, \eta_2, \cdots, \eta_m)$ may take.

When are two hybrid variables equal to each other?

Definition 3.5 Let $\xi_1$ and $\xi_2$ be hybrid variables defined on the chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$. We say $\xi_1 = \xi_2$ if $\xi_1(\theta, \omega) = \xi_2(\theta, \omega)$ for almost all $(\theta, \omega) \in \Theta \times \Omega$.

Hybrid Vectors

Definition 3.6 An n-dimensional hybrid vector is a measurable function from a chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ to the set of n-dimensional real vectors, i.e., for any Borel set B of $\Re^n$, the set
$$\{\xi \in B\} = \{(\theta, \omega) \in \Theta \times \Omega \mid \xi(\theta, \omega) \in B\} \tag{3.25}$$
is an event.

Theorem 3.11 The vector $(\xi_1, \xi_2, \cdots, \xi_n)$ is a hybrid vector if and only if $\xi_1, \xi_2, \cdots, \xi_n$ are hybrid variables.

Proof: Write $\xi = (\xi_1, \xi_2, \cdots, \xi_n)$. Suppose that $\xi$ is a hybrid vector on the chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$. For any Borel set B of $\Re$, the set $B \times \Re^{n-1}$ is a Borel set of $\Re^n$. Thus the set
$$\{(\theta, \omega) \mid \xi_1(\theta, \omega) \in B\} = \{(\theta, \omega) \mid \xi_1(\theta, \omega) \in B, \xi_2(\theta, \omega) \in \Re, \cdots, \xi_n(\theta, \omega) \in \Re\} = \{(\theta, \omega) \mid \xi(\theta, \omega) \in B \times \Re^{n-1}\}$$
is an event. Hence $\xi_1$ is a hybrid variable. A similar process proves that $\xi_2, \xi_3, \cdots, \xi_n$ are hybrid variables.

Conversely, suppose that all $\xi_1, \xi_2, \cdots, \xi_n$ are hybrid variables on the chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$. We define
$$\mathcal{B} = \left\{B \subset \Re^n \,\middle|\, \{(\theta, \omega) \mid \xi(\theta, \omega) \in B\} \text{ is an event}\right\}.$$
The vector $\xi = (\xi_1, \xi_2, \cdots, \xi_n)$ is proved to be a hybrid vector if we can prove that $\mathcal{B}$ contains all Borel sets of $\Re^n$. First, the class $\mathcal{B}$ contains all open intervals of $\Re^n$ because
$$\left\{(\theta, \omega) \,\middle|\, \xi(\theta, \omega) \in \prod_{i=1}^{n} (a_i, b_i)\right\} = \bigcap_{i=1}^{n} \{(\theta, \omega) \mid \xi_i(\theta, \omega) \in (a_i, b_i)\}$$
is an event. Next, the class $\mathcal{B}$ is a $\sigma$-algebra over $\Re^n$ because (i) we have $\Re^n \in \mathcal{B}$ since $\{(\theta, \omega) \mid \xi(\theta, \omega) \in \Re^n\} = \Theta \times \Omega$; (ii) if $B \in \mathcal{B}$, then $\{(\theta, \omega) \mid \xi(\theta, \omega) \in B\}$ is an event, and
$$\{(\theta, \omega) \mid \xi(\theta, \omega) \in B^c\} = \{(\theta, \omega) \mid \xi(\theta, \omega) \in B\}^c$$
is an event, which means that $B^c \in \mathcal{B}$; (iii) if $B_i \in \mathcal{B}$ for $i = 1, 2, \cdots$, then $\{(\theta, \omega) \mid \xi(\theta, \omega) \in B_i\}$ are events and
$$\left\{(\theta, \omega) \,\middle|\, \xi(\theta, \omega) \in \bigcup_{i=1}^{\infty} B_i\right\} = \bigcup_{i=1}^{\infty} \{(\theta, \omega) \mid \xi(\theta, \omega) \in B_i\}$$
is an event, which means that $\cup_i B_i \in \mathcal{B}$. Since the smallest $\sigma$-algebra containing all open intervals of $\Re^n$ is just the Borel algebra of $\Re^n$, the class $\mathcal{B}$ contains all Borel sets of $\Re^n$. The theorem is proved.

Hybrid Arithmetic

Definition 3.7 Let $f: \Re^n \to \Re$ be a measurable function, and $\xi_1, \xi_2, \cdots, \xi_n$ hybrid variables on the chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$. Then $\xi = f(\xi_1, \xi_2, \cdots, \xi_n)$ is a hybrid variable defined as
$$\xi(\theta, \omega) = f(\xi_1(\theta, \omega), \xi_2(\theta, \omega), \cdots, \xi_n(\theta, \omega)), \qquad \forall (\theta, \omega) \in \Theta \times \Omega. \tag{3.26}$$

Example 3.8: Let $\xi_1$ and $\xi_2$ be two hybrid variables. Then the sum $\xi = \xi_1 + \xi_2$ is a hybrid variable defined by
$$\xi(\theta, \omega) = \xi_1(\theta, \omega) + \xi_2(\theta, \omega), \qquad \forall (\theta, \omega) \in \Theta \times \Omega.$$
The product $\xi = \xi_1 \xi_2$ is also a hybrid variable defined by
$$\xi(\theta, \omega) = \xi_1(\theta, \omega) \cdot \xi_2(\theta, \omega), \qquad \forall (\theta, \omega) \in \Theta \times \Omega.$$

Theorem 3.12 Let $\xi$ be an n-dimensional hybrid vector, and $f: \Re^n \to \Re$ a measurable function. Then $f(\xi)$ is a hybrid variable.

Proof: Assume that $\xi$ is a hybrid vector on the chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$. For any Borel set B of $\Re$, since $f$ is a measurable function, $f^{-1}(B)$ is a Borel set of $\Re^n$. Thus the set
$$\{(\theta, \omega) \mid f(\xi(\theta, \omega)) \in B\} = \{(\theta, \omega) \mid \xi(\theta, \omega) \in f^{-1}(B)\}$$
is an event for any Borel set B. Hence $f(\xi)$ is a hybrid variable.

3.3 Chance Distribution

Chance distributions have been defined in several ways. Yang and Liu [240] presented the concept of chance distribution of fuzzy random variables, and Zhu and Liu [261] proposed the chance distribution of random fuzzy variables. Li and Liu [103] gave the following definition of chance distribution of hybrid variables.

Definition 3.8 The chance distribution $\Phi: \Re \to [0, 1]$ of a hybrid variable $\xi$ is defined by
$$\Phi(x) = \mathrm{Ch}\left\{(\theta, \omega) \in \Theta \times \Omega \mid \xi(\theta, \omega) \le x\right\}. \tag{3.27}$$

Example 3.9: Let $\eta$ be a random variable on a probability space $(\Omega, \mathcal{A}, \Pr)$. It is clear that $\eta$ may be regarded as a hybrid variable on the chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ as follows:
$$\xi(\theta, \omega) = \eta(\omega), \qquad \forall (\theta, \omega) \in \Theta \times \Omega.$$
Thus its chance distribution is
$$\Phi(x) = \mathrm{Ch}\left\{(\theta, \omega) \in \Theta \times \Omega \mid \xi(\theta, \omega) \le x\right\} = \mathrm{Ch}\left\{\Theta \times \{\omega \in \Omega \mid \eta(\omega) \le x\}\right\} = \mathrm{Cr}\{\Theta\} \wedge \Pr\{\eta \le x\} = \Pr\{\eta \le x\},$$
which is just the probability distribution of the random variable $\eta$.

Example 3.10: Let $\tilde{a}$ be a fuzzy variable on a credibility space $(\Theta, \mathcal{P}, \mathrm{Cr})$. It is clear that $\tilde{a}$ may be regarded as a hybrid variable on the chance space $(\Theta, \mathcal{P}, \mathrm{Cr}) \times (\Omega, \mathcal{A}, \Pr)$ as follows:
$$\xi(\theta, \omega) = \tilde{a}(\theta), \qquad \forall (\theta, \omega) \in \Theta \times \Omega.$$
Thus its chance distribution is
$$\Phi(x) = \mathrm{Ch}\left\{(\theta, \omega) \in \Theta \times \Omega \mid \xi(\theta, \omega) \le x\right\} = \mathrm{Ch}\left\{\{\theta \in \Theta \mid \tilde{a}(\theta) \le x\} \times \Omega\right\} = \mathrm{Cr}\{\tilde{a} \le x\} \wedge \Pr\{\Omega\} = \mathrm{Cr}\{\tilde{a} \le x\},$$
which is just the credibility distribution of the fuzzy variable $\tilde{a}$.
Theorem 3.13 (Sufficient and Necessary Condition for Chance Distribution) A function $\Phi: \Re \to [0, 1]$ is a chance distribution if and only if it is an increasing function with
$$\lim_{x\to-\infty} \Phi(x) \le 0.5 \le \lim_{x\to+\infty} \Phi(x), \tag{3.28}$$
$$\lim_{y \downarrow x} \Phi(y) = \Phi(x) \quad \text{if } \lim_{y \downarrow x} \Phi(y) > 0.5 \text{ or } \Phi(x) \ge 0.5. \tag{3.29}$$

Proof: It is obvious that a chance distribution is an increasing function. The inequalities (3.28) follow immediately from the chance asymptotic theorem. Assume that $x$ is a point at which $\lim_{y\downarrow x} \Phi(y) > 0.5$; that is,
$$\lim_{y\downarrow x} \mathrm{Ch}\{\xi \le y\} > 0.5.$$
Since $\{\xi \le y\} \downarrow \{\xi \le x\}$ as $y \downarrow x$, it follows from the chance semicontinuity law that
$$\Phi(y) = \mathrm{Ch}\{\xi \le y\} \downarrow \mathrm{Ch}\{\xi \le x\} = \Phi(x)$$
as $y \downarrow x$. When $x$ is a point at which $\Phi(x) \ge 0.5$, if $\lim_{y\downarrow x} \Phi(y) \ne \Phi(x)$, then we have
$$\lim_{y\downarrow x} \Phi(y) > \Phi(x) \ge 0.5.$$
For this case, we have proved that $\lim_{y\downarrow x} \Phi(y) = \Phi(x)$. Thus (3.29) is proved.

Conversely, suppose $\Phi: \Re \to [0, 1]$ is an increasing function satisfying (3.28) and (3.29). Theorem 2.19 states that there is a fuzzy variable whose credibility distribution is just $\Phi(x)$. Since a fuzzy variable is a special hybrid variable, the theorem is proved.
Definition 3.9 The chance density function $\phi: \Re \to [0, +\infty)$ of a hybrid variable $\xi$ is a function such that
$$\Phi(x) = \int_{-\infty}^{x} \phi(y)\,dy, \qquad \forall x \in \Re, \tag{3.30}$$
$$\int_{-\infty}^{+\infty} \phi(y)\,dy = 1, \tag{3.31}$$
where $\Phi$ is the chance distribution of $\xi$.

Theorem 3.14 Let $\xi$ be a hybrid variable whose chance density function $\phi$ exists. Then we have
$$\mathrm{Ch}\{\xi \le x\} = \int_{-\infty}^{x} \phi(y)\,dy, \qquad \mathrm{Ch}\{\xi \ge x\} = \int_{x}^{+\infty} \phi(y)\,dy. \tag{3.32}$$

Proof: The first part follows immediately from the definition. In addition, by the self-duality of the chance measure, we have
$$\mathrm{Ch}\{\xi \ge x\} = 1 - \mathrm{Ch}\{\xi < x\} = \int_{-\infty}^{+\infty} \phi(y)\,dy - \int_{-\infty}^{x} \phi(y)\,dy = \int_{x}^{+\infty} \phi(y)\,dy.$$
The theorem is proved.


Joint Chance Distribution

Definition 3.10 Let $(\xi_1, \xi_2, \cdots, \xi_n)$ be a hybrid vector. Then the joint chance distribution $\Phi: \Re^n \to [0, 1]$ is defined by
$$\Phi(x_1, x_2, \cdots, x_n) = \mathrm{Ch}\left\{\xi_1 \le x_1, \xi_2 \le x_2, \cdots, \xi_n \le x_n\right\}.$$

Definition 3.11 The joint chance density function $\phi: \Re^n \to [0, +\infty)$ of a hybrid vector $(\xi_1, \xi_2, \cdots, \xi_n)$ is a function such that
$$\Phi(x_1, x_2, \cdots, x_n) = \int_{-\infty}^{x_1}\int_{-\infty}^{x_2}\cdots\int_{-\infty}^{x_n} \phi(y_1, y_2, \cdots, y_n)\,dy_1 dy_2 \cdots dy_n$$
holds for all $(x_1, x_2, \cdots, x_n) \in \Re^n$, and
$$\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\cdots\int_{-\infty}^{+\infty} \phi(y_1, y_2, \cdots, y_n)\,dy_1 dy_2 \cdots dy_n = 1,$$
where $\Phi$ is the joint chance distribution of the hybrid vector $(\xi_1, \xi_2, \cdots, \xi_n)$.
3.4 Expected Value

Expected value has been defined in several ways. For example, Kwakernaak [87], Puri and Ralescu [193], Kruse and Meyer [86], and Liu and Liu [140] gave different expected value operators of fuzzy random variables. Liu and Liu [141] presented an expected value operator of random fuzzy variables. Li and Liu [103] suggested the following definition of the expected value operator of hybrid variables.

Definition 3.12 Let $\xi$ be a hybrid variable. Then the expected value of $\xi$ is defined by
$$E[\xi] = \int_0^{+\infty} \mathrm{Ch}\{\xi \ge r\}\,dr - \int_{-\infty}^{0} \mathrm{Ch}\{\xi \le r\}\,dr \tag{3.33}$$
provided that at least one of the two integrals is finite.

Example 3.11: If a hybrid variable $\xi$ degenerates to a random variable $\eta$, then
$$\mathrm{Ch}\{\xi \le x\} = \Pr\{\eta \le x\}, \qquad \mathrm{Ch}\{\xi \ge x\} = \Pr\{\eta \ge x\}, \qquad \forall x \in \Re.$$
It follows from (3.33) that $E[\xi] = E[\eta]$. In other words, the expected value operator of hybrid variables coincides with that of random variables.

Example 3.12: If a hybrid variable $\xi$ degenerates to a fuzzy variable $\tilde{a}$, then
$$\mathrm{Ch}\{\xi \le x\} = \mathrm{Cr}\{\tilde{a} \le x\}, \qquad \mathrm{Ch}\{\xi \ge x\} = \mathrm{Cr}\{\tilde{a} \ge x\}, \qquad \forall x \in \Re.$$
It follows from (3.33) that $E[\xi] = E[\tilde{a}]$. In other words, the expected value operator of hybrid variables coincides with that of fuzzy variables.

Example 3.13: Let $\tilde{a}$ be a fuzzy variable and $\eta$ a random variable with finite expected values. Then the hybrid variable $\xi = \tilde{a} + \eta$ has expected value
$$E[\xi] = E[\tilde{a}] + E[\eta].$$
Theorem 3.15 Let $\xi$ be a hybrid variable whose chance density function $\phi$ exists. If the Lebesgue integral
$$\int_{-\infty}^{+\infty} x\phi(x)\,dx$$
is finite, then we have
$$E[\xi] = \int_{-\infty}^{+\infty} x\phi(x)\,dx. \tag{3.34}$$

Proof: It follows from the definition of the expected value operator and the Fubini theorem that
$$\begin{aligned} E[\xi] &= \int_0^{+\infty} \mathrm{Ch}\{\xi \ge r\}\,dr - \int_{-\infty}^{0} \mathrm{Ch}\{\xi \le r\}\,dr \\ &= \int_0^{+\infty}\left(\int_r^{+\infty} \phi(x)\,dx\right) dr - \int_{-\infty}^{0}\left(\int_{-\infty}^{r} \phi(x)\,dx\right) dr \\ &= \int_0^{+\infty}\left(\int_0^{x} \phi(x)\,dr\right) dx - \int_{-\infty}^{0}\left(\int_x^{0} \phi(x)\,dr\right) dx \\ &= \int_0^{+\infty} x\phi(x)\,dx + \int_{-\infty}^{0} x\phi(x)\,dx = \int_{-\infty}^{+\infty} x\phi(x)\,dx. \end{aligned}$$
The theorem is proved.


Theorem 3.16 Let $\xi$ be a hybrid variable with chance distribution $\Phi$. If
$$\lim_{x\to-\infty} \Phi(x) = 0, \qquad \lim_{x\to+\infty} \Phi(x) = 1,$$
and the Lebesgue-Stieltjes integral
$$\int_{-\infty}^{+\infty} x\,d\Phi(x)$$
is finite, then we have
$$E[\xi] = \int_{-\infty}^{+\infty} x\,d\Phi(x). \tag{3.35}$$

Proof: Since the Lebesgue-Stieltjes integral $\int_{-\infty}^{+\infty} x\,d\Phi(x)$ is finite, we immediately have
$$\lim_{y\to+\infty} \int_0^{y} x\,d\Phi(x) = \int_0^{+\infty} x\,d\Phi(x), \qquad \lim_{y\to-\infty} \int_y^{0} x\,d\Phi(x) = \int_{-\infty}^{0} x\,d\Phi(x),$$
and
$$\lim_{y\to+\infty} \int_y^{+\infty} x\,d\Phi(x) = 0, \qquad \lim_{y\to-\infty} \int_{-\infty}^{y} x\,d\Phi(x) = 0.$$
It follows from
$$\int_y^{+\infty} x\,d\Phi(x) \ge y\left(\lim_{z\to+\infty} \Phi(z) - \Phi(y)\right) = y\left(1 - \Phi(y)\right) \ge 0, \qquad \text{for } y > 0,$$
$$\int_{-\infty}^{y} x\,d\Phi(x) \le y\left(\Phi(y) - \lim_{z\to-\infty} \Phi(z)\right) = y\Phi(y) \le 0, \qquad \text{for } y < 0,$$
that
$$\lim_{y\to+\infty} y\left(1 - \Phi(y)\right) = 0, \qquad \lim_{y\to-\infty} y\Phi(y) = 0.$$
Let $0 = x_0 < x_1 < x_2 < \cdots < x_n = y$ be a partition of $[0, y]$. Then we have
$$\sum_{i=0}^{n-1} x_i\left(\Phi(x_{i+1}) - \Phi(x_i)\right) \to \int_0^{y} x\,d\Phi(x)$$
and
$$\sum_{i=0}^{n-1} \left(1 - \Phi(x_{i+1})\right)(x_{i+1} - x_i) \to \int_0^{y} \mathrm{Ch}\{\xi \ge r\}\,dr$$
as $\max\{|x_{i+1} - x_i| : i = 0, 1, \cdots, n-1\} \to 0$. Since
$$\sum_{i=0}^{n-1} x_i\left(\Phi(x_{i+1}) - \Phi(x_i)\right) - \sum_{i=0}^{n-1} \left(1 - \Phi(x_{i+1})\right)(x_{i+1} - x_i) = y\left(\Phi(y) - 1\right) \to 0$$
as $y \to +\infty$, this fact implies that
$$\int_0^{+\infty} \mathrm{Ch}\{\xi \ge r\}\,dr = \int_0^{+\infty} x\,d\Phi(x).$$
A similar argument proves that
$$-\int_{-\infty}^{0} \mathrm{Ch}\{\xi \le r\}\,dr = \int_{-\infty}^{0} x\,d\Phi(x).$$
It follows that equation (3.35) holds.


Theorem 3.17 Let $\xi$ be a hybrid variable with finite expected value. Then for any real numbers $a$ and $b$, we have
$$E[a\xi + b] = aE[\xi] + b. \tag{3.36}$$

Proof: Step 1: We first prove that $E[\xi + b] = E[\xi] + b$ for any real number $b$. If $b \ge 0$, we have
$$\begin{aligned} E[\xi + b] &= \int_0^{+\infty} \mathrm{Ch}\{\xi + b \ge r\}\,dr - \int_{-\infty}^{0} \mathrm{Ch}\{\xi + b \le r\}\,dr \\ &= \int_0^{+\infty} \mathrm{Ch}\{\xi \ge r - b\}\,dr - \int_{-\infty}^{0} \mathrm{Ch}\{\xi \le r - b\}\,dr \\ &= E[\xi] + \int_0^{b}\left(\mathrm{Ch}\{\xi \ge r - b\} + \mathrm{Ch}\{\xi < r - b\}\right) dr = E[\xi] + b. \end{aligned}$$
If $b < 0$, then we have
$$E[\xi + b] = E[\xi] - \int_b^{0}\left(\mathrm{Ch}\{\xi \ge r - b\} + \mathrm{Ch}\{\xi < r - b\}\right) dr = E[\xi] + b.$$

Step 2: We prove $E[a\xi] = aE[\xi]$. If $a = 0$, the equation holds trivially. If $a > 0$, we have
$$\begin{aligned} E[a\xi] &= \int_0^{+\infty} \mathrm{Ch}\{a\xi \ge r\}\,dr - \int_{-\infty}^{0} \mathrm{Ch}\{a\xi \le r\}\,dr \\ &= \int_0^{+\infty} \mathrm{Ch}\{\xi \ge r/a\}\,dr - \int_{-\infty}^{0} \mathrm{Ch}\{\xi \le r/a\}\,dr \\ &= a\int_0^{+\infty} \mathrm{Ch}\{\xi \ge t\}\,dt - a\int_{-\infty}^{0} \mathrm{Ch}\{\xi \le t\}\,dt = aE[\xi]. \end{aligned}$$
If $a < 0$, we have
$$\begin{aligned} E[a\xi] &= \int_0^{+\infty} \mathrm{Ch}\{a\xi \ge r\}\,dr - \int_{-\infty}^{0} \mathrm{Ch}\{a\xi \le r\}\,dr \\ &= \int_0^{+\infty} \mathrm{Ch}\{\xi \le r/a\}\,dr - \int_{-\infty}^{0} \mathrm{Ch}\{\xi \ge r/a\}\,dr \\ &= a\int_0^{+\infty} \mathrm{Ch}\{\xi \ge t\}\,dt - a\int_{-\infty}^{0} \mathrm{Ch}\{\xi \le t\}\,dt = aE[\xi]. \end{aligned}$$

Step 3: For any real numbers $a$ and $b$, it follows from Steps 1 and 2 that
$$E[a\xi + b] = E[a\xi] + b = aE[\xi] + b.$$
The theorem is proved.

3.5 Variance

Variance has been defined in different ways. Liu and Liu [140][141] proposed variance definitions for fuzzy random variables and random fuzzy variables. Li and Liu [103] suggested the following variance definition for hybrid variables.

Definition 3.13 Let $\xi$ be a hybrid variable with finite expected value $e$. Then the variance of $\xi$ is defined by $V[\xi] = E[(\xi - e)^2]$.

Theorem 3.18 If $\xi$ is a hybrid variable with finite expected value, and $a$ and $b$ are real numbers, then $V[a\xi + b] = a^2 V[\xi]$.

Proof: It follows from the definition of variance that
$$V[a\xi + b] = E\left[(a\xi + b - aE[\xi] - b)^2\right] = a^2 E\left[(\xi - E[\xi])^2\right] = a^2 V[\xi].$$

Theorem 3.19 Let $\xi$ be a hybrid variable with expected value $e$. Then $V[\xi] = 0$ if and only if $\mathrm{Ch}\{\xi = e\} = 1$.

Proof: If $V[\xi] = 0$, then $E[(\xi - e)^2] = 0$. Note that
$$E[(\xi - e)^2] = \int_0^{+\infty} \mathrm{Ch}\{(\xi - e)^2 \ge r\}\,dr,$$
which implies $\mathrm{Ch}\{(\xi - e)^2 \ge r\} = 0$ for any $r > 0$. Hence we have
$$\mathrm{Ch}\{(\xi - e)^2 = 0\} = 1.$$
That is, $\mathrm{Ch}\{\xi = e\} = 1$.

Conversely, if $\mathrm{Ch}\{\xi = e\} = 1$, then we have $\mathrm{Ch}\{(\xi - e)^2 = 0\} = 1$ and $\mathrm{Ch}\{(\xi - e)^2 \ge r\} = 0$ for any $r > 0$. Thus
$$V[\xi] = \int_0^{+\infty} \mathrm{Ch}\{(\xi - e)^2 \ge r\}\,dr = 0.$$
The theorem is proved.


Maximum Variance Theorem
Theorem 3.20 Let f be a convex function on [a, b], and a hybrid variable
that takes values in [a, b] and has expected value e. Then
E[f ()]

be
ea
f (a) +
f (b).
ba
ba

(3.37)

Proof: For each (, ) , we have a (, ) b and


(, ) =

b (, )
(, ) a
a+
b.
ba
ba

It follows from the convexity of f that


f ((, ))

b (, )
(, ) a
f (a) +
f (b).
ba
ba

Taking expected values on both sides, we obtain (3.37).


Theorem 3.21 (Maximum Variance Theorem) Let be a hybrid variable
that takes values in [a, b] and has expected value e. Then
V [] (e a)(b e).

(3.38)

Proof: It follows from Theorem 3.20 immediately by defining f (x) = (xe)2 .

151

Section 3.7 - Independence

3.6

Moments

Liu [129] defined the concepts of moments of both fuzzy random variables and
random fuzzy variables. Li and Liu [103] discussed the moments of hybrid
variables.
Definition 3.14 Let be a hybrid variable. Then for any positive integer k,
(a) the expected value E[ k ] is called the kth moment;
(b) the expected value E[||k ] is called the kth absolute moment;
(c) the expected value E[( E[])k ] is called the kth central moment;
(d) the expected value E[| E[]|k ] is called the kth absolute central moment.
Note that the first central moment is always 0, the first moment is just
the expected value, and the second central moment is just the variance.
Theorem 3.22 Let be a nonnegative hybrid variable, and k a positive number. Then the k-th moment
Z +
E[ k ] = k
rk1 Ch{ r}dr.
(3.39)
0

Proof: It follows from the nonnegativity of that


Z
Z
Z
k
k
k
E[ ] =
Ch{ x}dx =
Ch{ r}dr = k
0

rk1 Ch{ r}dr.

The theorem is proved.


Theorem 3.23 (Li and Liu [104]) Let be a hybrid variable that takes values in [a, b] and has expected value e. Then for any positive integer k, the
kth absolute moment and kth absolute central moment satisfy the following
inequalities,
be k ea k
|a| +
|b| ,
(3.40)
E[||k ]
ba
ba
be
ea
E[| e|k ]
(e a)k +
(b e)k .
(3.41)
ba
ba
Proof: It follows from Theorem 3.20 immediately by defining f (x) = |x|k
and f (x) = |x e|k .

3.7

Independence

This book uses the following definition of independence of hybrid variables.


Definition 3.15 (Liu [132]) The hybrid variables 1 , 2 , , n are said to
be independent if
" n
#
n
X
X
fi (i ) =
E[fi (i )]
(3.42)
E
i=1

i=1

152

Chapter 3 - Chance Theory

for any measurable functions f1 , f2 , , fn provided that the expected values


exist and are finite.
Theorem 3.24 Hybrid variables are independent if they are (independent or
not) random variables.
Proof: Suppose the hybrid variables are random variables 1 , 2 , , n .
Then f1 (1 ), f2 (2 ), , fn (n ) are also random variables for any measurable
functions f1 , f2 , , fn . It follows from the linearity of expected value operator of random variables that
E[f1 (1 ) + f2 (2 ) + + fn (n )] = E[f1 (1 )] + E[f2 (2 )] + + E[fn (n )].
Hence the hybrid variables are independent.
Theorem 3.25 Hybrid variables are independent if they are independent
fuzzy variables.
Proof: If the hybrid variables are independent fuzzy variables 1 , 2 , , n ,
then f1 (1 ), f2 (2 ), , fn (n ) are also independent fuzzy variables for any
measurable functions f1 , f2 , , fn . It follows from the linearity of expected
value operator of fuzzy variables that
E[f1 (1 ) + f2 (2 ) + + fn (n )] = E[f1 (1 )] + E[f2 (2 )] + + E[fn (n )].
Hence the hybrid variables are independent.
Theorem 3.26 Two hybrid variables are independent if one is a random
variable and another is a fuzzy variable.
Proof: Suppose that is a random variable and is a fuzzy variable. Then
f () is a random variable and g() is a fuzzy variable for any measurable
functions f and g. Thus
E[f () + g()] = E[f ()] + E[g()].
Hence and are independent hybrid variables.
Theorem 3.27 If and are independent hybrid variables with finite expected values, then we have
E[a + b] = aE[] + bE[]

(3.43)

for any real numbers a and b.


Proof: The theorem follows from the definition by defining f1 (x) = ax and
f2 (x) = bx.
Theorem 3.28 Suppose that 1 , 2 , , n are independent hybrid variables,
and f1 , f2 , , fn are measurable functions. Then f1 (1 ), f2 (2 ), , fn (n )
are independent hybrid variables.
Proof: The theorem follows from the definition because the compound of
measurable functions is also measurable.

Section 3.9 - Critical Values

3.8

153

Identical Distribution

This section introduces the concept of identical distribution of hybrid variables.


Definition 3.16 The hybrid variables and are identically distributed if
Ch{ B} = Ch{ B}

(3.44)

for any Borel set B of real numbers.


Theorem 3.29 Let and be identically distributed hybrid variables, and
f : < < a measurable function. Then f () and f () are identically distributed hybrid variables.
Proof: For any Borel set B of real numbers, we have
Ch{f () B} = Ch{ f 1 (B)} = Ch{ f 1 (B)} = Ch{f () B}.
Hence f () and f () are identically distributed hybrid variables.
Theorem 3.30 If and are identically distributed hybrid variables, then
they have the same chance distribution.
Proof: Since and are identically distributed hybrid variables, we have
Ch{ (, x]} = Ch{ (, x]} for any x. Thus and have the same
chance distribution.
Theorem 3.31 If and are identically distributed hybrid variables whose
chance density functions exist, then they have the same chance density function.
Proof: It follows from Theorem 3.30 immediately.

3.9

Critical Values

In order to rank fuzzy random variables, Liu [122] defined two critical values:
optimistic value and pessimistic value. Analogously, Liu [124] gave the concepts of critical values of random fuzzy variables. Li and Liu [103] presented
the following definition of critical values of hybrid variables.
Definition 3.17 Let be a hybrid variable, and (0, 1]. Then


sup () = sup r Ch { r}

(3.45)

is called the -optimistic value to , and




inf () = inf r Ch { r}

(3.46)

is called the -pessimistic value to .

154

Chapter 3 - Chance Theory

The hybrid variable reaches upwards of the -optimistic value sup (),
and is below the -pessimistic value inf () with chance .
Example 3.14: If a hybrid variable degenerates to a random variable ,
then
Ch{ x} = Pr{ x},

Ch{ x} = Pr{ x},

x <.

It follows from the definition of critical values that


sup () = sup (),

inf () = inf (),

(0, 1].

In other words, the critical values of hybrid variable coincide with that of
random variable.
Example 3.15: If a hybrid variable degenerates to a fuzzy variable a
, then
Ch{ x} = Cr{
a x},

Ch{ x} = Cr{
a x},

x <.

It follows from the definition of critical values that


sup () = a
sup (),

inf () = a
inf (),

(0, 1].

In other words, the critical values of hybrid variable coincide with that of
fuzzy variable.
Theorem 3.32 Let be a hybrid variable. If > 0.5, then we have
Ch{ inf ()} ,

Ch{ sup ()} .

(3.47)

Proof: It follows from the definition of -pessimistic value that there exists
a decreasing sequence {xi } such that Ch{ xi } and xi inf () as
i . Since { xi } { inf ()} and limi Ch{ xi } > 0.5, it
follows from the chance semicontinuity theorem that
Ch{ inf ()} = lim Ch{ xi } .
i

Similarly, there exists an increasing sequence {xi } such that Ch{


xi } and xi sup () as i . Since { xi } { sup ()}
and limi Ch{ xi } > 0.5, it follows from the chance semicontinuity
theorem that
Ch{ sup ()} = lim Ch{ xi } .
i

The theorem is proved.


Theorem 3.33 Let be a hybrid variable and a number between 0 and 1.
We have
(a) if c 0, then (c)sup () = csup () and (c)inf () = cinf ();
(b) if c < 0, then (c)sup () = cinf () and (c)inf () = csup ().

155

Section 3.10 - Entropy

Proof: (a) If c = 0, then the part (a) is obvious. In the case of c > 0, we
have

(c)sup () = sup{r Ch{c r} }
= c sup{r/c | Ch{ r/c} }
= csup ().
A similar way may prove (c)inf () = cinf (). In order to prove the part (b),
it suffices to prove that ()sup () = inf () and ()inf () = sup ().
In fact, we have

()sup () = sup{r Ch{ r} }
= inf{r | Ch{ r} }
= inf ().
Similarly, we may prove that ()inf () = sup (). The theorem is proved.
Theorem 3.34 Let be a hybrid variable. Then we have
(a) if > 0.5, then inf () sup ();
(b) if 0.5, then inf () sup ().

Proof: Part (a): Write ()


= (inf () + sup ())/2. If inf () < sup (),
then we have

1 Ch{ < ()}


+ Ch{ > ()}
+ > 1.
A contradiction proves inf () sup ().
Part (b): Assume that inf () > sup (). It follows from the definition of

inf () that Ch{ ()}


< . Similarly, it follows from the definition of

sup () that Ch{ ()}


< . Thus

1 Ch{ ()}
+ Ch{ ()}
< + 1.
A contradiction proves inf () sup (). The theorem is verified.
Theorem 3.35 Let be a hybrid variable. Then we have
(a) sup () is a decreasing and left-continuous function of ;
(b) inf () is an increasing and left-continuous function of .
Proof: (a) It is easy to prove that inf () is an increasing function of .
Next, we prove the left-continuity of inf () with respect to . Let {i } be
an arbitrary sequence of positive numbers such that i . Then {inf (i )}
is an increasing sequence. If the limitation is equal to inf (), then the leftcontinuity is proved. Otherwise, there exists a number z such that
lim inf (i ) < z < inf ().

Thus Ch{ z } i for each i. Letting i , we get Ch{ z } .


Hence z inf (). A contradiction proves the left-continuity of inf () with
respect to . The part (b) may be proved similarly.

156

3.10

Chapter 3 - Chance Theory

Entropy

This section provides a definition of entropy to characterize the uncertainty


of hybrid variables resulting from information deficiency.
Definition 3.18 (Li and Liu [103]) Suppose that is a discrete hybrid variable taking values in {x1 , x2 , }. Then its entropy is defined by
H[] =

S(Ch{ = xi })

(3.48)

i=1

where S(t) = t ln t (1 t) ln(1 t).


Example 3.16: Suppose that is a discrete hybrid variable taking values
in {x1 , x2 , }. If there exists some index k such that Ch{ = xk } = 1, and
0 otherwise, then its entropy H[] = 0.
Example 3.17: Suppose that is a simple hybrid variable taking values in
{x1 , x2 , , xn }. If Ch{ = xi } = 0.5 for all i = 1, 2, , n, then its entropy
H[] = n ln 2.
Theorem 3.36 Suppose that is a discrete hybrid variable taking values in
{x1 , x2 , }. Then
H[] 0
(3.49)
and equality holds if and only if is essentially a deterministic/crisp number.
Proof: The nonnegativity is clear. In addition, H[] = 0 if and only if
Ch{ = xi } = 0 or 1 for each i. That is, there exists one and only one
index k such that Ch{ = xk } = 1, i.e., is essentially a deterministic/crisp
number.
This theorem states that the entropy of a hybrid variable reaches its
minimum 0 when the uncertain variable degenerates to a deterministic/crisp
number. In this case, there is no uncertainty.
Theorem 3.37 Suppose that is a simple hybrid variable taking values in
{x1 , x2 , , xn }. Then
H[] n ln 2
(3.50)
and equality holds if and only if Ch{ = xi } = 0.5 for all i = 1, 2, , n.
Proof: Since the function S(t) reaches its maximum ln 2 at t = 0.5, we have
H[] =

n
X

S(Ch{ = xi }) n ln 2

i=1

and equality holds if and only if Ch{ = xi } = 0.5 for all i = 1, 2, , n.


This theorem states that the entropy of a hybrid variable reaches its
maximum when the hybrid variable is an equipossible one. In this case,
there is no preference among all the values that the hybrid variable will take.

157

Section 3.12 - Inequalities

3.11

Distance

Definition 3.19 (Li and Liu [103]) The distance between hybrid variables
and is defined as
d(, ) = E[| |].
(3.51)
Theorem 3.38 (Li and Liu [103]) Let , , be hybrid variables, and let
d(, ) be the distance. Then we have
(a) (Nonnegativity) d(, ) 0;
(b) (Identification) d(, ) = 0 if and only if = ;
(c) (Symmetry) d(, ) = d(, );
(d) (Triangle Inequality) d(, ) 2d(, ) + 2d(, ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the chance subadditivity theorem
that
Z +
d(, ) =
Ch {| | r} dr
0

Ch {| | + | | r} dr
0

Ch {{| | r/2} {| | r/2}} dr


0

(Ch{| | r/2} + Ch{| | r/2}) dr


0

Ch{| | r/2}dr +

=
0

Ch{| | r/2}dr
0

= 2E[| |] + 2E[| |] = 2d(, ) + 2d(, ).

3.12

Inequalities

Yang and Liu [237] proved some important inequalities for fuzzy random
variables, and Zhu and Liu [262] presented several inequalities for random
fuzzy variables. Li and Liu [103] also verified the following inequalities for
hybrid variables.
Theorem 3.39 Let be a hybrid variable, and f a nonnegative function. If
f is even and increasing on [0, ), then for any given number t > 0, we have
Ch{|| t}

E[f ()]
.
f (t)

(3.52)

158

Chapter 3 - Chance Theory

Proof: It is clear that Ch{|| f 1 (r)} is a monotone decreasing function


of r on [0, ). It follows from the nonnegativity of f () that
Z

Ch{f () r}dr

E[f ()] =
0

Ch{|| f 1 (r)}dr

=
0

f (t)

Ch{|| f 1 (r)}dr

f (t)

dr Ch{|| f 1 (f (t))}

= f (t) Ch{|| t}
which proves the inequality.
Theorem 3.40 (Markov Inequality) Let be a hybrid variable. Then for
any given numbers t > 0 and p > 0, we have
Ch{|| t}

E[||p ]
.
tp

(3.53)

Proof: It is a special case of Theorem 3.39 when f (x) = |x|p .


Theorem 3.41 (Chebyshev Inequality) Let be a hybrid variable whose variance V [] exists. Then for any given number t > 0, we have
Ch {| E[]| t}

V []
.
t2

(3.54)

Proof: It is a special case of Theorem 3.39 when the hybrid variable is


replaced with E[], and f (x) = x2 .
Theorem 3.42 (H
olders Inequality) Let p and q be positive real numbers
with 1/p + 1/q = 1, and let and be independent hybrid variables with
E[||p ] < and E[||q ] < . Then we have
p
p
E[||] p E[||p ] q E[||q ].
(3.55)
Proof: The inequality holds trivially if at least one of and is zero a.s. Now
we assumeE[||p ] > 0 and E[||q ] > 0. It is easy to prove that the function

f (x, y) = p x q y is a concave function on D = {(x, y) : x 0, y 0}. Thus


for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real numbers
a and b such that
f (x, y) f (x0 , y0 ) a(x x0 ) + b(y y0 ),

(x, y) D.

159

Section 3.13 - Convergence Concepts

Letting x0 = E[||p ], y0 = E[||q ], x = ||p and y = ||q , we have


f (||p , ||q ) f (E[||p ], E[||q ]) a(||p E[||p ]) + b(||q E[||q ]).
Taking the expected values on both sides, we obtain
E[f (||p , ||q )] f (E[||p ], E[||q ]).
Hence the inequality (3.55) holds.
Theorem 3.43 (Minkowski Inequality) Let p be a real number with p
1, and let and be independent hybrid variables with E[||p ] < and
E[||p ] < . Then we have
p
p
p
p
E[| + |p ] p E[||p ] + p E[||p ].
(3.56)
Proof: The inequality holds trivially if at least one of and is zero a.s. Now
we assume
E[||p ] > 0 and E[||p ] > 0. It is easy to prove that the function

f (x, y) = ( p x + p y)p is a concave function on D = {(x, y) : x 0, y 0}.


Thus for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real
numbers a and b such that
f (x, y) f (x0 , y0 ) a(x x0 ) + b(y y0 ),

(x, y) D.

Letting x0 = E[||p ], y0 = E[||p ], x = ||p and y = ||p , we have


f (||p , ||p ) f (E[||p ], E[||p ]) a(||p E[||p ]) + b(||p E[||p ]).
Taking the expected values on both sides, we obtain
E[f (||p , ||p )] f (E[||p ], E[||p ]).
Hence the inequality (3.56) holds.
Theorem 3.44 (Jensens Inequality) Let be a hybrid variable, and f : <
< a convex function. If E[] and E[f ()] are finite, then
f (E[]) E[f ()].

(3.57)

Especially, when f (x) = |x|p and p 1, we have |E[]|p E[||p ].


Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) f (y) k (x y). Replacing x with and y with E[], we obtain
f () f (E[]) k ( E[]).
Taking the expected values on both sides, we have
E[f ()] f (E[]) k (E[] E[]) = 0
which proves the inequality.

160

3.13

Chapter 3 - Chance Theory

Convergence Concepts

Liu [129] gave the convergence concepts of fuzzy random sequence, and Zhu
and Liu [264] introduced the convergence concepts of random fuzzy sequence.
Li and Liu [103] discussed the convergence concepts of hybrid sequence: convergence almost surely (a.s.), convergence in chance, convergence in mean,
and convergence in distribution.
Table 3.1: Relationship among Convergence Concepts
Convergence
in Mean

Convergence

in Chance

Convergence
in Distribution

Definition 3.20 Suppose that , 1 , 2 , are hybrid variables defined on


the chance space (, P, Cr) (, A, Pr). The sequence {i } is said to be
convergent a.s. to if there exists an event with Ch{} = 1 such that
lim |i (, ) (, )| = 0

(3.58)

for every (, ) . In that case we write i , a.s.


Definition 3.21 Suppose that , 1 , 2 , are hybrid variables. We say that
the sequence {i } converges in chance to if
lim Ch {|i | } = 0

(3.59)

for every > 0.


Definition 3.22 Suppose that , 1 , 2 , are hybrid variables with finite
expected values. We say that the sequence {i } converges in mean to if
lim E[|i |] = 0.

(3.60)

In addition, the sequence {i } is said to converge in mean square to if


lim E[|i |2 ] = 0.

(3.61)

Definition 3.23 Suppose that , 1 , 2 , are the chance distributions of


hybrid variables , 1 , 2 , , respectively. We say that {i } converges in distribution to if i at any continuity point of .

161

Section 3.13 - Convergence Concepts

Convergence Almost Surely vs. Convergence in Chance


Example 3.18: Convergence a.s. does not imply convergence in chance.
Take a credibility space (, P, Cr) to be {1 , 2 , } with Cr{j } = j/(2j +1)
for j = 1, 2, and take an arbitrary probability space (, A, Pr). Then we
define hybrid variables as
(
i, if j = i
i (j , ) =
0, otherwise
for i = 1, 2, and 0. The sequence {i } convergence a.s. to . However,
for some small number > 0, we have
Ch{|i | } = Cr{|i | } =

i
1
.
2i + 1
2

That is, the sequence {i } does not converge in chance to .


Example 3.19: Convergence in chance does not imply convergence a.s.
Take an arbitrary credibility space (, P, Cr) and take a probability space
(, A, Pr) to be [0, 1] with Borel algebra and Lebesgue measure. For any
positive integer i, there is an integer j such that i = 2j + k, where k is an
integer between 0 and 2j 1. Then we define hybrid variables as
(
i, if k/2j (k + 1)/2j
i (, ) =
0, otherwise
for i = 1, 2, and 0. For some small number > 0, we have
Ch{|i | } = Pr{|i | } =

1
0
2i

as i . That is, the sequence {i } converges in chance to . However, for


any [0, 1], there is an infinite number of intervals of the form [k/2j , (k +
1)/2j ] containing . Thus i (, ) does converges to 0. In other words, the
sequence {i } does not converge a.s. to .
Convergence in Mean vs. Convergence in Chance
Theorem 3.45 Suppose that , 1 , 2 , are hybrid variables. If {i } converges in mean to , then {i } converges in chance to .
Proof: It follows from the Markov inequality that for any given number
> 0, we have
E[|i |]
Ch{|i | }
0

as i . Thus {i } converges in chance to . The theorem is proved.

162

Chapter 3 - Chance Theory

Example 3.20: Convergence in chance does not imply convergence in mean.


Take a credibility space (, P, Cr) to be {1 , 2 , } with Cr{1 } = 1/2
and Cr{j } = 1/j for j = 2, 3, and take an arbitrary probability space
(, A, Pr). The hybrid variables are defined by
(
i, if j = i
i (j , ) =
0, otherwise
for i = 1, 2, and 0. For some small number > 0, we have
1
0.
i
That is, the sequence {i } converges in chance to . However,
Ch{|i | } = Cr{|i | } =

E[|i |] = 1,

i.

That is, the sequence {i } does not converge in mean to .


Convergence Almost Surely vs. Convergence in Mean
Example 3.21: Convergence a.s. does not imply convergence in mean.
Take an arbitrary credibility space (, P, Cr) and take a probability space
(, A, Pr) to be {1 , 2 , } with Pr{j } = 1/2j for j = 1, 2, The hybrid
variables are defined by
(
2i , if j = i
i (, j ) =
0, otherwise
for i = 1, 2, and 0. Then i converges a.s. to . However, the sequence
{i } does not converges in mean to because E[|i |] 1.
Example 3.22: Convergence in chance does not imply convergence a.s.
Take an arbitrary credibility space (, P, Cr) and take a probability space
(, A, Pr) to be [0, 1] with Borel algebra and Lebesgue measure. For any
positive integer i, there is an integer j such that i = 2j + k, where k is an
integer between 0 and 2j 1. The hybrid variables are defined by
(
i, if k/2j (k + 1)/2j
i (, ) =
0, otherwise
for i = 1, 2, and 0. Then
1
0.
2j
That is, the sequence {i } converges in mean to . However, for any [0, 1],
there is an infinite number of intervals of the form [k/2j , (k+1)/2j ] containing
. Thus i (, ) does converges to 0. In other words, the sequence {i } does
not converge a.s. to .
E[|i |] =

Section 3.13 - Convergence Concepts

163

Convergence in Chance vs. Convergence in Distribution


Theorem 3.46 Suppose , 1 , 2 , are hybrid variables. If {i } converges
in chance to , then {i } converges in distribution to .
Proof: Let x be a given continuity point of the distribution . On the one
hand, for any y > x, we have
{i x} = {i x, y} {i x, > y} { y} {|i | y x}.
It follows from the chance subadditivity theorem that
i (x) (y) + Ch{|i | y x}.
Since {i } converges in chance to , we have Ch{|i | y x} 0 as
i . Thus we obtain lim supi i (x) (y) for any y > x. Letting
y x, we get
lim sup i (x) (x).
(3.62)
i

On the other hand, for any z < x, we have


{ z} = {i x, z} {i > x, z} {i x} {|i | x z}
which implies that
(z) i (x) + Ch{|i | x z}.
Since Ch{|i | x z} 0, we obtain (z) lim inf i i (x) for any
z < x. Letting z x, we get
(x) lim inf i (x).
i

(3.63)

It follows from (3.62) and (3.63) that i (x) (x). The theorem is proved.
Example 3.23: Convergence in distribution does not imply convergence
in chance. Take a credibility space (, P, Cr) to be {1 , 2 } with Cr{1 } =
Cr{2 } = 1/2 and take an arbitrary probability space (, A, Pr). We define
a hybrid variables as
(
1, if = 1
(, ) =
1, if = 2 .
We also define i = for i = 1, 2, . Then i and have the same chance
distribution. Thus {i } converges in distribution to . However, for some
small number > 0, we have
Ch{|i | } = Cr{|i | } = 1.
That is, the sequence {i } does not converge in chance to .

164

Chapter 3 - Chance Theory

Convergence Almost Surely vs. Convergence in Distribution


Example 3.24: Convergence in distribution does not imply convergence
a.s. Take a credibility space to be (, P, Cr) to be {1 , 2 } with Cr{1 } =
Cr{2 } = 1/2 and take an arbitrary probability space (, A, Pr). We define
a hybrid variable as
(
1, if = 1
(, ) =
1, if = 2 .
We also define i = for i = 1, 2, . Then i and have the same chance
distribution. Thus {i } converges in distribution to . However, the sequence
{i } does not converge a.s. to .
Example 3.25: Convergence a.s. does not imply convergence in chance.
Take a credibility space (, P, Cr) to be {1 , 2 , } with Cr{j } = j/(2j +1)
for j = 1, 2, and take an arbitrary probability space (, A, Pr). The hybrid
variables are defined by
(
i, if j = i
i (j , ) =
0, otherwise
for i = 1, 2, and 0. Then the sequence {i } converges a.s. to .
However, the chance distributions of i are

0,
if x < 0

(i + 1)/(2i + 1), if 0 x < i


i (x) =

1,
if x i
for i = 1, 2, , respectively. The chance distribution of is

0, if x < 0
(x) =
1, if x 0.
It is clear that i (x) does not converge to (x) at x > 0. That is, the
sequence {i } does not converge in distribution to .

3.14

Conditional Chance

We consider the chance measure of an event A after it has been learned that
some other event B has occurred. This new chance measure of A is called
the conditional chance measure of A given B.
In order to define a conditional chance measure Ch{A|B}, at first we
have to enlarge Ch{A B} because Ch{A B} < 1 for all events whenever
Ch{B} < 1. It seems that we have no alternative but to divide Ch{AB} by

165

Section 3.14 - Conditional Chance

Ch{B}. Unfortunately, Ch{A B}/Ch{B} is not always a chance measure.


However, the value Ch{A|B} should not be greater than Ch{A B}/Ch{B}
(otherwise the normality will be lost), i.e.,
Ch{A|B}

Ch{A B}
.
Ch{B}

(3.64)

On the other hand, in order to preserve the self-duality, we should have


Ch{A|B} = 1 Ch{Ac |B} 1

Ch{Ac B}
.
Ch{B}

(3.65)

Furthermore, since (A B) (Ac B) = B, we have Ch{B} Ch{A B} +


Ch{Ac B} by using the chance subadditivity theorem. Thus
01

Ch{A B}
Ch{Ac B}

1.
Ch{B}
Ch{B}

(3.66)

Hence any numbers between 1 Ch{Ac B}/Ch{B} and Ch{A B}/Ch{B}


are reasonable values that the conditional chance may take. Based on the
maximum uncertainty principle, we have the following conditional chance
measure.
Definition 3.24 (Li and Liu [106]) Let (, P, Cr) (, A, Pr) be a chance
space and A, B two events. Then the conditional chance measure of A given
B is defined by

Ch{A|B} =

Ch{A B}
,
Ch{B}

if

Ch{A B}
< 0.5
Ch{B}

Ch{Ac B}
Ch{Ac B}
, if
< 0.5
Ch{B}
Ch{B}
0.5,

(3.67)

otherwise

provided that Ch{B} > 0.

Remark 3.6: It follows immediately from the definition of conditional


chance that
1

Ch{A B}
Ch{Ac B}
Ch{A|B}
.
Ch{B}
Ch{B}

(3.68)

Furthermore, it is clear that the conditional chance measure obeys the maximum uncertainty principle.

166

Chapter 3 - Chance Theory

Remark 3.7: Let X and Y be events in the credibility space. Then the
conditional chance measure of X given Y is

Ch{X |Y } =

Cr{X Y }
,
Cr{Y }

if

Cr{X Y }
< 0.5
Cr{Y }

Cr{X c Y }
Cr{X c Y }
, if
< 0.5
Cr{Y }
Cr{Y }
0.5,

otherwise

which is just the conditional credibility of X given Y .


Remark 3.8: Let X and Y be events in the probability space. Then the
conditional chance measure of X given Y is
Pr{X Y }
Pr{Y }

Ch{ X| Y } =

which is just the conditional probability of X given Y .


Example 3.26: Let and be two hybrid variables. Then we have

Ch{ = x, = y}
Ch{ = x, = y}

,
if
< 0.5

Ch{ = y}
Ch{ = y}

Ch{ 6= x, = y}
Ch{ 6= x, = y}
Ch { = x| = y} =
, if
< 0.5
1

Ch{
=
y}
Ch{ = y}

0.5,
otherwise
provided that Ch{ = y} > 0.
Theorem 3.47 (Li and Liu [106]) Conditional chance measure is a type of
uncertain measure. That is, conditional chance measure is normal, increasing, self-dual and countably subadditive.
Proof: At first, the conditional chance measure Ch{|B} is normal, i.e.,
Ch{ |B} = 1

Ch{}
= 1.
Ch{B}

For any events A1 and A2 with A1 A2 , if


Ch{A2 B}
Ch{A1 B}

< 0.5,
Ch{B}
Ch{B}
then
Ch{A1 |B} =

Ch{A1 B}
Ch{A2 B}

= Ch{A2 |B}.
Ch{B}
Ch{B}

167

Section 3.14 - Conditional Chance

If

Ch{A1 B}
Ch{A2 B}
0.5
,
Ch{B}
Ch{B}

then Ch{A1 |B} 0.5 Ch{A2 |B}. If


0.5 <

Ch{A1 B}
Ch{A2 B}

,
Ch{B}
Ch{B}

then we have




Ch{Ac1 B}
Ch{Ac2 B}
Ch{A1 |B} = 1
0.5 1
0.5 = Ch{A2 |B}.
Ch{B}
Ch{B}
This means that Ch{|B} is increasing. For any event A, if
Ch{A B}
0.5,
Ch{B}

Ch{Ac B}
0.5,
Ch{B}

then we have Ch{A|B} + Ch{Ac |B} = 0.5 + 0.5 = 1 immediately. Otherwise,


without loss of generality, suppose
Ch{Ac B}
Ch{A B}
< 0.5 <
,
Ch{B}
Ch{B}
then we have
Ch{A|B} + Ch{Ac |B} =



Ch{A B}
Ch{A B}
+ 1
= 1.
Ch{B}
Ch{B}

That is, Ch{|B} is self-dual. Finally, for any countable sequence {Ai } of
events, if Ch{Ai |B} < 0.5 for all i, it follows from the countable subadditivity
of chance measure that
)
(

[
X
(
) Ch
Ai B
Ch{Ai B}

[
X
i=1
i=1
Ch

=
Ai B
Ch{Ai |B}.
Ch{B}
Ch{B}
i=1
i=1
Suppose there is one term greater than 0.5, say
Ch{A1 |B} 0.5,

Ch{Ai |B} < 0.5,

i = 2, 3,

If Ch{i Ai |B} = 0.5, then we immediately have


(
)

[
X
Ch
Ai B
Ch{Ai |B}.
i=1

i=1

If Ch{i Ai |B} > 0.5, we may prove the above inequality by the following
facts:
!

[
\
c
c
A1 B
(Ai B)
Ai B ,
i=2

i=1

168

Chapter 3 - Chance Theory

Ch{Ac1

B}

Ch{Ai B} + Ch

(
\

i=2

Ch

(
[

)
Aci

i=1

Ch

)
Ai |B

=1

i=1

(
\

)
Aci

i=1

Ch{B}

Ch{Ac1 B}
+
Ch{Ai |B} 1
Ch{B}
i=1

Ch{Ai B}

i=2

Ch{B}

If there are at least two terms greater than 0.5, then the countable subadditivity is clearly true. Thus Ch{|B} is countably subadditive. Hence Ch{|B}
is an uncertain measure.
Definition 3.25 (Li and Liu [106]) The conditional chance distribution :
< [0, 1] of a hybrid variable given B is defined by
(x|B) = Ch { x|B}

(3.69)

provided that Ch{B} > 0.


Example 3.27: Let and be hybrid variables. Then the conditional
chance distribution of given = y is

(x| = y) =

Ch{ x, = y}
,
Ch{ = y}

if

Ch{ x, = y}
< 0.5
Ch{ = y}

Ch{ > x, = y}
Ch{ > x, = y}
, if
< 0.5
Ch{ = y}
Ch{ = y}
0.5,

otherwise

provided that Ch{ = y} > 0.


Definition 3.26 (Li and Liu [106]) The conditional chance density function
of a hybrid variable given B is a nonnegative function such that
Z x
(x|B) =
(y|B)dy, x <,
(3.70)

(y|B)dy = 1

where (x|B) is the conditional chance distribution of given B.

(3.71)

169

Section 3.15 - Hybrid Process

Definition 3.27 (Li and Liu [106]) Let be a hybrid variable. Then the
conditional expected value of given B is defined by
Z +
Z 0
E[|B] =
Ch{ r|B}dr
Ch{ r|B}dr
(3.72)

provided that at least one of the two integrals is finite.


Following conditional chance and conditional expected value, we also have
conditional variance, conditional moments, conditional critical values, conditional entropy as well as conditional convergence.

3.15

Hybrid Process

Definition 3.28 (Liu [133]) Let T be an index set, and (, P, Cr)(, A, Pr)
a chance space. A hybrid process is a measurable function from T (, P, Cr)
(, A, Pr) to the set of real numbers, i.e., for each t T and any Borel set
B of real numbers, the set

(3.73)
{(, ) X(t, , ) B}
is an event.
That is, a hybrid process X( , ) is a function of three variables such
that the function Xt (, ) is a hybrid variable for each t . For each fixed
( , ), the function Xt ( , ) is called a sample path of the hybrid process.
A hybrid process Xt (, ) is said to be sample-continuous if the sample path
is continuous for almost all (, ).
Definition 3.29 (Liu [133]) A hybrid process Xt is said to have independent
increments if
Xt1 Xt0 , Xt2 Xt1 , , Xtk Xtk1
(3.74)
are independent hybrid variables for any times t0 < t1 < < tk . A hybrid
process Xt is said to have stationary increments if, for any given t > 0, the
Xs+t Xs are identically distributed hybrid variables for all s > 0.
Example 3.28: Let Xt be a fuzzy process and let Yt be a stochastic process.
Then Xt + Yt is a hybrid process.
Hybrid Renewal Process
Definition 3.30 (Liu [133]) Let 1 , 2 , be iid positive hybrid variables.
Define S0 = 0 and Sn = 1 + 2 + + n for n 1. Then the hybrid process


Nt = max n Sn t
(3.75)
n0

is called a hybrid renewal process.

170

Chapter 3 - Chance Theory

If 1 , 2 , denote the interarrival times of successive events. Then Sn


can be regarded as the waiting time until the occurrence of the nth event,
and Nt is the number of renewals in (0, t]. Each sample path of Nt is a
right-continuous and increasing step function taking only nonnegative integer
values. Furthermore, the size of each jump of Nt is always 1. In other words,
Nt has at most one renewal at each time. In particular, Nt does not jump at
time 0. Since Nt n if and only if Sn t, we have
Ch{Nt n} = Ch{Sn t}.

(3.76)

Theorem 3.48 (Liu [133]) Let Nt be a hybrid renewal process. Then we


have

X
E[Nt ] =
Ch{Sn t}.
(3.77)
n=1

Proof: Since Nt takes only nonnegative integer values, we have


Z
Z n
X
E[Nt ] =
Ch{Nt r}dr =
Ch{Nt r}dr
0

n=1

Ch{Nt n} =

n=1

n1

Ch{Sn t}.

n=1

The theorem is proved.


D Process
Definition 3.31 (Liu [133]) Let Bt be a Brownian motion, and let Ct be a
C process. Then
Dt = (Bt , Ct )
(3.78)
is called a D process. The D process is said to be standard if both Bt and Ct
are standard.
Definition 3.32 (Liu [133]) Let Bt be a standard Brownian motion, and let
Ct be a standard C process. Then the hybrid process
Dt = et + 1 Bt + 2 Ct

(3.79)

is called a scalar D process. The parameter e is called the drift coefficient, 1


is called the random diffusion coefficient, and 2 is called the fuzzy diffusion
coefficient.
Definition 3.33 (Liu [133]) Let Bt be a standard Brownian motion, and let
Ct be a standard C process. Then the hybrid process
Gt = exp(et + 1 Bt + 2 Ct )
is called a geometric D process.

(3.80)

171

Section 3.16 - Hybrid Calculus

C. t

...
..........
..
...
.....
......
....
............ ...
.. ..
.....
..
.. .....
.. ......
.....
.
.
.
.
.
.
...
.
.
.
.
. ....
........................
......... ........
...
...
...
..... ...........
.......
...
...
......
.... ......... .............. ........... .
...
........
...
..... ....
... ........
........
.
.
...
.
.
............ .....
.
.
.
...
.
.
.
.
..........
......
.........
...
.
.
.
.
.
.
.
.
.
...
.
.
.
..... ... ...
.... ...
.
.
.
.
.
.
.....
.
.
.
...
.
.
.
.
.
.
.
.
.........................
. ...
......
..... .... ...
.
.
.
.
.
.
.
.
.
.
.
.
.
.
... .....
.
.
.
.
... ...
.....
....
........................
.......
.
.
.
.
.
.
.
.
.
... ......................................
.
................
.....
... ... ...
...
......... ..................
... ....... .......... ............... ............. ......
......
... ... .............
........
.
.
.
.
.
.
.
.
...... ........... ..... ......
.
.
.
.
.
.
.
...
.
.
...........
......... .... ..... ......
..... .. ..........................................
..
...
...
................................. .. ........................................................................................ ..............
...... ....
..
........... .
... .................................................
.
.......
..
..
.....
............
... .............
........................................
...
............. ..........
.
.
.
.
.
.
.
.
.
.
.
...
.
.
.
.
....................
........
......
...
.....
..
...
.......
.. ...
...
...
...
.........
...................... .....
.. ....................... ....
...
.
.
.
.
.
.
.
.
.
.... .......................
.
.
.
.
...
.
...................
..... .......
...
.... ........
......... ................
...
................... ..... ...........
....
.. ...... ...... ........
...
.
... .
.
.
.
.
.
.
.
.
.
.
...
.
.
....... ..... ................
...
..... ...
............
...
...... ....
...
...... ..
......
...
....................................................................................................................................................................................................................................................................
...

Bt

Figure 3.2: A Sample Path of Standard D Process


A Hybrid Stock Model
Liu [133] assumed that stock price follows geometric D process, and presented
a hybrid stock model in which the bond price Xt and the stock price Yt are
determined by
(
Xt = X0 exp(rt)
(3.81)
Yt = Y0 exp(et + 1 Bt + 2 Ct )
where r is the riskless interest rate, e is the stock drift, 1 is the random
stock diffusion, 2 is the fuzzy stock diffusion, Bt is a standard Brownian
motion, and Ct is a standard C process. For exploring further development
of hybrid stock models, the interested readers may consult Gao [51].

3.16

Hybrid Calculus

Let Dt be a standard D process, and dt an infinitesimal time interval. Then


dDt = Dt+dt Dt = (dBt , dCt )
is a hybrid process.
Definition 3.34 (Liu [133]) Let Xt = (Yt , Zt ) where Yt and Zt are scalar
hybrid processes, and let Dt = (Bt , Ct ) be a standard D process. For any
partition of closed interval [a, b] with a = t1 < t2 < < tk+1 = b, the mesh
is written as
= max |ti+1 ti |.
1ik

Then the hybrid integral of Xt with respect to Dt is


Z b
k
X

Xt dDt = lim
Yti (Bti+1 Bti ) + Zti (Cti+1 Cti )
a

i=1

(3.82)

172

Chapter 3 - Chance Theory

provided that the limit exists in mean square and is a hybrid variable.
Remark 3.9: The hybrid integral may also be written as follows,
Z b
Z b
Xt dDt =
(Yt dBt + Zt dCt ) .
a

(3.83)

Example 3.29: Let Bt be a standard Brownian motion, and Ct a standard


C process. Then
Z s
(1 dBt + 2 dCt ) = 1 Bs + 2 Cs
0

where 1 and 2 are constants, random variables, fuzzy variables, or hybrid


variables.
Example 3.30: Let Bt be a standard Brownian motion, and Ct a standard
C process. Then
Z s
1
(Bt dBt + Ct dCt ) = (Bs2 s + Cs2 ).
2
0
Example 3.31: Let Bt be a standard Brownian motion, and Ct a standard
C process. Then for any partition 0 = t1 < t2 < < tk+1 = s, write
Bi = Bti+1 Bti ,

Ci = Cti+1 Cti

and obtain
k
X

Bs Cs =

Bti+1 Cti+1 Bti Cti

i=1
k
X

Bti Ci +

i=1
Z s

k
X

Cti Bi +

i=1
Z s

Bt dCt +
0

k
X

Bi Ci

i=1

Ct dBt + 0
0

as 0. That is,
Z

(Ct dBt + Bt dCt ) = Bs Cs .


0

Theorem 3.49 (Liu [133]) Let Bt be a standard Brownian motion, Ct a


standard C process, and h(t, b, c) a twice continuously differentiable function.
Define Xt = h(t, Bt , Ct ). Then we have the following chain rule
dXt =

h
h
(t, Bt , Ct )dt +
(t, Bt , Ct )dBt
t
b
h
1 2h
+ (t, Bt , Ct )dCt +
(t, Bt , Ct )dt.
c
2 b2

(3.84)

Section 3.17 - Hybrid Differential Equation

173

Proof: Since the function h is twice continuously differentiable, by using


Taylor series expansion, the infinitesimal increment of Xt has a second-order
approximation
Xt =

h
h
h
(t, Bt , Ct )t +
(t, Bt , Ct )Bt +
(t, Bt , Ct )Ct
t
b
c

1 2h
1 2h
1 2h
(t, Bt , Ct )(t)2 +
(t, Bt , Ct )(Bt )2 +
(t, Bt , Ct )(Ct )2
2
2
2 t
2 b
2 c2

2h
2h
2h
(t, Bt , Ct )tBt +
(t, Bt , Ct )tCt +
(t, Bt , Ct )Bt Ct .
tb
tc
bc

Since we can ignore the terms (t)2 , (Ct )2 , tBt , tCt , Bt Ct and
replace (Bt )2 with t, the chain rule is obtained because it makes
Z s
Z s
Z s
Z
1 s 2h
h
h
h
dt +
dBt +
dCt +
dt
Xs = X0 +
2 0 b2
0 t
0 b
0 c
for any s 0.
Remark 3.10: The infinitesimal increments dBt and dCt in (3.84) may be
replaced with the derived D process
dYt = ut dt + v1t dBt + v2t dCt

(3.85)

where ut and v2t are absolutely integrable hybrid processes, and v1t is a
square integrable hybrid process, thus producing
dh(t, Yt ) =

h
h
1 2h
2
(t, Yt )dt +
(t, Yt )dYt +
(t, Yt )v1t
dt.
t
b
2 b2

(3.86)

Remark 3.11: Assume that B1t , B2t , , Bmt are standard Brownian motions, C1t , C2t , , Cnt are standard C processes, and
h(t, b1 , b2 , , bm , c1 , c2 , , cn )
is a twice continuously differentiable function. Define
Xt = h(t, B1t , B2t , , Bmt , C1t , C2t , , Cnt ).
You [244] proved the following chain rule
dXt =

m
n
m
X
X
h
h
h
1 X 2h
dt +
dBit +
dCjt +
dt.
t
bi
cj
2 i=1 b2i
i=1
j=1

(3.87)

Example 3.32: Applying the chain rule, we obtain the following formulas,
d(Bt Ct ) = Ct dBt + Bt dCt ,
d(tBt Ct ) = Bt Ct dt + tCt dBt + tBt dCt ,
2

d(t + Bt2 + Ct2 ) = 2tdt + 2Bt dBt + 2Ct dCt + dt.

174

3.17

Chapter 3 - Chance Theory

Hybrid Differential Equation

Hybrid differential equation is a type of differential equation driven by both


Brownian motion and C process.
Definition 3.35 (Liu [133]) Suppose Bt is a standard Brownian motion, Ct
is a standard C process, and f, g1 , g2 are some given functions. Then
dXt = f (t, Xt )dt + g1 (t, Xt )dBt + g2 (t, Xt )dCt

(3.88)

is called a hybrid differential equation. A solution is a hybrid process Xt that


satisfies (3.88) identically in t.
Remark 3.12: Note that there is no precise definition for the terms dXt , dt,
dBt and dCt in the hybrid differential equation (3.88). The mathematically
meaningful form is the hybrid integral equation
Z s
Z s
Z s
Xs = X0 +
f (t, Xt )dt +
g1 (t, Xt )dBt +
g2 (t, Xt )dCt .
(3.89)
0

However, the differential form is more convenient for us. This is the main
reason why we accept the differential form.
Example 3.33: Let Bt be a standard Brownian motion, and let a
and b be
two fuzzy variables. Then the hybrid differential equation
dXt = a
dt + bdBt
has a solution
Xt = a
t + bBt .
The hybrid differential equation
dXt = a
Xt dt + bXt dBt
has a solution
Xt = exp

b2
a

!
t + bBt

Example 3.34: Let Ct be a standard C process, and let and be two


random variables. Then the hybrid differential equation
dXt = dt + dCt
has a solution
Xt = t + Ct .

175

Section 3.17 - Hybrid Differential Equation

The hybrid differential equation


dXt = Xt dt + Xt dCt
has a solution
Xt = exp (t + Ct ) .
Example 3.35: Let Bt be a standard Brownian motion, and Ct a standard
C process. Then the hybrid differential equation
dXt = adt + bdBt + cdCt
has a solution
Xt = at + bBt + cCt
which is just a scalar D process.
Example 3.36: Let Bt be a standard Brownian motion, and Ct a standard
C process. Then the hybrid differential equation
dXt = aXt dt + bXt dBt + cXt dCt
has a solution


Xt = exp

which is just a geometric D process.

b2
2


t + bBt + cCt

Chapter 4

Uncertainty Theory
A classical measure is essentially a set function satisfying nonnegativity and

countable additivity axioms. Classical measure theory, developed by Emile


Borel and Henri Lebesgue around 1900, has been widely applied in both
theory and practice.
However, the additivity axiom of classical measure theory has been challenged by many mathematicians. The earliest challenge was from the theory
of capacities by Choquet [22] in which monotonicity and continuity axioms
were assumed. Sugeno [214] generalized classical measure theory to fuzzy
measure theory by replacing additivity axiom with monotonicity and semicontinuity axioms.
In order to deal with general uncertainty, self-duality plus countable
subadditivity is much more important than continuity and semicontinuity. For this reason, Liu [132] founded an uncertainty theory that is a branch
of mathematics based on normality, monotonicity, self-duality, and countable
subadditivity axioms. Uncertainty theory provides the commonness of probability theory, credibility theory and chance theory.
The emphasis in this chapter is mainly on uncertain measure, uncertainty
space, uncertain variable, uncertainty distribution, expected value, variance,
moments, independence, identical distribution, critical values, entropy, distance, convergence almost surely, convergence in measure, convergence in
mean, convergence in distribution, conditional uncertainty, uncertain process,
canonical process, uncertain calculus, and uncertain differential equation.

4.1

Uncertainty Space

Let be a nonempty set, and let L be a -algebra over . Each element L


is called an event. In order to present an axiomatic definition of uncertain
measure, it is necessary to assign to each event a number M{} which
indicates the level that will occur. In order to ensure that the number

178

Chapter 4 - Uncertainty Theory

M{} has certain mathematical properties, Liu [132] proposed the following
four axioms:

M{} = 1.
Axiom 2. (Monotonicity) M{1 } M{2 } whenever 1 2 .
Axiom 3. (Self-Duality) M{} + M{c } = 1 for any event .
Axiom 1. (Normality)

Axiom 4. (Countable Subadditivity) For every countable sequence of events


{i }, we have
( )

[
X
M
i
M{i }.
(4.1)
i=1

i=1

.
Remark 4.1: Pathology occurs if self-duality axiom is not assumed. For
example, we define a set function that takes value 1 for each set. Then it
satisfies all axioms but self-duality. Is it not strange if such a set function
serves as a measure?
Remark 4.2: Pathology occurs if subadditivity is not assumed. For example, suppose that a universal set contains 3 elements. We define a set
function that takes value 0 for each singleton, and 1 for each set with at least
2 elements. Then such a set function satisfies all axioms but subadditivity.
Is it not strange if such a set function serves as a measure?
Remark 4.3: Pathology occurs if countable subadditivity axiom is replaced
with finite subadditivity axiom. For example, assume the universal set consists of all real numbers. We define a set function that takes value 0 if the
set is bounded, 0.5 if both the set and complement are unbounded, and 1 if
the complement of the set is bounded. Then such a set function is finitely
subadditive but not countably subadditive. Is it not strange if such a set
function serves as a measure?
Definition 4.1 (Liu [132]) The set function M is called an uncertain measure if it satisfies the normality, monotonicity, self-duality, and countable
subadditivity axioms.
Example 4.1: Probability, credibility and chance measures are instances of
uncertain measure.
Example 4.2: Let Pr be a probability measure, Cr a credibility measure,
and a a number in [0, 1]. Then

M{} = a Pr{} + (1 a)Cr{}


is an uncertain measure.

(4.2)

179

Section 4.1 - Uncertainty Space

Example 4.3: Let = {1 , 2 , 3 }. For this case, there are only 8 events.
Define
M{1 } = 0.6, M{2 } = 0.3, M{3 } = 0.2,

M{1 , 2 } = 0.8, M{1 , 3 } = 0.7, M{2 , 3 } = 0.4,


M{} = 0, M{} = 1.

M is an uncertain measure because it satisfies the four axioms.


Example 4.4: Let = [1, 1] with Borel algebra L and Lebesgue measure
Then

. Then the set function

M{} =

{},

if {} < 0.5

1 { }, if {c } < 0.5
0.5,

otherwise

is an uncertain measure.
Theorem 4.1 Suppose that

M is an uncertain measure. Then we have


M{} = 0,
(4.3)
0 M{} 1

(4.4)

for any event .


Proof: It follows from the normality and self-duality axioms that M{} =
1 M{} = 1 1 = 0. It follows from the monotonicity axiom that 0
M{} 1 because .
Theorem 4.2 Suppose that M is an uncertain measure. Then for any events
1 and 2 , we have

M{1 } M{2 } M{1 2 } M{1 } + M{2 }

(4.5)

Proof: The left-hand inequality follows from the monotonicity axiom and
the right-hand inequality follows from the countable subadditivity axiom immediately.
Theorem 4.3 Let = {1 , 2 , }. If

M is an uncertain measure, then

M{i } + M{j } 1

X
k=1

for any i and j.

M{k }

(4.6)

180

Chapter 4 - Uncertainty Theory

Proof: Since

M is increasing and self-dual, we have, for any i and j,


M{i } + M{j } M{\{j }} + M{j } = 1.

Since = k {k } and

M is countably subadditive, we have

1 = M{} = M

)
{k }

k=1

M{k }.

k=1

The theorem is proved.


Uncertainty Null-Additivity Theorem
Null-additivity is a direct deduction from subadditivity. We first prove a
more general theorem.
Theorem 4.4 Let {i } be a decreasing sequence of events with
as i . Then for any event , we have
lim

M{i } 0

M{ i } = i
lim M{\i } = M{}.

(4.7)

Proof: It follows from the monotonicity and countable subadditivity axioms


that
M{} M{ i } M{} + M{i }

for each i. Thus we get M{ i }


(\i ) ((\i ) i ), we have

M{} by using M{i } 0.

Since

M{\i } M{} M{\i } + M{i }.


Hence

M{\i } M{} by using M{i } 0.

Remark 4.4: It follows from the above theorem that the uncertain measure
is null-additive, i.e., M{1 2 } = M{1 } + M{2 } if either M{1 } = 0
or M{2 } = 0. In other words, the uncertain measure remains unchanged if
the event is enlarged or reduced by an event with measure zero.
Uncertainty Asymptotic Theorem
Theorem 4.5 (Uncertainty Asymptotic Theorem) For any events 1 , 2 , ,
we have
lim M{i } > 0, if i ,
(4.8)
i

lim

M{i } < 1,

if i .

(4.9)

181

Section 4.2 - Uncertain Variables

Proof: Assume i . Since = i i , it follows from the countable


subadditivity axioms that
1 = M{}

M{i }.

i=1

Since M{i } is increasing with respect to i, we have limi M{i } > 0. If


i , then ci . It follows from the first inequality and self-duality axiom
that
lim M{i } = 1 lim M{ci } < 1.
i

The theorem is proved.


Example 4.5: Assume is the set of real numbers. Let be a number with
0 < 0.5. Define a set function as follows,

0,
if =

,
if
is upper bounded

(4.10)
M{} = 0.5, if both and c are upper unbounded

1 , if is upper bounded

1,
if = .
It is easy to verify that M is an uncertain measure. Write i = (, i] for
i = 1, 2, Then i and limi M{i } = . Furthermore, we have
ci and limi M{ci } = 1 .
Uncertainty Space
Definition 4.2 (Liu [132]) Let be a nonempty set, L a -algebra over
, and M an uncertain measure. Then the triplet (, L, M) is called an
uncertainty space.

4.2

Uncertain Variables

Definition 4.3 (Liu [132]) An uncertain variable is a measurable function


from an uncertainty space (, L, M) to the set of real numbers, i.e., for any
Borel set B of real numbers, the set

{ B} = { () B}
(4.11)
is an event.
Example 4.6: Random variable, fuzzy variable and hybrid variable are
instances of uncertain variable.

182

Chapter 4 - Uncertainty Theory

Definition 4.4 An uncertain variable on the uncertainty space (, L, M)


is said to be
(a) nonnegative if M{ < 0} = 0;
(b) positive if M{ 0} = 0;
(c) continuous if M{ = x} is a continuous function of x;
(d) simple if there exists a finite sequence {x1 , x2 , , xm } such that

M { 6= x1 , 6= x2 , , 6= xm } = 0;

(4.12)

(e) discrete if there exists a countable sequence {x1 , x2 , } such that

M { 6= x1 , 6= x2 , } = 0.
(4.13)
It is clear that 0 M{ = x} 1, and there is at most one point x0 such
that M{ = x0 } > 0.5. For a continuous uncertain variable, we always have
0 M{ = x} 0.5.
Definition 4.5 Let 1 and 2 be uncertain variables defined on the uncertainty space (, L, M). We say 1 = 2 if 1 () = 2 () for almost all .
Uncertain Vector
Definition 4.6 An n-dimensional uncertain vector is a measurable function
from an uncertainty space (, L, M) to the set of n-dimensional real vectors,
i.e., for any Borel set B of <n , the set



{ B} = () B
(4.14)
is an event.
Theorem 4.6 The vector (1 , 2 , , n ) is an uncertain vector if and only
if 1 , 2 , , n are uncertain variables.
Proof: Write = (1 , 2 , , n ). Suppose that is an uncertain vector on
the uncertainty space (, L, M). For any Borel set B of <, the set B <n1
is a Borel set of <n . Thus the set



1 () B



= 1 () B, 2 () <, , n () <



= () B <n1
is an event. Hence 1 is an uncertain variable. A similar process may
prove that 2 , 3 , , n are uncertain variables. Conversely, suppose that
all 1 , 2 , , n are uncertain variables on the uncertainty space (, L, M).
We define



B = B <n { |() B} is an event .

183

Section 4.2 - Uncertain Variables

The vector = (1 , 2 , , n ) is proved to be an uncertain vector if we can


prove that B contains all Borel sets of <n . First, the class B contains all
open intervals of <n because
(
)
n
n
Y
\




()
(ai , bi ) =
i () (ai , bi )
i=1

i=1

is an event. Next, the class B is a -algebra of <n because (i) we have <n B
since {|() <n } = ; (ii) if B B, then



() B
is an event, and


{ () B c } = { () B}c
is an event. This means that B c B; (iii) if Bi B for i = 1, 2, , then
{ |() Bi } are events and
(
)

[
[


()
Bi =
{ () Bi }
i=1

i=1

is an event. This means that i Bi B. Since the smallest -algebra containing all open intervals of <n is just the Borel algebra of <n , the class B
contains all Borel sets of <n . The theorem is proved.
Uncertain Arithmetic
Definition 4.7 Suppose that f : <n < is a measurable function, and
1 , 2 , , n uncertain variables on the uncertainty space (, L, M). Then
= f (1 , 2 , , n ) is an uncertain variable defined as
() = f (1 (), 2 (), , n ()),

(4.15)

Example 4.7: Let 1 and 2 be two uncertain variables. Then the sum
= 1 + 2 is an uncertain variable defined by
() = 1 () + 2 (),

The product = 1 2 is also an uncertain variable defined by


() = 1 () 2 (),

The reader may wonder whether (1 , 2 , , n ) defined by (4.15) is an


uncertain variable. The following theorem answers this question.
Theorem 4.7 Let be an n-dimensional uncertain vector, and f : <n <
a measurable function. Then f () is an uncertain variable.

184

Chapter 4 - Uncertainty Theory

Proof: Assume that is an uncertain vector on the uncertainty space


(, L, M). For any Borel set B of <, since f is a measurable function, the
f 1 (B) is a Borel set of <n . Thus the set





f (()) B = () f 1 (B)
is an event for any Borel set B. Hence f () is an uncertain variable.

4.3

Identification Function

As we know, a random variable may be characterized by a probability density


function, and a fuzzy variable may be described by a membership function.
This section will introduce an identification function to characterize an uncertain variable.
Definition 4.8 An uncertain variable is said to have an identification
function (, ) if
(i) (x) is a nonnegative function and (x) is a nonnegative and integrable
function such that
Z
sup (x) +
x<

(x)dx = 1;

(4.16)

<

(ii) for any Borel set B of real numbers, we have



 Z
M{ B} = 21 sup (x) + sup (x) supc (x) + (x)dx.
xB
xB
x<
B

(4.17)

Remark 4.5: It is not true that all uncertain variables have their own
identification functions.
Remark 4.6: The uncertain variable with identification function (, ) is
essentially a fuzzy variable if
sup (x) = 1.
x<

For this case, is a membership function and 0, a.e.


Remark 4.7: The uncertain variable with identification function (, ) is
essentially a random variable if
Z
(x)dx = 1.
<

For this case, is a probability density function and 0.


Remark 4.8: Let be an uncertain variable with identification function
(, ). If (x) is a continuous function, then we have

M{ = x} = (x)
,
2

x <.

(4.18)

185

Section 4.4 - Uncertainty Distribution

Theorem 4.8 Suppose (x) is a nonnegative function and (x) is a nonnegative and integrable function satisfying (4.16). Then there is an uncertain
variable such that (4.17) holds.
Proof: Let < be the universal set. For each Borel set B of real numbers, we
define a set function

 Z
M{B} = 12 sup (x) + sup (x) supc (x) + (x)dx.
xB
xB
x<
B
It is clear that M is normal, increasing, self-dual, and countably subadditive.
That is, the set function M is indeed an uncertain measure. Now we define
an uncertain variable as an identity function from the uncertainty space
(<, A, M) to <. We may verify that meets (4.17). The theorem is proved.

4.4

Uncertainty Distribution

Definition 4.9 (Liu [132]) The uncertainty distribution : < [0, 1] of an


uncertain variable is defined by



(x) = M () x .
(4.19)
Theorem 4.9 An uncertainty distribution is an increasing function such
that
0 lim (x) < 1, 0 < lim (x) 1.
(4.20)
x

x+

Proof: It is obvious that an uncertainty distribution is an increasing


function, and the inequalities (4.20) follow from the uncertainty asymptotic
theorem immediately.
Definition 4.10 A continuous uncertain variable is said to be (a) singular
if its uncertainty distribution is a singular function; (b) absolutely continuous
if its uncertainty distribution is absolutely continuous.
Definition 4.11 (Liu [132]) The uncertainty density function : < [0, +)
of an uncertain variable is a function such that
Z x
(x) =
(y)dy, x <,
(4.21)

(y)dy = 1

where is the uncertainty distribution of .

(4.22)

186

Chapter 4 - Uncertainty Theory

Example 4.8: The uncertainty density function may not exist even if the
uncertainty distribution is continuous and differentiable a.e. Suppose f is the
Cantor function, and set

if x < 0

0,
f (x), if 0 x 1
(4.23)
(x) =

1,
if x > 1.
Then is an increasing and continuous function, and is an uncertainty distribution. Note that 0 (x) = 0 almost everywhere, and
Z +
0 (x)dx = 0 6= 1.

Thus the uncertainty density function does not exist.


Theorem 4.10 Let be an uncertain variable whose uncertainty density
function exists. Then we have
Z x
Z +
M{ x} =
(y)dy, M{ x} =
(y)dy.
(4.24)

Proof: The first part follows immediately from the definition. In addition,
by the self-duality of uncertain measure, we have
Z +
Z x
Z +
M{ x} = 1 M{ < x} =
(y)dy
(y)dy =
(y)dy.

The theorem is proved.


Joint Uncertainty Distribution
Definition 4.12 Let (1 , 2 , , n ) be an uncertain vector. Then the joint
uncertainty distribution : <n [0, 1] is defined by



(x1 , x2 , , xn ) = M 1 () x1 , 2 () x2 , , n () xn .
Definition 4.13 The joint uncertainty density function : <n [0, +)
of an uncertain vector (1 , 2 , , n ) is a function such that
Z x1 Z x2
Z xn
(x1 , x2 , , xn ) =

(y1 , y2 , , yn )dy1 dy2 dyn

holds for all (x1 , x2 , , xn ) <n , and


Z + Z +
Z +

(y1 , y2 , , yn )dy1 dy2 dyn = 1

where is the joint uncertainty distribution of (1 , 2 , , n ).

187

Section 4.5 - Expected Value

4.5

Expected Value

Expected value is the average value of uncertain variable in the sense of


uncertain measure, and represents the size of uncertain variable.
Definition 4.14 (Liu [132]) Let be an uncertain variable. Then the expected value of is defined by
Z

E[] =
0

M{ r}dr

M{ r}dr

(4.25)

provided that at least one of the two integrals is finite.


Theorem 4.11 Let be an uncertain variable whose uncertainty density
function exists. If the Lebesgue integral
Z

x(x)dx

is finite, then we have


+

Z
E[] =

x(x)dx.

(4.26)

Proof: It follows from the definition of expected value operator and Fubini
Theorem that
Z +
Z 0
E[] =
M{ r}dr
M{ r}dr

Z

Z

=
0


Z
(x)dx dr

Z

r
x

0
+


(x)dr dx

x(x)dx +

x(x)dx


(x)dx dr

Z

(x)dr dx

x(x)dx.

The theorem is proved.


Theorem 4.12 Let be an uncertain variable with uncertainty distribution
. If
lim (x) = 0,
lim (x) = 1
x

188

Chapter 4 - Uncertainty Theory

and the Lebesgue-Stieltjes integral


Z +
xd(x)

is finite, then we have


Z

xd(x).

E[] =

(4.27)

Proof: Since the Lebesgue-Stieltjes integral ∫_{−∞}^{+∞} x dΦ(x) is finite, we immediately have

lim_{y→+∞} ∫_{0}^{y} x dΦ(x) = ∫_{0}^{+∞} x dΦ(x),    lim_{y→−∞} ∫_{y}^{0} x dΦ(x) = ∫_{−∞}^{0} x dΦ(x)

and

lim_{y→+∞} ∫_{y}^{+∞} x dΦ(x) = 0,    lim_{y→−∞} ∫_{−∞}^{y} x dΦ(x) = 0.

It follows from

∫_{y}^{+∞} x dΦ(x) ≥ y (lim_{z→+∞} Φ(z) − Φ(y)) = y (1 − Φ(y)) ≥ 0,    for y > 0,

∫_{−∞}^{y} x dΦ(x) ≤ y (Φ(y) − lim_{z→−∞} Φ(z)) = yΦ(y) ≤ 0,    for y < 0

that

lim_{y→+∞} y (1 − Φ(y)) = 0,    lim_{y→−∞} yΦ(y) = 0.

Let 0 = x₀ < x₁ < x₂ < … < xₙ = y be a partition of [0, y]. Then we have

Σ_{i=0}^{n−1} xᵢ (Φ(xᵢ₊₁) − Φ(xᵢ)) → ∫_{0}^{y} x dΦ(x)

and

Σ_{i=0}^{n−1} (1 − Φ(xᵢ₊₁))(xᵢ₊₁ − xᵢ) → ∫_{0}^{y} M{ξ ≥ r}dr

as max{|xᵢ₊₁ − xᵢ| : i = 0, 1, …, n−1} → 0. Since

Σ_{i=0}^{n−1} xᵢ (Φ(xᵢ₊₁) − Φ(xᵢ)) − Σ_{i=0}^{n−1} (1 − Φ(xᵢ₊₁))(xᵢ₊₁ − xᵢ) = y (Φ(y) − 1) → 0

as y → +∞, this fact implies that

∫_{0}^{+∞} M{ξ ≥ r}dr = ∫_{0}^{+∞} x dΦ(x).

A similar way may prove that

−∫_{−∞}^{0} M{ξ ≤ r}dr = ∫_{−∞}^{0} x dΦ(x).

It follows that the equation (4.27) holds.
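As a hedged numeric illustration (not part of the book), the following Python sketch checks that formulas (4.25), (4.26) and (4.27) agree for the simple choice Φ(x) = x on [0, 1] with density φ ≡ 1 there, so that M{ξ ≥ r} = 1 − Φ(r) by Theorem 4.10; this distribution and all names are mine, and all three computations should return approximately 0.5.

import numpy as np

x = np.linspace(0.0, 1.0, 100_001)       # grid on the support [0, 1]
Phi = x                                  # Phi(x) = x on [0, 1]

# (4.25): E = integral of M{xi >= r} over r >= 0 (the second integral vanishes)
e_measure = np.trapz(1.0 - Phi, x)

# (4.26): E = integral of x * phi(x) with phi = 1 on [0, 1]
e_density = np.trapz(x * 1.0, x)

# (4.27): E = integral of x dPhi(x) as a Stieltjes sum over the partition
e_stieltjes = np.sum(x[:-1] * np.diff(Phi))

print(e_measure, e_density, e_stieltjes)  # all approximately 0.5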


Theorem 4.13 Let ξ be an uncertain variable with finite expected value. Then for any real numbers a and b, we have

E[aξ + b] = aE[ξ] + b.    (4.28)

Proof: Step 1: We first prove that E[ξ + b] = E[ξ] + b for any real number b. If b ≥ 0, we have

E[ξ + b] = ∫_{0}^{+∞} M{ξ + b ≥ r}dr − ∫_{−∞}^{0} M{ξ + b ≤ r}dr
         = ∫_{0}^{+∞} M{ξ ≥ r − b}dr − ∫_{−∞}^{0} M{ξ ≤ r − b}dr
         = E[ξ] + ∫_{0}^{b} (M{ξ ≥ r − b} + M{ξ < r − b})dr
         = E[ξ] + b.

If b < 0, then we have

E[ξ + b] = E[ξ] − ∫_{b}^{0} (M{ξ ≥ r − b} + M{ξ < r − b})dr = E[ξ] + b.

Step 2: We prove E[aξ] = aE[ξ]. If a = 0, then the equation E[aξ] = aE[ξ] holds trivially. If a > 0, we have

E[aξ] = ∫_{0}^{+∞} M{aξ ≥ r}dr − ∫_{−∞}^{0} M{aξ ≤ r}dr
      = ∫_{0}^{+∞} M{ξ ≥ r/a}dr − ∫_{−∞}^{0} M{ξ ≤ r/a}dr
      = a ∫_{0}^{+∞} M{ξ ≥ t}dt − a ∫_{−∞}^{0} M{ξ ≤ t}dt
      = aE[ξ].

If a < 0, we have

E[aξ] = ∫_{0}^{+∞} M{aξ ≥ r}dr − ∫_{−∞}^{0} M{aξ ≤ r}dr
      = ∫_{0}^{+∞} M{ξ ≤ r/a}dr − ∫_{−∞}^{0} M{ξ ≥ r/a}dr
      = a ∫_{0}^{+∞} M{ξ ≥ t}dt − a ∫_{−∞}^{0} M{ξ ≤ t}dt
      = aE[ξ].

Finally, for any real numbers a and b, it follows from Steps 1 and 2 that E[aξ + b] = E[aξ] + b = aE[ξ] + b. The theorem is proved.
Theorem 4.14 Let f be a convex function on [a, b], and ξ an uncertain variable that takes values in [a, b] and has expected value e. Then

E[f(ξ)] ≤ ((b − e)/(b − a)) f(a) + ((e − a)/(b − a)) f(b).    (4.29)

Proof: For each γ, we have a ≤ ξ(γ) ≤ b and

ξ(γ) = ((b − ξ(γ))/(b − a)) a + ((ξ(γ) − a)/(b − a)) b.

It follows from the convexity of f that

f(ξ(γ)) ≤ ((b − ξ(γ))/(b − a)) f(a) + ((ξ(γ) − a)/(b − a)) f(b).

Taking expected values on both sides, we obtain the inequality.

4.6 Variance

The variance of an uncertain variable provides a measure of the spread of the distribution around its expected value. A small value of variance indicates that the uncertain variable is tightly concentrated around its expected value; a large value of variance indicates that the uncertain variable has a wide spread around its expected value.

Definition 4.15 (Liu [132]) Let ξ be an uncertain variable with finite expected value e. Then the variance of ξ is defined by V[ξ] = E[(ξ − e)²].
Theorem 4.15 If ξ is an uncertain variable with finite expected value, and a and b are real numbers, then V[aξ + b] = a²V[ξ].

Proof: It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − aE[ξ] − b)²] = a²E[(ξ − E[ξ])²] = a²V[ξ].
Theorem 4.16 Let ξ be an uncertain variable with expected value e. Then V[ξ] = 0 if and only if M{ξ = e} = 1.

Proof: If V[ξ] = 0, then E[(ξ − e)²] = 0. Note that

E[(ξ − e)²] = ∫_{0}^{+∞} M{(ξ − e)² ≥ r}dr

which implies M{(ξ − e)² ≥ r} = 0 for any r > 0. Hence we have

M{(ξ − e)² = 0} = 1.

That is, M{ξ = e} = 1.

Conversely, if M{ξ = e} = 1, then we have M{(ξ − e)² = 0} = 1 and M{(ξ − e)² ≥ r} = 0 for any r > 0. Thus

V[ξ] = ∫_{0}^{+∞} M{(ξ − e)² ≥ r}dr = 0.

The theorem is proved.


Theorem 4.17 Let ξ be an uncertain variable that takes values in [a, b] and has expected value e. Then

V[ξ] ≤ (e − a)(b − e).    (4.30)

Proof: It follows from Theorem 4.14 immediately by defining f(x) = (x − e)².

4.7 Moments

Definition 4.16 (Liu [132]) Let ξ be an uncertain variable. Then for any positive integer k,
(a) the expected value E[ξ^k] is called the kth moment;
(b) the expected value E[|ξ|^k] is called the kth absolute moment;
(c) the expected value E[(ξ − E[ξ])^k] is called the kth central moment;
(d) the expected value E[|ξ − E[ξ]|^k] is called the kth absolute central moment.

Note that the first central moment is always 0, the first moment is just the expected value, and the second central moment is just the variance.
Theorem 4.18 Let ξ be a nonnegative uncertain variable, and k a positive number. Then the kth moment

E[ξ^k] = k ∫_{0}^{+∞} r^{k−1} M{ξ ≥ r}dr.    (4.31)

Proof: It follows from the nonnegativity of ξ that

E[ξ^k] = ∫_{0}^{+∞} M{ξ^k ≥ x}dx = ∫_{0}^{+∞} M{ξ ≥ r}d(r^k) = k ∫_{0}^{+∞} r^{k−1} M{ξ ≥ r}dr.

The theorem is proved.
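As a hedged numeric check (not part of the book) of formula (4.31), take k = 2 and the illustrative choice M{ξ ≥ r} = 1 − r on [0, 1] (and 0 beyond), so that M{ξ² ≥ x} = M{ξ ≥ √x} = 1 − √x; both sides should come out to approximately 1/3.

import numpy as np

r = np.linspace(0.0, 1.0, 100_001)
survival = 1.0 - r                       # illustrative M{xi >= r} on [0, 1]

k = 2
lhs = np.trapz(1.0 - np.sqrt(r), r)      # direct: integral of M{xi^2 >= x} dx
rhs = k * np.trapz(r**(k - 1) * survival, r)   # formula (4.31)

print(lhs, rhs)                          # both approximately 1/3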


Theorem 4.19 Let ξ be an uncertain variable, and t a positive number. If E[|ξ|^t] < ∞, then

lim_{x→∞} x^t M{|ξ| ≥ x} = 0.    (4.32)

Conversely, if (4.32) holds for some positive number t, then E[|ξ|^s] < ∞ for any 0 ≤ s < t.


Proof: It follows from the definition of the expected value operator that

E[|ξ|^t] = ∫_{0}^{+∞} M{|ξ|^t ≥ r}dr < ∞.

Thus we have

lim_{x→+∞} ∫_{x^t/2}^{+∞} M{|ξ|^t ≥ r}dr = 0.

The equation (4.32) is proved by the following relation:

∫_{x^t/2}^{+∞} M{|ξ|^t ≥ r}dr ≥ ∫_{x^t/2}^{x^t} M{|ξ|^t ≥ r}dr ≥ (1/2) x^t M{|ξ| ≥ x}.

Conversely, if (4.32) holds, then there exists a number a such that

x^t M{|ξ| ≥ x} ≤ 1,    ∀x ≥ a.

Thus we have

E[|ξ|^s] = ∫_{0}^{a^s} M{|ξ|^s ≥ r}dr + ∫_{a^s}^{+∞} M{|ξ|^s ≥ r}dr
         ≤ a^s + ∫_{a}^{+∞} s r^{s−1} M{|ξ| ≥ r}dr    (substituting r^s for r)
         ≤ a^s + s ∫_{a}^{+∞} r^{s−t−1}dr < +∞

by ∫_{a}^{+∞} r^p dr < ∞ for any p < −1. The theorem is proved.


Theorem 4.20 Let ξ be an uncertain variable that takes values in [a, b] and has expected value e. Then for any positive integer k, the kth absolute moment and kth absolute central moment satisfy the following inequalities:

E[|ξ|^k] ≤ ((b − e)/(b − a)) |a|^k + ((e − a)/(b − a)) |b|^k,    (4.33)

E[|ξ − e|^k] ≤ ((b − e)/(b − a)) (e − a)^k + ((e − a)/(b − a)) (b − e)^k.    (4.34)

Proof: It follows from Theorem 4.14 immediately by defining f(x) = |x|^k and f(x) = |x − e|^k.


4.8 Independence

Definition 4.17 (Liu [132]) The uncertain variables ξ₁, ξ₂, …, ξₙ are said to be independent if

E[Σ_{i=1}^{n} fᵢ(ξᵢ)] = Σ_{i=1}^{n} E[fᵢ(ξᵢ)]    (4.35)

for any measurable functions f₁, f₂, …, fₙ provided that the expected values exist and are finite.

What are the sufficient conditions for independence of uncertain variables? This is an open problem.
Theorem 4.21 If ξ and η are independent uncertain variables with finite expected values, then we have

E[aξ + bη] = aE[ξ] + bE[η]    (4.36)

for any real numbers a and b.

Proof: The theorem follows from the definition by defining f₁(x) = ax and f₂(x) = bx.
Theorem 4.22 Suppose that ξ₁, ξ₂, …, ξₙ are independent uncertain variables, and f₁, f₂, …, fₙ are measurable functions. Then the uncertain variables f₁(ξ₁), f₂(ξ₂), …, fₙ(ξₙ) are independent.

Proof: The theorem follows from the definition because the composition of measurable functions is also measurable.

4.9 Identical Distribution

This section introduces the concept of identical distribution of uncertain variables.

Definition 4.18 The uncertain variables ξ and η are identically distributed if

M{ξ ∈ B} = M{η ∈ B}    (4.37)

for any Borel set B of real numbers.

Theorem 4.23 Let ξ and η be identically distributed uncertain variables, and f: ℜ → ℜ a measurable function. Then f(ξ) and f(η) are identically distributed uncertain variables.


Proof: For any Borel set B of real numbers, we have

M{f(ξ) ∈ B} = M{ξ ∈ f⁻¹(B)} = M{η ∈ f⁻¹(B)} = M{f(η) ∈ B}.

Hence f(ξ) and f(η) are identically distributed uncertain variables.

Theorem 4.24 If ξ and η are identically distributed uncertain variables, then they have the same uncertainty distribution.

Proof: Since ξ and η are identically distributed uncertain variables, we have M{ξ ∈ (−∞, x]} = M{η ∈ (−∞, x]} for any x. Thus ξ and η have the same uncertainty distribution.

Theorem 4.25 If ξ and η are identically distributed uncertain variables whose uncertainty density functions exist, then they have the same uncertainty density function.

Proof: It follows from Theorem 4.24 immediately.

4.10 Critical Values

Definition 4.19 (Liu [132]) Let ξ be an uncertain variable, and α ∈ (0, 1]. Then

ξ_sup(α) = sup{r | M{ξ ≥ r} ≥ α}    (4.38)

is called the α-optimistic value to ξ, and

ξ_inf(α) = inf{r | M{ξ ≤ r} ≥ α}    (4.39)

is called the α-pessimistic value to ξ.
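As a hedged numeric sketch (not part of the book), the critical values (4.38) and (4.39) can be located by a grid search when ξ has a continuous distribution Φ with M{ξ ≤ r} = Φ(r) and M{ξ ≥ r} = 1 − Φ(r); the function name and the illustrative choice Φ(x) = x on [0, 1] are mine. The two printed pairs also illustrate Theorem 4.27 below.

import numpy as np

def critical_values(Phi, alpha, lo=-10.0, hi=10.0, n=200_001):
    """Grid search for the alpha-optimistic and alpha-pessimistic values."""
    r = np.linspace(lo, hi, n)
    F = Phi(r)
    xi_sup = r[1.0 - F >= alpha].max()   # sup{r | M{xi >= r} >= alpha}
    xi_inf = r[F >= alpha].min()         # inf{r | M{xi <= r} >= alpha}
    return xi_sup, xi_inf

Phi = lambda x: np.clip(x, 0.0, 1.0)     # illustrative distribution on [0, 1]
print(critical_values(Phi, 0.3))         # about (0.7, 0.3): inf <= sup when alpha <= 0.5
print(critical_values(Phi, 0.8))         # about (0.2, 0.8): inf >= sup when alpha > 0.5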


Theorem 4.26 Let ξ be an uncertain variable, and α a number between 0 and 1. We have
(a) if c ≥ 0, then (cξ)_sup(α) = c ξ_sup(α) and (cξ)_inf(α) = c ξ_inf(α);
(b) if c < 0, then (cξ)_sup(α) = c ξ_inf(α) and (cξ)_inf(α) = c ξ_sup(α).

Proof: (a) If c = 0, then part (a) is obvious. In the case of c > 0, we have

(cξ)_sup(α) = sup{r | M{cξ ≥ r} ≥ α} = c sup{r/c | M{ξ ≥ r/c} ≥ α} = c ξ_sup(α).

A similar way may prove (cξ)_inf(α) = c ξ_inf(α). In order to prove part (b), it suffices to prove that (−ξ)_sup(α) = −ξ_inf(α) and (−ξ)_inf(α) = −ξ_sup(α). In fact, we have

(−ξ)_sup(α) = sup{r | M{−ξ ≥ r} ≥ α} = −inf{r | M{ξ ≤ r} ≥ α} = −ξ_inf(α).

Similarly, we may prove that (−ξ)_inf(α) = −ξ_sup(α). The theorem is proved.


Theorem 4.27 Let ξ be an uncertain variable. Then we have
(a) if α > 0.5, then ξ_inf(α) ≥ ξ_sup(α);
(b) if α ≤ 0.5, then ξ_inf(α) ≤ ξ_sup(α).

Proof: Part (a): Write ξ̄(α) = (ξ_inf(α) + ξ_sup(α))/2. If ξ_inf(α) < ξ_sup(α), then we have

1 ≥ M{ξ < ξ̄(α)} + M{ξ > ξ̄(α)} ≥ α + α > 1.

A contradiction proves ξ_inf(α) ≥ ξ_sup(α).

Part (b): Assume that ξ_inf(α) > ξ_sup(α). It follows from the definition of ξ_inf(α) that M{ξ ≤ ξ̄(α)} < α. Similarly, it follows from the definition of ξ_sup(α) that M{ξ ≥ ξ̄(α)} < α. Thus

1 ≤ M{ξ ≤ ξ̄(α)} + M{ξ ≥ ξ̄(α)} < α + α ≤ 1.

A contradiction proves ξ_inf(α) ≤ ξ_sup(α). The theorem is verified.


Theorem 4.28 Let ξ be an uncertain variable. Then ξ_sup(α) is a decreasing function of α, and ξ_inf(α) is an increasing function of α.

Proof: It follows from the definition immediately.

4.11 Entropy

This section provides a definition of entropy to characterize the uncertainty of uncertain variables resulting from information deficiency.

Definition 4.20 (Liu [132]) Suppose that ξ is a discrete uncertain variable taking values in {x₁, x₂, …}. Then its entropy is defined by

H[ξ] = Σ_{i=1}^{∞} S(M{ξ = xᵢ})    (4.40)

where S(t) = −t ln t − (1 − t) ln(1 − t).


Example 4.9: Suppose that ξ is a discrete uncertain variable taking values in {x₁, x₂, …}. If there exists some index k such that M{ξ = x_k} = 1, and 0 otherwise, then its entropy H[ξ] = 0.

Example 4.10: Suppose that ξ is a simple uncertain variable taking values in {x₁, x₂, …, xₙ}. If M{ξ = xᵢ} = 0.5 for all i = 1, 2, …, n, then its entropy H[ξ] = n ln 2.
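As a hedged illustration (not part of the book), the entropy (4.40) is easy to evaluate in Python; the helper names S and entropy below are mine.

import math

def S(t: float) -> float:
    """S(t) = -t ln t - (1 - t) ln(1 - t), with S(0) = S(1) = 0."""
    if t in (0.0, 1.0):
        return 0.0
    return -t * math.log(t) - (1.0 - t) * math.log(1.0 - t)

def entropy(measures) -> float:
    """H = sum of S(M{xi = x_i}) over the values of the variable."""
    return sum(S(m) for m in measures)

print(entropy([1.0, 0.0, 0.0]))   # 0.0: a crisp number, as in Example 4.9
print(entropy([0.5, 0.5, 0.5]))   # 3 ln 2 = 2.079...: equipossible, as in Example 4.10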
Theorem 4.29 Suppose that ξ is a discrete uncertain variable taking values in {x₁, x₂, …}. Then

H[ξ] ≥ 0    (4.41)

and equality holds if and only if ξ is essentially a deterministic/crisp number.


Proof: The nonnegativity is clear. In addition, H[ξ] = 0 if and only if M{ξ = xᵢ} = 0 or 1 for each i. That is, there exists one and only one index k such that M{ξ = x_k} = 1, i.e., ξ is essentially a deterministic/crisp number.

This theorem states that the entropy of an uncertain variable reaches its minimum 0 when the uncertain variable degenerates to a deterministic/crisp number. In this case, there is no uncertainty.
Theorem 4.30 Suppose that ξ is a simple uncertain variable taking values in {x₁, x₂, …, xₙ}. Then

H[ξ] ≤ n ln 2    (4.42)

and equality holds if and only if M{ξ = xᵢ} = 0.5 for all i = 1, 2, …, n.

Proof: Since the function S(t) reaches its maximum ln 2 at t = 0.5, we have

H[ξ] = Σ_{i=1}^{n} S(M{ξ = xᵢ}) ≤ n ln 2

and equality holds if and only if M{ξ = xᵢ} = 0.5 for all i = 1, 2, …, n.

This theorem states that the entropy of an uncertain variable reaches its maximum when the uncertain variable is an equipossible one. In this case, there is no preference among all the values that the uncertain variable will take.

4.12 Distance

Definition 4.21 (Liu [132]) The distance between uncertain variables ξ and η is defined as

d(ξ, η) = E[|ξ − η|].    (4.43)

Theorem 4.31 Let ξ, η, τ be uncertain variables, and let d(·, ·) be the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ) + 2d(η, τ).
Proof: The parts (a), (b) and (c) follow immediately from the definition. Now we prove part (d). It follows from the countable subadditivity axiom that

d(ξ, η) = ∫_{0}^{+∞} M{|ξ − η| ≥ r}dr
        ≤ ∫_{0}^{+∞} M{|ξ − τ| + |τ − η| ≥ r}dr
        ≤ ∫_{0}^{+∞} M{{|ξ − τ| ≥ r/2} ∪ {|τ − η| ≥ r/2}}dr
        ≤ ∫_{0}^{+∞} (M{|ξ − τ| ≥ r/2} + M{|τ − η| ≥ r/2})dr
        = ∫_{0}^{+∞} M{|ξ − τ| ≥ r/2}dr + ∫_{0}^{+∞} M{|τ − η| ≥ r/2}dr
        = 2E[|ξ − τ|] + 2E[|τ − η|] = 2d(ξ, τ) + 2d(η, τ).

Example 4.11: Let Γ = {γ₁, γ₂, γ₃}. Define M{∅} = 0, M{Γ} = 1 and M{Λ} = 1/2 for any subset Λ (excluding ∅ and Γ). We set uncertain variables ξ, η and τ as follows:

ξ(γ) = { 1, if γ ≠ γ₃; 0, otherwise },    η(γ) = { −1, if γ ≠ γ₁; 0, otherwise },    τ(γ) ≡ 0.

It is easy to verify that d(ξ, τ) = d(τ, η) = 1/2 and d(ξ, η) = 3/2. Thus

d(ξ, η) = (3/2)(d(ξ, τ) + d(τ, η)).
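As a hedged check (not part of the book) of Example 4.11, the distance E[|x − y|] on this finite space is the integral of the step function r → M{|x − y| ≥ r}, which can be summed exactly over its jumps; all helper names below are mine.

THETA = (1, 2, 3)

def M(event: frozenset) -> float:
    """0 on the empty set, 1 on the whole space, 1/2 on every other subset."""
    if not event:
        return 0.0
    return 1.0 if event == frozenset(THETA) else 0.5

def d(x: dict, y: dict) -> float:
    """E[|x - y|]: exact integral of the step function r -> M{|x - y| >= r}."""
    diffs = sorted({abs(x[t] - y[t]) for t in THETA} | {0.0})
    total = 0.0
    for lo, hi in zip(diffs, diffs[1:]):
        event = frozenset(t for t in THETA if abs(x[t] - y[t]) >= hi)
        total += (hi - lo) * M(event)
    return total

xi  = {1: 1, 2: 1, 3: 0}
eta = {1: 0, 2: -1, 3: -1}
tau = {1: 0, 2: 0, 3: 0}
print(d(xi, tau), d(tau, eta), d(xi, eta))   # 0.5 0.5 1.5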
4.13 Inequalities

Theorem 4.32 (Liu [132]) Let ξ be an uncertain variable, and f a nonnegative function. If f is even and increasing on [0, ∞), then for any given number t > 0, we have

M{|ξ| ≥ t} ≤ E[f(ξ)] / f(t).    (4.44)


Proof: It is clear that M{|ξ| ≥ f⁻¹(r)} is a monotone decreasing function of r on [0, ∞). It follows from the nonnegativity of f(ξ) that

E[f(ξ)] = ∫_{0}^{+∞} M{f(ξ) ≥ r}dr
        = ∫_{0}^{+∞} M{|ξ| ≥ f⁻¹(r)}dr
        ≥ ∫_{0}^{f(t)} M{|ξ| ≥ f⁻¹(r)}dr
        ≥ ∫_{0}^{f(t)} dr · M{|ξ| ≥ f⁻¹(f(t))}
        = f(t) · M{|ξ| ≥ t}

which proves the inequality.
Theorem 4.33 (Liu [132], Markov Inequality) Let ξ be an uncertain variable. Then for any given numbers t > 0 and p > 0, we have

M{|ξ| ≥ t} ≤ E[|ξ|^p] / t^p.    (4.45)

Proof: It is a special case of Theorem 4.32 when f(x) = |x|^p.


Theorem 4.34 (Liu [132], Chebyshev Inequality) Let ξ be an uncertain variable whose variance V[ξ] exists. Then for any given number t > 0, we have

M{|ξ − E[ξ]| ≥ t} ≤ V[ξ] / t².    (4.46)

Proof: It is a special case of Theorem 4.32 when the uncertain variable ξ is replaced with ξ − E[ξ], and f(x) = x².
Theorem 4.35 (Liu [132], Hölder's Inequality) Let p and q be positive real numbers with 1/p + 1/q = 1, and let ξ and η be independent uncertain variables with E[|ξ|^p] < ∞ and E[|η|^q] < ∞. Then we have

E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}.    (4.47)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function f(x, y) = x^{1/p} y^{1/q} is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x₀, y₀) with x₀ > 0 and y₀ > 0, there exist two real numbers a and b such that

f(x, y) − f(x₀, y₀) ≤ a(x − x₀) + b(y − y₀),    ∀(x, y) ∈ D.

Letting x₀ = E[|ξ|^p], y₀ = E[|η|^q], x = |ξ|^p and y = |η|^q, we have

f(|ξ|^p, |η|^q) − f(E[|ξ|^p], E[|η|^q]) ≤ a(|ξ|^p − E[|ξ|^p]) + b(|η|^q − E[|η|^q]).

Taking the expected values on both sides, we obtain

E[f(|ξ|^p, |η|^q)] ≤ f(E[|ξ|^p], E[|η|^q]).

Hence the inequality (4.47) holds.
Theorem 4.36 (Liu [132], Minkowski Inequality) Let p be a real number with p ≥ 1, and let ξ and η be independent uncertain variables with E[|ξ|^p] < ∞ and E[|η|^p] < ∞. Then we have

(E[|ξ + η|^p])^{1/p} ≤ (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p}.    (4.48)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^p] > 0. It is easy to prove that the function f(x, y) = (x^{1/p} + y^{1/p})^p is a concave function on D = {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x₀, y₀) with x₀ > 0 and y₀ > 0, there exist two real numbers a and b such that

f(x, y) − f(x₀, y₀) ≤ a(x − x₀) + b(y − y₀),    ∀(x, y) ∈ D.

Letting x₀ = E[|ξ|^p], y₀ = E[|η|^p], x = |ξ|^p and y = |η|^p, we have

f(|ξ|^p, |η|^p) − f(E[|ξ|^p], E[|η|^p]) ≤ a(|ξ|^p − E[|ξ|^p]) + b(|η|^p − E[|η|^p]).

Taking the expected values on both sides, we obtain

E[f(|ξ|^p, |η|^p)] ≤ f(E[|ξ|^p], E[|η|^p]).

Hence the inequality (4.48) holds.
Theorem 4.37 (Liu [132], Jensen's Inequality) Let ξ be an uncertain variable, and f: ℜ → ℜ a convex function. If E[ξ] and E[f(ξ)] are finite, then

f(E[ξ]) ≤ E[f(ξ)].    (4.49)

Especially, when f(x) = |x|^p and p ≥ 1, we have |E[ξ]|^p ≤ E[|ξ|^p].

Proof: Since f is a convex function, for each y, there exists a number k such that f(x) − f(y) ≥ k·(x − y). Replacing x with ξ and y with E[ξ], we obtain

f(ξ) − f(E[ξ]) ≥ k·(ξ − E[ξ]).

Taking the expected values on both sides, we have

E[f(ξ)] − f(E[ξ]) ≥ k·(E[ξ] − E[ξ]) = 0

which proves the inequality.

4.14 Convergence Concepts

We have the following four convergence concepts of uncertain sequence: convergence almost surely (a.s.), convergence in measure, convergence in mean, and convergence in distribution.

Table 4.1: Relationship among Convergence Concepts

Convergence in Mean ⇒ Convergence in Measure ⇒ Convergence in Distribution

Definition 4.22 (Liu [132]) Suppose that ξ, ξ₁, ξ₂, … are uncertain variables defined on the uncertainty space (Γ, L, M). The sequence {ξᵢ} is said to be convergent a.s. to ξ if there exists an event Λ with M{Λ} = 1 such that

lim_{i→∞} |ξᵢ(γ) − ξ(γ)| = 0    (4.50)

for every γ ∈ Λ. In that case we write ξᵢ → ξ, a.s.


Definition 4.23 (Liu [132]) Suppose that ξ, ξ₁, ξ₂, … are uncertain variables. We say that the sequence {ξᵢ} converges in measure to ξ if

lim_{i→∞} M{|ξᵢ − ξ| ≥ ε} = 0    (4.51)

for every ε > 0.


Definition 4.24 (Liu [132]) Suppose that ξ, ξ₁, ξ₂, … are uncertain variables with finite expected values. We say that the sequence {ξᵢ} converges in mean to ξ if

lim_{i→∞} E[|ξᵢ − ξ|] = 0.    (4.52)

In addition, the sequence {ξᵢ} is said to converge in mean square to ξ if

lim_{i→∞} E[|ξᵢ − ξ|²] = 0.    (4.53)

Definition 4.25 (Liu [132]) Suppose that Φ, Φ₁, Φ₂, … are the uncertainty distributions of uncertain variables ξ, ξ₁, ξ₂, …, respectively. We say that {ξᵢ} converges in distribution to ξ if Φᵢ → Φ at any continuity point of Φ.
Theorem 4.26 (Liu [132]) Suppose that ξ, ξ₁, ξ₂, … are uncertain variables. If {ξᵢ} converges in mean to ξ, then {ξᵢ} converges in measure to ξ.

Proof: It follows from the Markov inequality that for any given number ε > 0, we have

M{|ξᵢ − ξ| ≥ ε} ≤ E[|ξᵢ − ξ|] / ε → 0

as i → ∞. Thus {ξᵢ} converges in measure to ξ. The theorem is proved.


Theorem 4.27 (Liu [132]) Suppose ξ, ξ₁, ξ₂, … are uncertain variables. If {ξᵢ} converges in measure to ξ, then {ξᵢ} converges in distribution to ξ.

Proof: Let x be a given continuity point of the uncertainty distribution Φ. On the one hand, for any y > x, we have

{ξᵢ ≤ x} = {ξᵢ ≤ x, ξ ≤ y} ∪ {ξᵢ ≤ x, ξ > y} ⊂ {ξ ≤ y} ∪ {|ξᵢ − ξ| ≥ y − x}.

It follows from the countable subadditivity axiom that

Φᵢ(x) ≤ Φ(y) + M{|ξᵢ − ξ| ≥ y − x}.

Since {ξᵢ} converges in measure to ξ, we have M{|ξᵢ − ξ| ≥ y − x} → 0 as i → ∞. Thus we obtain lim sup_{i→∞} Φᵢ(x) ≤ Φ(y) for any y > x. Letting y → x, we get

lim sup_{i→∞} Φᵢ(x) ≤ Φ(x).    (4.54)

On the other hand, for any z < x, we have

{ξ ≤ z} = {ξᵢ ≤ x, ξ ≤ z} ∪ {ξᵢ > x, ξ ≤ z} ⊂ {ξᵢ ≤ x} ∪ {|ξᵢ − ξ| ≥ x − z}

which implies that

Φ(z) ≤ Φᵢ(x) + M{|ξᵢ − ξ| ≥ x − z}.

Since M{|ξᵢ − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf_{i→∞} Φᵢ(x) for any z < x. Letting z → x, we get

Φ(x) ≤ lim inf_{i→∞} Φᵢ(x).    (4.55)

It follows from (4.54) and (4.55) that Φᵢ(x) → Φ(x). The theorem is proved.

4.15 Conditional Uncertainty

We consider the uncertain measure of an event A after it has been learned that some other event B has occurred. This new uncertain measure of A is called the conditional uncertain measure of A given B.

In order to define a conditional uncertain measure M{A|B}, at first we have to enlarge M{A ∩ B} because M{A ∩ B} < 1 for all events whenever M{B} < 1. It seems that we have no alternative but to divide M{A ∩ B} by M{B}. Unfortunately, M{A ∩ B}/M{B} is not always an uncertain measure. However, the value M{A|B} should not be greater than M{A ∩ B}/M{B} (otherwise the normality will be lost), i.e.,

M{A|B} ≤ M{A ∩ B} / M{B}.    (4.56)


On the other hand, in order to preserve the self-duality, we should have

M{A|B} = 1 − M{A^c|B} ≥ 1 − M{A^c ∩ B} / M{B}.    (4.57)

Furthermore, since (A ∩ B) ∪ (A^c ∩ B) = B, we have M{B} ≤ M{A ∩ B} + M{A^c ∩ B} by using the countable subadditivity axiom. Thus

0 ≤ 1 − M{A^c ∩ B} / M{B} ≤ M{A ∩ B} / M{B} ≤ 1.    (4.58)

Hence any number between 1 − M{A^c ∩ B}/M{B} and M{A ∩ B}/M{B} is a reasonable value that the conditional uncertain measure may take. Based on the maximum uncertainty principle, we have the following conditional uncertain measure.

Definition 4.28 (Liu [132]) Let (Γ, L, M) be an uncertainty space, and A, B ∈ L. Then the conditional uncertain measure of A given B is defined by

M{A|B} =
    M{A ∩ B}/M{B},          if M{A ∩ B}/M{B} < 0.5;
    1 − M{A^c ∩ B}/M{B},    if M{A^c ∩ B}/M{B} < 0.5;
    0.5,                    otherwise,    (4.59)

provided that M{B} > 0.

It follows immediately from the definition of conditional uncertain measure that

1 − M{A^c ∩ B}/M{B} ≤ M{A|B} ≤ M{A ∩ B}/M{B}.    (4.60)

Furthermore, the conditional uncertain measure obeys the maximum uncertainty principle, and takes values as close to 0.5 as possible.
Remark 4.9: Conditional uncertain measure coincides with conditional
probability, conditional credibility, and conditional chance.
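As a hedged sketch (not part of the book), Definition 4.28 translates directly into code; the function and argument names below are mine, and the caller supplies the three measure values.

def conditional(m_a_and_b: float, m_ac_and_b: float, m_b: float) -> float:
    """Conditional uncertain measure M{A|B} per formula (4.59)."""
    if m_b <= 0.0:
        raise ValueError("requires M{B} > 0")
    if m_a_and_b / m_b < 0.5:
        return m_a_and_b / m_b
    if m_ac_and_b / m_b < 0.5:
        return 1.0 - m_ac_and_b / m_b
    return 0.5

# With M{A ∩ B} = 0.2, M{A^c ∩ B} = 0.5, M{B} = 0.6 the first branch applies:
print(conditional(0.2, 0.5, 0.6))   # 1/3; note M{A^c|B} = 2/3, so duality holds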
Theorem 4.38 (Liu [132]) Let (Γ, L, M) be an uncertainty space, and B an event with M{B} > 0. Then M{·|B} defined by (4.59) is an uncertain measure, and (Γ, L, M{·|B}) is an uncertainty space.
Proof: It is sufficient to prove that M{·|B} satisfies the normality, monotonicity, self-duality and countable subadditivity axioms. At first, it satisfies the normality axiom, i.e.,

M{Γ|B} = 1 − M{Γ^c ∩ B}/M{B} = 1 − M{∅}/M{B} = 1.


For any events A₁ and A₂ with A₁ ⊂ A₂, if

M{A₁ ∩ B}/M{B} ≤ M{A₂ ∩ B}/M{B} < 0.5,

then

M{A₁|B} = M{A₁ ∩ B}/M{B} ≤ M{A₂ ∩ B}/M{B} = M{A₂|B}.

If

M{A₁ ∩ B}/M{B} ≤ 0.5 ≤ M{A₂ ∩ B}/M{B},

then M{A₁|B} ≤ 0.5 ≤ M{A₂|B}. If

0.5 < M{A₁ ∩ B}/M{B} ≤ M{A₂ ∩ B}/M{B},

then we have

M{A₁|B} = (1 − M{A₁^c ∩ B}/M{B}) ∨ 0.5 ≤ (1 − M{A₂^c ∩ B}/M{B}) ∨ 0.5 = M{A₂|B}.

This means that M{·|B} satisfies the monotonicity axiom. For any event A, if

M{A ∩ B}/M{B} ≥ 0.5 and M{A^c ∩ B}/M{B} ≥ 0.5,

then we have M{A|B} + M{A^c|B} = 0.5 + 0.5 = 1 immediately. Otherwise, without loss of generality, suppose

M{A ∩ B}/M{B} < 0.5 < M{A^c ∩ B}/M{B};

then we have

M{A|B} + M{A^c|B} = M{A ∩ B}/M{B} + (1 − M{A ∩ B}/M{B}) = 1.

That is, M{·|B} satisfies the self-duality axiom. Finally, for any countable sequence {Aᵢ} of events, if M{Aᵢ|B} < 0.5 for all i, it follows from the countable subadditivity axiom that

M{∪_{i=1}^{∞} Aᵢ | B} ≤ M{(∪_{i=1}^{∞} Aᵢ) ∩ B}/M{B} ≤ Σ_{i=1}^{∞} M{Aᵢ ∩ B}/M{B} = Σ_{i=1}^{∞} M{Aᵢ|B}.

Suppose there is exactly one term greater than 0.5, say

M{A₁|B} ≥ 0.5,    M{Aᵢ|B} < 0.5 for i = 2, 3, …

If M{∪ᵢ Aᵢ|B} = 0.5, then we immediately have

M{∪_{i=1}^{∞} Aᵢ | B} ≤ Σ_{i=1}^{∞} M{Aᵢ|B}.

If M{∪ᵢ Aᵢ|B} > 0.5, we may prove the above inequality by the following facts:

A₁^c ∩ B ⊂ (∪_{i=2}^{∞} (Aᵢ ∩ B)) ∪ (∩_{i=1}^{∞} Aᵢ^c ∩ B),

M{A₁^c ∩ B} ≤ Σ_{i=2}^{∞} M{Aᵢ ∩ B} + M{∩_{i=1}^{∞} Aᵢ^c ∩ B},

M{∪_{i=1}^{∞} Aᵢ | B} = 1 − M{∩_{i=1}^{∞} Aᵢ^c ∩ B}/M{B},

Σ_{i=1}^{∞} M{Aᵢ|B} ≥ 1 − M{A₁^c ∩ B}/M{B} + Σ_{i=2}^{∞} M{Aᵢ ∩ B}/M{B}.

If there are at least two terms greater than 0.5, then the countable subadditivity is clearly true. Thus M{·|B} satisfies the countable subadditivity axiom. Hence M{·|B} is an uncertain measure. Furthermore, (Γ, L, M{·|B}) is an uncertainty space.
Example 4.12: Let ξ and η be two uncertain variables. Then we have

M{ξ = x | η = y} =
    M{ξ = x, η = y}/M{η = y},        if M{ξ = x, η = y}/M{η = y} < 0.5;
    1 − M{ξ ≠ x, η = y}/M{η = y},    if M{ξ ≠ x, η = y}/M{η = y} < 0.5;
    0.5,                             otherwise,

provided that M{η = y} > 0.

Definition 4.29 (Liu [132]) The conditional uncertainty distribution Φ: ℜ → [0, 1] of an uncertain variable ξ given B is defined by

Φ(x|B) = M{ξ ≤ x | B}    (4.61)

provided that M{B} > 0.


Example 4.13: Let ξ and η be uncertain variables. Then the conditional uncertainty distribution of ξ given η = y is

Φ(x | η = y) =
    M{ξ ≤ x, η = y}/M{η = y},        if M{ξ ≤ x, η = y}/M{η = y} < 0.5;
    1 − M{ξ > x, η = y}/M{η = y},    if M{ξ > x, η = y}/M{η = y} < 0.5;
    0.5,                             otherwise,

provided that M{η = y} > 0.

Definition 4.30 (Liu [132]) The conditional uncertainty density function φ of an uncertain variable ξ given B is a nonnegative function such that

Φ(x|B) = ∫_{−∞}^{x} φ(y|B)dy for all x ∈ ℜ,    (4.62)

∫_{−∞}^{+∞} φ(y|B)dy = 1    (4.63)

where Φ(x|B) is the conditional uncertainty distribution of ξ given B.


Definition 4.31 (Liu [132]) Let ξ be an uncertain variable. Then the conditional expected value of ξ given B is defined by

E[ξ|B] = ∫_{0}^{+∞} M{ξ ≥ r | B}dr − ∫_{−∞}^{0} M{ξ ≤ r | B}dr    (4.64)

provided that at least one of the two integrals is finite.

Following conditional uncertain measure and conditional expected value, we also have conditional variance, conditional moments, conditional critical values, conditional entropy as well as conditional convergence.

4.16 Uncertain Process

Definition 4.32 (Liu [133]) Let T be an index set and let (Γ, L, M) be an uncertainty space. An uncertain process is a measurable function from T × (Γ, L, M) to the set of real numbers, i.e., for each t ∈ T and any Borel set B of real numbers, the set

{γ ∈ Γ | X(t, γ) ∈ B}    (4.65)

is an event.


That is, an uncertain process Xₜ(γ) is a function of two variables such that the function X_{t*}(γ) is an uncertain variable for each t*. For each fixed γ*, the function Xₜ(γ*) is called a sample path of the uncertain process. An uncertain process Xₜ(γ) is said to be sample-continuous if the sample path is continuous for almost all γ.
Definition 4.33 (Liu [133]) An uncertain process Xₜ is said to have independent increments if

X_{t₁} − X_{t₀}, X_{t₂} − X_{t₁}, …, X_{t_k} − X_{t_{k−1}}    (4.66)

are independent uncertain variables for any times t₀ < t₁ < … < t_k. An uncertain process Xₜ is said to have stationary increments if, for any given t > 0, the increments X_{s+t} − X_s are identically distributed uncertain variables for all s > 0.
Uncertain Renewal Process

Definition 4.34 (Liu [133]) Let ξ₁, ξ₂, … be iid positive uncertain variables. Define S₀ = 0 and Sₙ = ξ₁ + ξ₂ + … + ξₙ for n ≥ 1. Then the uncertain process

Nₜ = max_{n≥0} {n | Sₙ ≤ t}    (4.67)

is called an uncertain renewal process.

If ξ₁, ξ₂, … denote the interarrival times of successive events, then Sₙ can be regarded as the waiting time until the occurrence of the nth event, and Nₜ is the number of renewals in (0, t]. Each sample path of Nₜ is a right-continuous and increasing step function taking only nonnegative integer values. Furthermore, the size of each jump of Nₜ is always 1. In other words, Nₜ has at most one renewal at each time. In particular, Nₜ does not jump at time 0. Since Nₜ ≥ n is equivalent to Sₙ ≤ t, we immediately have

M{Nₜ ≥ n} = M{Sₙ ≤ t}.    (4.68)

Theorem 4.39 (Liu [133]) Let Nₜ be an uncertain renewal process. Then we have

E[Nₜ] = Σ_{n=1}^{∞} M{Sₙ ≤ t}.    (4.69)

Proof: Since Nₜ takes only nonnegative integer values, we have

E[Nₜ] = ∫_{0}^{∞} M{Nₜ ≥ r}dr = Σ_{n=1}^{∞} ∫_{n−1}^{n} M{Nₜ ≥ r}dr
      = Σ_{n=1}^{∞} M{Nₜ ≥ n} = Σ_{n=1}^{∞} M{Sₙ ≤ t}.

The theorem is proved.
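As a hedged numeric sketch (not part of the book), the series (4.69) can be truncated once its terms become negligible, since M{Sₙ ≤ t} is nonincreasing in n. The rule used below for M{Sₙ ≤ t} is an arbitrary stand-in chosen purely for illustration, and all names are mine.

def expected_renewals(m_Sn, t, n_max=10_000, tol=1e-12):
    """Partial sum of E[N_t] = sum over n >= 1 of M{S_n <= t}."""
    total = 0.0
    for n in range(1, n_max + 1):
        term = m_Sn(n, t)
        total += term
        if term < tol:          # terms are nonincreasing, so we may stop
            break
    return total

# Illustrative stand-in for n -> M{S_n <= t}: values in [0, 1], nonincreasing in n.
m_Sn = lambda n, t: max(0.0, min(1.0, t / n - 1.0))
print(expected_renewals(m_Sn, 10.0))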


Canonical Process

Definition 4.35 (Liu [133]) An uncertain process Wₜ is said to be a canonical process if
(i) W₀ = 0 and Wₜ is sample-continuous,
(ii) Wₜ has stationary and independent increments,
(iii) W₁ is an uncertain variable with expected value 0 and variance 1.

Theorem 4.40 (Existence Theorem) There is a canonical process.

Proof: In fact, standard Brownian motion and standard C process are instances of canonical process.

Example 4.14: Let Bₜ be a standard Brownian motion, and Cₜ a standard C process. Then for each number a ∈ [0, 1], we may verify that

Wₜ = aBₜ + (1 − a)Cₜ    (4.70)

is a canonical process.
Theorem 4.41 Let Wₜ be a canonical process. Then E[Wₜ] = 0 for any t.

Proof: Let f(t) = E[Wₜ]. Then for any times t₁ and t₂, by using the property of stationary and independent increments, we obtain

f(t₁ + t₂) = E[W_{t₁+t₂}] = E[W_{t₁+t₂} − W_{t₂} + W_{t₂} − W₀] = E[W_{t₁}] + E[W_{t₂}] = f(t₁) + f(t₂)

which implies that there is a constant e such that f(t) = et. The theorem is proved via f(1) = 0.
Theorem 4.42 Let Wₜ be a canonical process. If for any t₁ and t₂ we have V[W_{t₁+t₂}] = V[W_{t₁}] + V[W_{t₂}], then V[Wₜ] = t for any t.

Proof: Let f(t) = V[Wₜ]. Then for any times t₁ and t₂, by using the property of stationary and independent increments as well as the variance condition, we obtain

f(t₁ + t₂) = V[W_{t₁+t₂}] = V[W_{t₁+t₂} − W_{t₂} + W_{t₂} − W₀] = V[W_{t₁}] + V[W_{t₂}] = f(t₁) + f(t₂)

which implies that there is a constant σ such that f(t) = σ²t. It follows from f(1) = 1 that σ² = 1 and f(t) = t. Hence the theorem is proved.


Theorem 4.43 Let Wₜ be a canonical process. If for any t₁ and t₂ we have

√V[W_{t₁+t₂}] = √V[W_{t₁}] + √V[W_{t₂}],

then V[Wₜ] = t² for any t.

Proof: Let f(t) = √V[Wₜ]. Then for any times t₁ and t₂, by using the property of stationary and independent increments as well as the variance condition, we obtain

f(t₁ + t₂) = √V[W_{t₁+t₂}] = √V[W_{t₁+t₂} − W_{t₂} + W_{t₂} − W₀] = √V[W_{t₁}] + √V[W_{t₂}] = f(t₁) + f(t₂)

which implies that there is a constant σ such that f(t) = σt. It follows from f(1) = 1 that σ = 1 and f(t) = t. The theorem is verified.
Definition 4.36 For any partition of the closed interval [0, t] with 0 = t₁ < t₂ < … < t_{k+1} = t, the mesh is written as

Δ = max_{1≤i≤k} |t_{i+1} − tᵢ|.

Let ρ > 0 be a real number. Then the ρ-variation of the uncertain process Wₜ is

lim_{Δ→0} Σ_{i=1}^{k} |W_{t_{i+1}} − W_{tᵢ}|^ρ    (4.71)

provided that the limit exists in mean square and is an uncertain process. Especially, the ρ-variation is called total variation if ρ = 1, and squared variation if ρ = 2.
Definition 4.37 (Liu [133]) Let Wₜ be a canonical process. Then et + σWₜ is called a derived canonical process, and the uncertain process

Gₜ = exp(et + σWₜ)    (4.72)

is called a geometric canonical process.

An Uncertain Stock Model

Assume that the stock price follows a geometric canonical process. Then we have an uncertain stock model in which the bond price Xₜ and the stock price Yₜ are determined by

Xₜ = X₀ exp(rt),
Yₜ = Y₀ exp(et + σWₜ)    (4.73)

where r is the riskless interest rate, e is the stock drift, σ is the stock diffusion, and Wₜ is a canonical process.

4.17 Uncertain Calculus

Let Wₜ be a canonical process, and dt an infinitesimal time interval. Then

dWₜ = W_{t+dt} − Wₜ    (4.74)

is an uncertain process with E[dWₜ] = 0 and dt² ≤ E[dWₜ²] ≤ dt.


Definition 4.38 (Liu [133]) Let Xₜ be an uncertain process and let Wₜ be a canonical process. For any partition of the closed interval [a, b] with a = t₁ < t₂ < … < t_{k+1} = b, the mesh is written as

Δ = max_{1≤i≤k} |t_{i+1} − tᵢ|.

Then the uncertain integral of Xₜ with respect to Wₜ is

∫_{a}^{b} Xₜ dWₜ = lim_{Δ→0} Σ_{i=1}^{k} X_{tᵢ}(W_{t_{i+1}} − W_{tᵢ})    (4.75)

provided that the limit exists in mean square and is an uncertain variable.
Example 4.15: Let Wₜ be a canonical process. Then for any partition 0 = t₁ < t₂ < … < t_{k+1} = s, we have

∫_{0}^{s} dWₜ = lim_{Δ→0} Σ_{i=1}^{k} (W_{t_{i+1}} − W_{tᵢ}) ≡ W_s − W₀ = W_s.

Example 4.16: Let Wₜ be a canonical process. Then for any partition 0 = t₁ < t₂ < … < t_{k+1} = s, we have

sW_s = Σ_{i=1}^{k} (t_{i+1}W_{t_{i+1}} − tᵢW_{tᵢ})
     = Σ_{i=1}^{k} tᵢ(W_{t_{i+1}} − W_{tᵢ}) + Σ_{i=1}^{k} W_{t_{i+1}}(t_{i+1} − tᵢ)
     → ∫_{0}^{s} t dWₜ + ∫_{0}^{s} Wₜ dt

as Δ → 0. It follows that

∫_{0}^{s} t dWₜ = sW_s − ∫_{0}^{s} Wₜ dt.


Theorem 4.44 (Liu [133]) Let Wₜ be a canonical process, and let h(t, w) be a twice continuously differentiable function. Define Xₜ = h(t, Wₜ). Then we have the following chain rule:

dXₜ = (∂h/∂t)(t, Wₜ)dt + (∂h/∂w)(t, Wₜ)dWₜ + (1/2)(∂²h/∂w²)(t, Wₜ)dWₜ².    (4.76)

Proof: Since the function h is twice continuously differentiable, by using the Taylor series expansion, the infinitesimal increment of Xₜ has a second-order approximation

ΔXₜ = (∂h/∂t)(t, Wₜ)Δt + (∂h/∂w)(t, Wₜ)ΔWₜ + (1/2)(∂²h/∂w²)(t, Wₜ)(ΔWₜ)²
      + (1/2)(∂²h/∂t²)(t, Wₜ)(Δt)² + (∂²h/∂t∂w)(t, Wₜ)ΔtΔWₜ.

Since we can ignore the terms (Δt)² and ΔtΔWₜ, the chain rule is proved because it makes

X_s = X₀ + ∫_{0}^{s} (∂h/∂t)(t, Wₜ)dt + ∫_{0}^{s} (∂h/∂w)(t, Wₜ)dWₜ + (1/2) ∫_{0}^{s} (∂²h/∂w²)(t, Wₜ)dWₜ²

for any s ≥ 0.
Remark 4.10: The infinitesimal increment dWₜ in (4.76) may be replaced with the derived canonical process

dYₜ = uₜdt + vₜdWₜ    (4.77)

where uₜ is an absolutely integrable uncertain process, and vₜ is a square integrable uncertain process, thus producing

dh(t, Yₜ) = (∂h/∂t)(t, Yₜ)dt + (∂h/∂w)(t, Yₜ)dYₜ + (1/2)(∂²h/∂w²)(t, Yₜ)vₜ²dWₜ².    (4.78)

Example 4.17: Applying the chain rule, we obtain the following formula

d(tWₜ) = Wₜdt + tdWₜ.

Hence we have

sW_s = ∫_{0}^{s} d(tWₜ) = ∫_{0}^{s} Wₜdt + ∫_{0}^{s} tdWₜ.

That is,

∫_{0}^{s} tdWₜ = sW_s − ∫_{0}^{s} Wₜdt.


Theorem 4.45 (Liu [133], Integration by Parts) Suppose that Wₜ is a canonical process and F(t) is an absolutely continuous function. Then

∫_{0}^{s} F(t)dWₜ = F(s)W_s − ∫_{0}^{s} Wₜ dF(t).    (4.79)

Proof: By defining h(t, Wₜ) = F(t)Wₜ and using the chain rule, we get

d(F(t)Wₜ) = Wₜ dF(t) + F(t)dWₜ.

Thus

F(s)W_s = ∫_{0}^{s} d(F(t)Wₜ) = ∫_{0}^{s} Wₜ dF(t) + ∫_{0}^{s} F(t)dWₜ

which is just (4.79).

4.18 Uncertain Differential Equation

Definition 4.39 (Liu [133]) Suppose Wₜ is a canonical process, and f and g are some given functions. Then

dXₜ = f(t, Xₜ)dt + g(t, Xₜ)dWₜ    (4.80)

is called an uncertain differential equation. A solution is an uncertain process Xₜ that satisfies (4.80) identically in t.

Remark 4.11: Note that there is no precise definition for the terms dXₜ, dt and dWₜ in the uncertain differential equation (4.80). The mathematically meaningful form is the uncertain integral equation

X_s = X₀ + ∫_{0}^{s} f(t, Xₜ)dt + ∫_{0}^{s} g(t, Xₜ)dWₜ.    (4.81)

However, the differential form is more convenient for us. This is the main reason why we accept the differential form.

Example 4.18: Let Wₜ be a canonical process. Then the uncertain differential equation

dXₜ = adt + bdWₜ

has a solution Xₜ = at + bWₜ.
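As a hedged numeric sketch (not part of the book), the proof of Theorem 4.40 notes that standard Brownian motion is one instance of a canonical process, so sample paths of Example 4.18 can be simulated with Brownian increments; with constant coefficients the Euler discretization of (4.80) reproduces the exact solution Xₜ = at + bWₜ along each path. All names and parameter values below are mine.

import numpy as np

rng = np.random.default_rng(0)
a, b, T, n = 1.0, 0.5, 1.0, 1_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments of W_t
W = np.concatenate(([0.0], np.cumsum(dW)))

X = np.empty(n + 1)
X[0] = 0.0
for i in range(n):                          # Euler step of (4.80) with f = a, g = b
    X[i + 1] = X[i] + a * dt + b * dW[i]

t = np.linspace(0.0, T, n + 1)
print(np.max(np.abs(X - (a * t + b * W))))  # ~0: matches the exact solution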

Appendix A

Measurable Sets

Algebra and σ-algebra are very important and fundamental concepts in measure theory.

Definition A.1 Let Ω be a nonempty set. A collection A is called an algebra over Ω if the following conditions hold:
(a) Ω ∈ A;
(b) if A ∈ A, then A^c ∈ A;
(c) if Aᵢ ∈ A for i = 1, 2, …, n, then ∪_{i=1}^{n} Aᵢ ∈ A.
If the condition (c) is replaced with closure under countable union, then A is called a σ-algebra over Ω.
Example A.1: Assume that Ω is a nonempty set. Then {∅, Ω} is the smallest σ-algebra over Ω, and the power set P (all subsets of Ω) is the largest σ-algebra over Ω.

Example A.2: Let A be the set of all finite disjoint unions of all intervals of the form (−∞, a], (a, b], (b, ∞) and ∅. Then A is an algebra over ℜ, but not a σ-algebra because Aᵢ = (0, (i − 1)/i] ∈ A for all i but

∪_{i=1}^{∞} Aᵢ = (0, 1) ∉ A.
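As a hedged computational aside (not part of the appendix), over a finite universe the σ-algebra generated by a collection of sets can be produced by closing under complement, union, and intersection until a fixed point is reached; the function name below is mine.

from itertools import combinations

def generated_sigma_algebra(universe: frozenset, sets):
    """Close {empty, universe} plus the given sets under complement, union, intersection."""
    family = {frozenset(), universe} | {frozenset(s) for s in sets}
    changed = True
    while changed:
        changed = False
        for a, b in list(combinations(family, 2)):
            for new in (universe - a, universe - b, a | b, a & b):
                if new not in family:
                    family.add(new)
                    changed = True
    return family

omega = frozenset({1, 2, 3, 4})
sigma = generated_sigma_algebra(omega, [{1}])
print(sorted(tuple(sorted(s)) for s in sigma))
# [(), (1,), (1, 2, 3, 4), (2, 3, 4)]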

Theorem A.1 A σ-algebra A is closed under difference, countable union, countable intersection, upper limit, lower limit, and limit. That is,

A₂ \ A₁ ∈ A;    ∪_{i=1}^{∞} Aᵢ ∈ A;    ∩_{i=1}^{∞} Aᵢ ∈ A;    (A.1)

lim sup_{i→∞} Aᵢ = ∩_{k=1}^{∞} ∪_{i=k}^{∞} Aᵢ ∈ A;    (A.2)

lim inf_{i→∞} Aᵢ = ∪_{k=1}^{∞} ∩_{i=k}^{∞} Aᵢ ∈ A;    (A.3)

lim_{i→∞} Aᵢ ∈ A.    (A.4)

Theorem A.2 The intersection of any collection of σ-algebras is a σ-algebra. Furthermore, for any nonempty class C, there is a unique minimal σ-algebra containing C.

Theorem A.3 (Monotone Class Theorem) Assume that A₀ is an algebra over Ω, and C is a monotone class of subsets of Ω (i.e., if Aᵢ ∈ C and Aᵢ ↑ A or Aᵢ ↓ A, then A ∈ C). If A₀ ⊂ C and σ(A₀) is the smallest σ-algebra containing A₀, then σ(A₀) ⊂ C.

Definition A.2 Let Ω be a nonempty set, and A a σ-algebra over Ω. Then (Ω, A) is called a measurable space, and any elements in A are called measurable sets.

Theorem A.4 The smallest σ-algebra B containing all open intervals of ℜ exists; it is called the Borel algebra of ℜ, and any elements in B are called Borel sets.

The Borel algebra B is a special σ-algebra over ℜ. It follows from Theorem A.2 that the Borel algebra is unique. It has been proved that open sets, closed sets, the set of rational numbers, the set of irrational numbers, and countable sets of real numbers are all Borel sets.
Example A.3: We divide the interval [0, 1] into three equal open intervals, from which we choose the middle one, i.e., (1/3, 2/3). Then we divide each of the remaining two intervals into three equal open intervals, and choose the middle one in each case, i.e., (1/9, 2/9) and (7/9, 8/9). We repeat this process and obtain Dᵢⱼ for j = 1, 2, …, 2^{i−1} and i = 1, 2, … Note that {Dᵢⱼ} is a sequence of mutually disjoint open intervals. Without loss of generality, suppose Dᵢ₁ < Dᵢ₂ < … < D_{i,2^{i−1}} for i = 1, 2, … Define the set

D = ∪_{i=1}^{∞} ∪_{j=1}^{2^{i−1}} Dᵢⱼ.    (A.5)

Then C = [0, 1] \ D is called the Cantor set. In other words, x ∈ C if and only if x can be expressed in ternary form using only digits 0 and 2, i.e.,

x = Σ_{i=1}^{∞} aᵢ/3^i    (A.6)

where aᵢ = 0 or 2 for i = 1, 2, … The Cantor set is closed, uncountable, and a Borel set.

Let Ω₁, Ω₂, …, Ωₙ be any sets (not necessarily subsets of the same space). The product Ω = Ω₁ × Ω₂ × … × Ωₙ is the set of all ordered n-tuples of the form (x₁, x₂, …, xₙ), where xᵢ ∈ Ωᵢ for i = 1, 2, …, n.


Definition A.3 Let Aᵢ be σ-algebras over Ωᵢ, i = 1, 2, …, n, respectively. Write Ω = Ω₁ × Ω₂ × … × Ωₙ. A measurable rectangle in Ω is a set A = A₁ × A₂ × … × Aₙ, where Aᵢ ∈ Aᵢ for i = 1, 2, …, n. The smallest σ-algebra containing all measurable rectangles of Ω is called the product σ-algebra, denoted by A = A₁ × A₂ × … × Aₙ.

Note that the product σ-algebra A is the smallest σ-algebra containing measurable rectangles, rather than the product of A₁, A₂, …, Aₙ.

Product σ-algebras may be easily extended to the countably infinite case by defining a measurable rectangle as a set of the form

A = A₁ × A₂ × …

where Aᵢ ∈ Aᵢ for all i and Aᵢ = Ωᵢ for all but finitely many i. The smallest σ-algebra containing all measurable rectangles of Ω = Ω₁ × Ω₂ × … is called the product σ-algebra, denoted by A = A₁ × A₂ × …

Appendix B

Classical Measures

This appendix introduces the concepts of classical measure, measure space, Lebesgue measure, and product measure.

Definition B.1 Let Ω be a nonempty set, and A a σ-algebra over Ω. A classical measure π is a set function on A satisfying

Axiom 1. (Nonnegativity) π{A} ≥ 0 for any A ∈ A;

Axiom 2. (Countable Additivity) For every countable sequence of mutually disjoint measurable sets {Aᵢ}, we have

π{∪_{i=1}^{∞} Aᵢ} = Σ_{i=1}^{∞} π{Aᵢ}.    (B.1)

Example B.1: Length, area and volume are instances of the measure concept.

Definition B.2 Let Ω be a nonempty set, A a σ-algebra over Ω, and π a measure on A. Then the triplet (Ω, A, π) is called a measure space.

It has been proved that there is a unique measure π on the Borel algebra of ℜ such that π{(a, b]} = b − a for any interval (a, b] of ℜ.

Definition B.3 The measure π on the Borel algebra of ℜ such that

π{(a, b]} = b − a,    ∀(a, b]

is called the Lebesgue measure.


Theorem B.1 (Product Measure Theorem) Let (Ωᵢ, Aᵢ, πᵢ), i = 1, 2, …, n be measure spaces such that πᵢ{Ωᵢ} < ∞ for i = 1, 2, …, n. Write

Ω = Ω₁ × Ω₂ × … × Ωₙ,    A = A₁ × A₂ × … × Aₙ.

Then there is a unique measure π on A such that

π{A₁ × A₂ × … × Aₙ} = π₁{A₁} × π₂{A₂} × … × πₙ{Aₙ}    (B.2)

for every measurable rectangle A₁ × A₂ × … × Aₙ. The measure π is called the product of π₁, π₂, …, πₙ, denoted by π = π₁ × π₂ × … × πₙ. The triplet (Ω, A, π) is called the product measure space.
Theorem B.2 (Infinite Product Measure Theorem) Assume that (Ωᵢ, Aᵢ, πᵢ) are measure spaces such that πᵢ{Ωᵢ} = 1 for i = 1, 2, … Write

Ω = Ω₁ × Ω₂ × …,    A = A₁ × A₂ × …

Then there is a unique measure π on A such that

π{A₁ × … × Aₙ × Ωₙ₊₁ × Ωₙ₊₂ × …} = π₁{A₁} × π₂{A₂} × … × πₙ{Aₙ}    (B.3)

for any measurable rectangle A₁ × … × Aₙ × Ωₙ₊₁ × Ωₙ₊₂ × … and all n = 1, 2, … The measure π is called the infinite product, denoted by π = π₁ × π₂ × … The triplet (Ω, A, π) is called the infinite product measure space.

Appendix C

Measurable Functions

This appendix introduces the concepts of measurable function, simple function, step function, absolutely continuous function, singular function, and Cantor function.

Definition C.1 A function f from (Ω, A) to the set of real numbers is said to be measurable if

f⁻¹(B) = {ω ∈ Ω | f(ω) ∈ B} ∈ A    (C.1)

for any Borel set B of real numbers. If Ω is a Borel set, then A is always assumed to be the Borel algebra over Ω.

Theorem C.1 The function f is measurable from (Ω, A) to ℜ if and only if f⁻¹(I) ∈ A for any open interval I of ℜ.

Definition C.2 A function f: ℜ → ℜ is said to be continuous if for any given x ∈ ℜ and ε > 0, there exists a δ > 0 such that |f(y) − f(x)| < ε whenever |y − x| < δ.

Example C.1: Any continuous function f is measurable, because f⁻¹(I) is an open set (not necessarily an interval) of ℜ for any open interval I ⊂ ℜ.

Example C.2: A monotone function f from ℜ to ℜ is measurable because f⁻¹(I) is an interval for any interval I.

Example C.3: A function is said to be simple if it takes a finite set of values. A function is said to be step if it takes a countably infinite set of values. Generally speaking, a step (or simple) function is not necessarily measurable, except when it can be written as f(x) = aᵢ if x ∈ Aᵢ, where the Aᵢ are measurable sets, i = 1, 2, …, respectively.


Example C.4: Let f be a measurable function from (Ω, A) to ℜ. Then its positive part and negative part

f⁺(ω) = { f(ω), if f(ω) ≥ 0; 0, otherwise },    f⁻(ω) = { −f(ω), if f(ω) ≤ 0; 0, otherwise }

are measurable functions, because for t ≥ 0,

{f⁺(ω) > t} = {f(ω) > t},    {f⁻(ω) > t} = {f(ω) < −t},

while for t < 0 both sets are the whole space Ω.

Example C.5: Let f₁ and f₂ be measurable functions from (Ω, A) to ℜ. Then f₁ ∨ f₂ and f₁ ∧ f₂ are measurable functions, because

{f₁(ω) ∨ f₂(ω) > t} = {f₁(ω) > t} ∪ {f₂(ω) > t},
{f₁(ω) ∧ f₂(ω) > t} = {f₁(ω) > t} ∩ {f₂(ω) > t}.
Theorem C.2 Let {fᵢ} be a sequence of measurable functions from (Ω, A) to ℜ. Then the following functions are measurable:

sup_{1≤i<∞} fᵢ(ω);    inf_{1≤i<∞} fᵢ(ω);    lim sup_{i→∞} fᵢ(ω);    lim inf_{i→∞} fᵢ(ω).    (C.2)

Especially, if lim_{i→∞} fᵢ(ω) exists, then it is also a measurable function.

Theorem C.3 Let f be a nonnegative measurable function from (Ω, A) to ℜ. Then there exists an increasing sequence {hᵢ} of nonnegative simple measurable functions such that

lim_{i→∞} hᵢ(ω) = f(ω),    ∀ω ∈ Ω.    (C.3)

Furthermore, the functions hᵢ may be defined as follows,

hᵢ(ω) = { (k − 1)/2^i, if (k − 1)/2^i ≤ f(ω) < k/2^i for some k = 1, 2, …, i·2^i;
          i,           if f(ω) ≥ i }    (C.4)

for i = 1, 2, …
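As a hedged sketch (not part of the appendix), the staircase construction (C.4) is easy to evaluate pointwise; the helper h below is mine and works with the value y = f(ω) directly.

def h(i: int, y: float) -> float:
    """The i-th dyadic staircase approximation of a value y = f(omega) >= 0."""
    if y >= i:
        return float(i)
    k = int(y * 2**i) + 1          # the unique k with (k-1)/2^i <= y < k/2^i
    return (k - 1) / 2**i

f = lambda x: x * x                # any nonnegative measurable f will do
for i in (1, 2, 4, 8, 16):
    print(i, h(i, f(1.3)))         # nondecreasing, approaching f(1.3) = 1.69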
Definition C.3 A function f: ℜ → ℜ is said to be Lipschitz continuous if there is a positive number K such that

|f(y) − f(x)| ≤ K|y − x|,    ∀x, y ∈ ℜ.    (C.5)


Definition C.4 A function f: ℜ → ℜ is said to be absolutely continuous if for any given ε > 0, there exists a small number δ > 0 such that

Σ_{i=1}^{m} |f(yᵢ) − f(xᵢ)| < ε    (C.6)

for every finite disjoint class {(xᵢ, yᵢ), i = 1, 2, …, m} of bounded open intervals for which

Σ_{i=1}^{m} |yᵢ − xᵢ| < δ.    (C.7)

Definition C.5 A continuous and increasing function f: ℜ → ℜ is said to be singular if f is not a constant and its derivative f′ = 0 almost everywhere.

Example C.6: Let C be the Cantor set. We define a function g on C as follows,

g(Σ_{i=1}^{∞} aᵢ/3^i) = Σ_{i=1}^{∞} aᵢ/2^{i+1}    (C.8)

where aᵢ = 0 or 2 for i = 1, 2, … Then g(x) is an increasing function and g(C) = [0, 1]. The Cantor function is defined on [0, 1] as follows,

f(x) = sup{g(y) | y ∈ C, y ≤ x}.    (C.9)

It is clear that the Cantor function f(x) is increasing and such that

f(0) = 0,    f(1) = 1,    f(x) = g(x),    ∀x ∈ C.

Moreover, f(x) is a continuous function and f′(x) = 0 almost everywhere. Thus the Cantor function f is a singular function.

Appendix D

Lebesgue Integral

This appendix introduces the Lebesgue integral, the Lebesgue-Stieltjes integral, the monotone convergence theorem, the Lebesgue dominated convergence theorem, and the Fubini theorem.

Definition D.1 Let h(x) be a nonnegative simple measurable function defined by

h(x) = cᵢ, if x ∈ Aᵢ, i = 1, 2, …, m,

where A₁, A₂, …, Aₘ are Borel sets. Then the Lebesgue integral of h on a Borel set A is

∫_A h(x)dx = Σ_{i=1}^{m} cᵢ π{A ∩ Aᵢ}.    (D.1)

Definition D.2 Let f(x) be a nonnegative measurable function on the Borel set A, and {hᵢ(x)} a sequence of nonnegative simple measurable functions such that hᵢ(x) ↑ f(x) as i → ∞. Then the Lebesgue integral of f on A is

∫_A f(x)dx = lim_{i→∞} ∫_A hᵢ(x)dx.    (D.2)
Definition D.3 Let f(x) be a measurable function on the Borel set A, and define

f⁺(x) = { f(x), if f(x) > 0; 0, otherwise },    f⁻(x) = { −f(x), if f(x) < 0; 0, otherwise }.

Then the Lebesgue integral of f on A is

∫_A f(x)dx = ∫_A f⁺(x)dx − ∫_A f⁻(x)dx    (D.3)

provided that at least one of ∫_A f⁺(x)dx and ∫_A f⁻(x)dx is finite.

Definition D.4 Let f(x) be a measurable function on the Borel set A. If both of ∫_A f⁺(x)dx and ∫_A f⁻(x)dx are finite, then the function f is said to be integrable on A.
Theorem D.1 (Monotone Convergence Theorem) Let {fᵢ} be an increasing sequence of measurable functions on A. If there is an integrable function g such that fᵢ(x) ≥ g(x) for all i, then we have

∫_A lim_{i→∞} fᵢ(x)dx = lim_{i→∞} ∫_A fᵢ(x)dx.    (D.4)

Example D.1: The condition fᵢ ≥ g cannot be removed in the monotone convergence theorem. For example, let fᵢ(x) = 0 if x ≤ i and −1 otherwise. Then fᵢ(x) ↑ 0 everywhere on ℜ. However,

∫_ℜ lim_{i→∞} fᵢ(x)dx = 0 ≠ −∞ = lim_{i→∞} ∫_ℜ fᵢ(x)dx.

Theorem D.2 (Lebesgue Dominated Convergence Theorem) Let {fᵢ} be a sequence of measurable functions on A whose limit lim_{i→∞} fᵢ(x) exists a.s. If there is an integrable function g such that |fᵢ(x)| ≤ g(x) for any i, then we have

∫_A lim_{i→∞} fᵢ(x)dx = lim_{i→∞} ∫_A fᵢ(x)dx.    (D.5)

Example D.2: The condition |fᵢ| ≤ g in the Lebesgue dominated convergence theorem cannot be removed. Let A = (0, 1), fᵢ(x) = i if x ∈ (0, 1/i) and 0 otherwise. Then fᵢ(x) → 0 everywhere on A. However,

∫_A lim_{i→∞} fᵢ(x)dx = 0 ≠ 1 = lim_{i→∞} ∫_A fᵢ(x)dx.

Theorem D.3 (Fubini Theorem) Let f(x, y) be an integrable function on ℜ². Then we have
(a) f(x, y) is an integrable function of x for almost all y;
(b) f(x, y) is an integrable function of y for almost all x;
(c) ∫_{ℜ²} f(x, y)dxdy = ∫_ℜ [∫_ℜ f(x, y)dy] dx = ∫_ℜ [∫_ℜ f(x, y)dx] dy.

Theorem D.4 Let Φ be an increasing and right-continuous function on ℜ. Then there exists a unique measure π on the Borel algebra of ℜ such that

π{(a, b]} = Φ(b) − Φ(a)    (D.6)

for all a and b with a < b. Such a measure is called the Lebesgue-Stieltjes measure corresponding to Φ.


Definition D.5 Let Φ(x) be an increasing and right-continuous function on ℜ, and let h(x) be a nonnegative simple measurable function, i.e.,

h(x) = cᵢ, if x ∈ Aᵢ, i = 1, 2, …, m.

Then the Lebesgue-Stieltjes integral of h on the Borel set A is

∫_A h(x)dΦ(x) = Σ_{i=1}^{m} cᵢ π{A ∩ Aᵢ}    (D.7)

where π is the Lebesgue-Stieltjes measure corresponding to Φ.


Definition D.6 Let f(x) be a nonnegative measurable function on the Borel set A, and let {hᵢ(x)} be a sequence of nonnegative simple measurable functions such that hᵢ(x) ↑ f(x) as i → ∞. Then the Lebesgue-Stieltjes integral of f on A is

∫_A f(x)dΦ(x) = lim_{i→∞} ∫_A hᵢ(x)dΦ(x).    (D.8)
Definition D.7 Let f(x) be a measurable function on the Borel set A, and define

f⁺(x) = { f(x), if f(x) > 0; 0, otherwise },    f⁻(x) = { −f(x), if f(x) < 0; 0, otherwise }.

Then the Lebesgue-Stieltjes integral of f on A is

∫_A f(x)dΦ(x) = ∫_A f⁺(x)dΦ(x) − ∫_A f⁻(x)dΦ(x)    (D.9)

provided that at least one of ∫_A f⁺(x)dΦ(x) and ∫_A f⁻(x)dΦ(x) is finite.

Appendix E

Euler-Lagrange Equation

Let

L(φ) = ∫_a^b F(x, φ(x), φ′(x))dx    (E.1)

where F is a known function with continuous first and second partial derivatives. If L has an extremum (maximum or minimum) at φ(x), then

∂F/∂φ(x) − (d/dx)(∂F/∂φ′(x)) = 0    (E.2)

which is called the Euler-Lagrange equation. If φ′(x) is not involved, then the Euler-Lagrange equation reduces to

∂F/∂φ(x) = 0.    (E.3)

Note that the Euler-Lagrange equation is only a necessary condition for the existence of an extremum. However, if the existence of an extremum is clear and there exists only one solution to the Euler-Lagrange equation, then this solution must be the curve for which the extremum is achieved.

Appendix F

Maximum Uncertainty Principle

An event has no uncertainty if its measure is 1 (or 0) because we may believe that the event occurs (or not). An event is the most uncertain if its measure is 0.5 because the event and its complement may be regarded as equally likely.

In practice, if there is no information about the measure of an event, we should assign 0.5 to it. Sometimes, only partial information is available. In that case, the value of the measure may be specified within some range. What value should the measure take? For safety, we should assign it the value as close to 0.5 as possible. This is the maximum uncertainty principle proposed by Liu [132].

Maximum Uncertainty Principle: For any event, if there are multiple reasonable values that a measure may take, then the value as close to 0.5 as possible is assigned to the event.

Perhaps the reader would like to ask what values are reasonable. The answer is problem-dependent. At least, the values should ensure that all axioms about the measure are satisfied, and should be consistent with the given information.
Example F.1: Let Λ be an event. Based on some given information, the measure value M{Λ} lies in the interval [a, b]. By using the maximum uncertainty principle, we should assign

M{Λ} = { a, if 0.5 < a ≤ b;  0.5, if a ≤ 0.5 ≤ b;  b, if a ≤ b < 0.5 }.
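As a hedged one-liner (not part of the appendix), the assignment in Example F.1 is just the clamping of 0.5 into the interval [a, b]; the function name below is mine.

def max_uncertainty(a: float, b: float) -> float:
    """Value in [a, b] closest to 0.5, per the maximum uncertainty principle."""
    return min(max(0.5, a), b)

print(max_uncertainty(0.6, 0.9))   # 0.6
print(max_uncertainty(0.2, 0.9))   # 0.5
print(max_uncertainty(0.1, 0.3))   # 0.3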

Appendix G

Uncertainty Relations

Probability theory is a branch of mathematics based on the normality, nonnegativity, and countable additivity axioms. In fact, those three axioms may be replaced with four axioms: normality, monotonicity, self-duality, and countable additivity. Thus all of probability, credibility, chance, and uncertain measures meet the normality, monotonicity and self-duality axioms. The essential difference among those measures is how they determine the measure of a union. For any mutually disjoint events {Aᵢ} with supᵢ π{Aᵢ} < 0.5, if π satisfies the countable additivity axiom, i.e.,

π{∪_{i=1}^{∞} Aᵢ} = Σ_{i=1}^{∞} π{Aᵢ},    (G.1)

then π is a probability measure; if π satisfies the maximality axiom, i.e.,

π{∪_{i=1}^{∞} Aᵢ} = sup_{1≤i<∞} π{Aᵢ},    (G.2)

then π is a credibility measure; if π satisfies the countable subadditivity axiom, i.e.,

π{∪_{i=1}^{∞} Aᵢ} ≤ Σ_{i=1}^{∞} π{Aᵢ},    (G.3)

then π is an uncertain measure.

Since additivity and maximality are special cases of subadditivity, probability and credibility are special cases of chance measure, and all three of them are in the category of uncertain measure. This fact also implies that random variables and fuzzy variables are special cases of hybrid variables, and all three of them are instances of uncertain variables.



[Figure G.1: Relations among Uncertainties. The diagram nests the probability model, the credibility model, and the hybrid model inside the uncertainty model.]


Some Questions
Is the real world as extreme as probability model? Is the real world as extreme
as credibility model? Is the real world as extreme as hybrid model? In many
cases, the answer is negative. Perhaps uncertainty model is more realistic.

Bibliography
[1] Alefeld G, Herzberger J, Introduction to Interval Computations, Academic
Press, New York, 1983.
[2] Atanassov KT, Intuitionistic Fuzzy Sets: Theory and Applications, Physica-Verlag, Heidelberg, 1999.
[3] Bamber D, Goodman IR, Nguyen HT, Extension of the concept of propositional deduction from classical logic to probability: An overview of
probability-selection approaches, Information Sciences, Vol.131, 195-250,
2001.
[4] Bedford T, and Cooke MR, Probabilistic Risk Analysis, Cambridge University
Press, 2001.
[5] Bandemer H, and Nather W, Fuzzy Data Analysis, Kluwer, Dordrecht, 1992.
[6] Bellman RE, and Zadeh LA, Decision making in a fuzzy environment, Management Science, Vol.17, 141-164, 1970.
[7] Bhandari D, and Pal NR, Some new information measures of fuzzy sets,
Information Sciences, Vol.67, 209-228, 1993.
[8] Black F, and Scholes M, The pricing of options and corporate liabilities, Journal of Political Economy, Vol.81, 637-654, 1973.
[9] Bouchon-Meunier B, Mesiar R, Ralescu DA, Linear non-additive set-functions, International Journal of General Systems, Vol.33, No.1, 89-98,
2004.
[10] Buckley JJ, Possibility and necessity in optimization, Fuzzy Sets and Systems,
Vol.25, 1-13, 1988.
[11] Buckley JJ, Stochastic versus possibilistic programming, Fuzzy Sets and Systems, Vol.34, 173-177, 1990.
[12] Cadenas JM, and Verdegay JL, Using fuzzy numbers in linear programming,
IEEE Transactions on Systems, Man and Cybernetics, Part B, Vol.27, No.6,
1016-1022, 1997.
[13] Campos L, and González A, A subjective approach for ranking fuzzy numbers, Fuzzy Sets and Systems, Vol.29, 145-153, 1989.
[14] Campos L, and Verdegay JL, Linear programming problems and ranking of
fuzzy numbers, Fuzzy Sets and Systems, Vol.32, 1-11, 1989.
[15] Campos FA, Villar J, and Jimenez M, Robust solutions using fuzzy chance
constraints, Engineering Optimization, Vol.38, No.6, 627-645, 2006.


[16] Carlsson C, Fuller R, and Majlender P, A possibilistic approach to selecting


portfolios with highest utility score, Fuzzy Sets and Systems, Vol.131, No.1,
13-21, 2002.
[17] Chanas S, and Kuchta D, Multiobjective programming in optimization of interval objective functions: a generalized approach, European Journal of Operational Research, Vol.94, 594-598, 1996.
[18] Chen A, and Ji ZW, Path finding under uncertainty, Journal of Advance
Transportation, Vol.39, No.1, 19-37, 2005.
[19] Chen SJ, and Hwang CL, Fuzzy Multiple Attribute Decision Making: Methods
and Applications, Springer-Verlag, Berlin, 1992.
[20] Chen Y, Fung RYK, Yang J, Fuzzy expected value modelling approach for
determining target values of engineering characteristics in QFD, International
Journal of Production Research, Vol.43, No.17, 3583-3604, 2005.
[21] Chen Y, Fung RYK, Tang JF, Rating technical attributes in fuzzy QFD by
integrating fuzzy weighted average method and fuzzy expected value operator,
European Journal of Operational Research, Vol.174, No.3, 1553-1566, 2006.
[22] Choquet G, Theory of capacities, Annales de l'Institut Fourier, Vol.5, 131-295, 1954.
[23] Dai W, Reflection principle of Liu process, http://orsc.edu.cn/process/071110.pdf.
[24] Das B, Maity K, Maiti A, A two warehouse supply-chain model under possibility/necessity/credibility measures, Mathematical and Computer Modelling,
Vol.46, No.3-4, 398-409, 2007.
[25] De Cooman G, Possibility theory I-III, International Journal of General Systems, Vol.25, 291-371, 1997.
[26] De Luca A, and Termini S, A definition of nonprobabilistic entropy in the
setting of fuzzy sets theory, Information and Control, Vol.20, 301-312, 1972.
[27] Dempster AP, Upper and lower probabilities induced by a multivalued mapping, Ann. Math. Stat., Vol.38, No.2, 325-339, 1967.
[28] Dubois D, and Prade H, Operations on fuzzy numbers, International Journal
of System Sciences, Vol.9, 613-626, 1978.
[29] Dubois D, and Prade H, Fuzzy Sets and Systems, Theory and Applications,
Academic Press, New York, 1980.
[30] Dubois D, and Prade H, Twofold fuzzy sets: An approach to the representation of sets with fuzzy boundaries based on possibility and necessity measures,
The Journal of Fuzzy Mathematics, Vol.3, No.4, 53-76, 1983.
[31] Dubois D, and Prade H, Fuzzy logics and generalized modus ponens revisited,
Cybernetics and Systems, Vol.15, 293-331, 1984.
[32] Dubois D, and Prade H, Fuzzy cardinality and the modeling of imprecise
quantification, Fuzzy Sets and Systems, Vol.16, 199-230, 1985.
[33] Dubois D, and Prade H, A note on measures of specificity for fuzzy sets,
International Journal of General Systems, Vol.10, 279-283, 1985.


[34] Dubois D, and Prade H, The mean value of a fuzzy number, Fuzzy Sets and
Systems, Vol.24, 279-300, 1987.
[35] Dubois D, and Prade H, Twofold fuzzy sets and rough sets: some issues in knowledge representation, Fuzzy Sets and Systems, Vol.23, 3-18, 1987.
[36] Dubois D, and Prade H, Possibility Theory: An Approach to Computerized
Processing of Uncertainty, Plenum, New York, 1988.
[37] Dubois D, and Prade H, Rough fuzzy sets and fuzzy rough sets, International
Journal of General Systems, Vol.17, 191-200, 1990.
[38] Dunyak J, Saad IW, and Wunsch D, A theory of independent fuzzy probability for system reliability, IEEE Transactions on Fuzzy Systems, Vol.7, No.3,
286-294, 1999.
[39] Esogbue AO, and Liu B, Reservoir operations optimization via fuzzy criterion
decision processes, Fuzzy Optimization and Decision Making, Vol.5, No.3,
289-305, 2006.
[40] Feng X, and Liu YK, Measurability criteria for fuzzy random vectors, Fuzzy
Optimization and Decision Making, Vol.5, No.3, 245-253, 2006.
[41] Feng Y, Yang LX, A two-objective fuzzy k-cardinality assignment problem,
Journal of Computational and Applied Mathematics, Vol.197, No.1, 233-244,
2006.
[42] Fung RYK, Chen YZ, Chen L, A fuzzy expected value-based goal programing
model for product planning using quality function deployment, Engineering
Optimization, Vol.37, No.6, 633-647, 2005.
[43] Gao J, and Liu B, New primitive chance measures of fuzzy random event,
International Journal of Fuzzy Systems, Vol.3, No.4, 527-531, 2001.
[44] Gao J, Liu B, and Gen M, A hybrid intelligent algorithm for stochastic multilevel programming, IEEJ Transactions on Electronics, Information and Systems, Vol.124-C, No.10, 1991-1998, 2004.
[45] Gao J, and Liu B, Fuzzy multilevel programming with a hybrid intelligent
algorithm, Computers & Mathematics with Applications, Vol.49, 1539-1548,
2005.
[46] Gao J, and Lu M, Fuzzy quadratic minimum spanning tree problem, Applied
Mathematics and Computation, Vol.164, No.3, 773-788, 2005.
[47] Gao J, and Feng X, A hybrid intelligent algorithm for fuzzy dynamic inventory
problem, Journal of Information and Computing Science, Vol.1, No.4, 235-244, 2006.
[48] Gao J, Credibilistic game with fuzzy information, Journal of Uncertain Systems, Vol.1, No.1, 74-80, 2007.
[49] Gao J, and Zhou J, Uncertain Process Online, http://orsc.edu.cn/process.
[50] Gao J, Credibilistic option pricing: a new model, http://orsc.edu.cn/process/
071124.pdf.
[51] Gao X, Option pricing formula for hybrid stock model with randomness and
fuzziness, http://orsc.edu.cn/process/080112.pdf.
[52] Gil MA, Lopez-Diaz M, Ralescu DA, Overview on the development of fuzzy
random variables, Fuzzy Sets and Systems, Vol.157, No.19, 2546-2557, 2006.
[53] González A, A study of the ranking function approach through mean values,
Fuzzy Sets and Systems, Vol.35, 29-41, 1990.
[54] Guan J, and Bell DA, Evidence Theory and its Applications, North-Holland,
Amsterdam, 1991.
[55] Guo R, Zhao R, Guo D, and Dunne T, Random fuzzy variable modeling on
repairable system, Journal of Uncertain Systems, Vol.1, No.3, 222-234, 2007.
[56] Guo R, Guo D, Thiart C, and Li X, Bivariate credibility-copulas, Journal of
Uncertain Systems, Vol.1, No.4, 303-314, 2007.
[57] Hansen E, Global Optimization Using Interval Analysis, Marcel Dekker, New
York, 1992.
[58] He Y, and Xu J, A class of random fuzzy programming model and its application to vehicle routing problem, World Journal of Modelling and Simulation,
Vol.1, No.1, 3-11, 2005.
[59] Heilpern S, The expected value of a fuzzy number, Fuzzy Sets and Systems,
Vol.47, 81-86, 1992.
[60] Higashi M, and Klir GJ, On measures of fuzziness and fuzzy complements,
International Journal of General Systems, Vol.8, 169-180, 1982.
[61] Higashi M, and Klir GJ, Measures of uncertainty and information based on
possibility distributions, International Journal of General Systems, Vol.9, 43-58, 1983.
[62] Hisdal E, Conditional possibilities independence and noninteraction, Fuzzy
Sets and Systems, Vol.1, 283-297, 1978.
[63] Hisdal E, Logical Structures for Representation of Knowledge and Uncertainty, Physica-Verlag, Heidelberg, 1998.
[64] Hong DH, Renewal process with T-related fuzzy inter-arrival times and fuzzy
rewards, Information Sciences, Vol.176, No.16, 2386-2395, 2006.
[65] Inuiguchi M, and Ramík J, Possibilistic linear programming: A brief review
of fuzzy mathematical programming and a comparison with stochastic programming in portfolio selection problem, Fuzzy Sets and Systems, Vol.111,
No.1, 3-28, 2000.
[66] Ishibuchi H, and Tanaka H, Multiobjective programming in optimization of
the interval objective function, European Journal of Operational Research,
Vol.48, 219-225, 1990.
[67] Jaynes ET, Information theory and statistical mechanics, Physical Review,
Vol.106, No.4, 620-630, 1957.
[68] Ji XY, and Shao Z, Model and algorithm for bilevel Newsboy problem
with fuzzy demands and discounts, Applied Mathematics and Computation,
Vol.172, No.1, 163-174, 2006.
[69] Ji XY, and Iwamura K, New models for shortest path problem with fuzzy arc
lengths, Applied Mathematical Modelling, Vol.31, 259-269, 2007.
[70] John RI, Type 2 fuzzy sets: An appraisal of theory and applications, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.6,
No.6, 563-576, 1998.
[71] Kacprzyk J, and Esogbue AO, Fuzzy dynamic programming: Main developments and applications, Fuzzy Sets and Systems, Vol.81, 31-45, 1996.
[72] Kacprzyk J, Multistage Fuzzy Control, Wiley, Chichester, 1997.
[73] Karnik NN, Mendel JM, and Liang Q, Type-2 fuzzy logic systems, IEEE
Transactions on Fuzzy Systems, Vol.7, No.6, 643-658, 1999.
[74] Karnik NN, Mendel JM, and Liang Q, Centroid of a type-2 fuzzy set, Information Sciences, Vol.132, 195-220, 2001.
[75] Kaufmann A, Introduction to the Theory of Fuzzy Subsets, Vol.I, Academic
Press, New York, 1975.
[76] Kaufmann A, and Gupta MM, Introduction to Fuzzy Arithmetic: Theory and
Applications, Van Nostrand Reinhold, New York, 1985.
[77] Kaufmann A, and Gupta MM, Fuzzy Mathematical Models in Engineering
and Management Science, 2nd ed, North-Holland, Amsterdam, 1991.
[78] Ke H, and Liu B, Project scheduling problem with stochastic activity duration
times, Applied Mathematics and Computation, Vol.168, No.1, 342-353, 2005.
[79] Ke H, and Liu B, Project scheduling problem with mixed uncertainty of randomness and fuzziness, European Journal of Operational Research, Vol.183,
No.1, 135-147, 2007.
[80] Ke H, and Liu B, Fuzzy project scheduling problem and its hybrid intelligent
algorithm, Technical Report, 2005.
[81] Klement EP, Puri ML, and Ralescu DA, Limit theorems for fuzzy random
variables, Proceedings of the Royal Society of London Series A, Vol.407, 171-182, 1986.
[82] Klir GJ, and Folger TA, Fuzzy Sets, Uncertainty, and Information, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[83] Klir GJ, and Yuan B, Fuzzy Sets and Fuzzy Logic: Theory and Applications,
Prentice-Hall, New Jersey, 1995.
[84] Knopfmacher J, On measures of fuzziness, Journal of Mathematical Analysis
and Applications, Vol.49, 529-534, 1975.
[85] Kosko B, Fuzzy entropy and conditioning, Information Sciences, Vol.40, 165-174, 1986.
[86] Kruse R, and Meyer KD, Statistics with Vague Data, D. Reidel Publishing
Company, Dordrecht, 1987.
[87] Kwakernaak H, Fuzzy random variables I: Definitions and theorems, Information Sciences, Vol.15, 1-29, 1978.
[88] Kwakernaak H, Fuzzy random variables II: Algorithms and examples for the
discrete case, Information Sciences, Vol.17, 253-278, 1979.
[89] Lai YJ, and Hwang CL, Fuzzy Multiple Objective Decision Making: Methods
and Applications, Springer-Verlag, New York, 1994.
[90] Lee ES, Fuzzy multiple level programming, Applied Mathematics and Computation, Vol.120, 79-90, 2001.
[91] Lee KH, First Course on Fuzzy Theory and Applications, Springer-Verlag,
Berlin, 2005.
[92] Lertworasirkul S, Fang SC, Joines JA, and Nuttle HLW, Fuzzy data envelopment analysis (DEA): a possibility approach, Fuzzy Sets and Systems,
Vol.139, No.2, 379-394, 2003.
[93] Li HL, and Yu CS, A fuzzy multiobjective program with quasiconcave membership functions and fuzzy coefficients, Fuzzy Sets and Systems, Vol.109,
No.1, 59-81, 2000.
[94] Li J, Xu J, and Gen M, A class of multiobjective linear programming
model with fuzzy random coefficients, Mathematical and Computer Modelling,
Vol.44, Nos.11-12, 1097-1113, 2006.
[95] Li P, and Liu B, Entropy of credibility distributions for fuzzy variables, IEEE
Transactions on Fuzzy Systems, Vol.16, No.1, 123-129, 2008.
[96] Li SM, Ogura Y, and Nguyen HT, Gaussian processes and martingales for
fuzzy valued random variables with continuous parameter, Information Sciences, Vol.133, 7-21, 2001.
[97] Li SM, Ogura Y, and Kreinovich V, Limit Theorems and Applications of
Set-Valued and Fuzzy Set-Valued Random Variables, Kluwer, Boston, 2002.
[98] Li SQ, Zhao RQ, and Tang WS, Fuzzy random homogeneous Poisson process and compound Poisson process, Journal of Information and Computing
Science, Vol.1, No.4, 207-224, 2006.
[99] Li X, and Liu B, The independence of fuzzy variables with applications,
International Journal of Natural Sciences & Technology, Vol.1, No.1, 95-100,
2006.
[100] Li X, and Liu B, A sufficient and necessary condition for credibility measures,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.14, No.5, 527-535, 2006.
[101] Li X, and Liu B, New independence definition of fuzzy random variable and
random fuzzy variable, World Journal of Modelling and Simulation, Vol.2,
No.5, 338-342, 2006.
[102] Li X, and Liu B, Maximum entropy principle for fuzzy variables, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.15,
Supp.2, 43-52, 2007.
[103] Li X, and Liu B, Chance measure for hybrid events with fuzziness and randomness, Soft Computing, to be published.
[104] Li X, and Liu B, Moment estimation theorems for various types of uncertain
variable, Technical Report, 2007.
[105] Li X, and Liu B, On distance between fuzzy variables, Technical Report,
2007.
[106] Li X, and Liu B, Conditional chance measure for hybrid events, Technical
Report, 2007.
[107] Li X, and Liu B, Cross-entropy and generalized entropy for fuzzy variables,
Technical Report, 2007.
[108] Li X, Expected value and variance of geometric Liu process, http://orsc.
edu.cn/process/071123.pdf.
[109] Liu B, Dependent-chance goal programming and its genetic algorithm based
approach, Mathematical and Computer Modelling, Vol.24, No.7, 43-52, 1996.
[110] Liu B, and Esogbue AO, Fuzzy criterion set and fuzzy criterion dynamic
programming, Journal of Mathematical Analysis and Applications, Vol.199,
No.1, 293-311, 1996.
[111] Liu B, Dependent-chance programming: A class of stochastic optimization,
Computers & Mathematics with Applications, Vol.34, No.12, 89-104, 1997.
[112] Liu B, and Iwamura K, Modelling stochastic decision systems using
dependent-chance programming, European Journal of Operational Research,
Vol.101, No.1, 193-203, 1997.
[113] Liu B, and Iwamura K, Chance constrained programming with fuzzy parameters, Fuzzy Sets and Systems, Vol.94, No.2, 227-237, 1998.
[114] Liu B, and Iwamura K, A note on chance constrained programming with
fuzzy coefficients, Fuzzy Sets and Systems, Vol.100, Nos.1-3, 229-233, 1998.
[115] Liu B, Minimax chance constrained programming models for fuzzy decision
systems, Information Sciences, Vol.112, Nos.1-4, 25-38, 1998.
[116] Liu B, Dependent-chance programming with fuzzy decisions, IEEE Transactions on Fuzzy Systems, Vol.7, No.3, 354-360, 1999.
[117] Liu B, and Esogbue AO, Decision Criteria and Optimal Inventory Processes,
Kluwer, Boston, 1999.
[118] Liu B, Uncertain Programming, Wiley, New York, 1999.
[119] Liu B, Dependent-chance programming in fuzzy environments, Fuzzy Sets
and Systems, Vol.109, No.1, 97-106, 2000.
[120] Liu B, Uncertain programming: A unifying optimization theory in various uncertain environments, Applied Mathematics and Computation, Vol.120, Nos.1-3, 227-234, 2001.
[121] Liu B, and Iwamura K, Fuzzy programming with fuzzy decisions and fuzzy
simulation-based genetic algorithm, Fuzzy Sets and Systems, Vol.122, No.2,
253-262, 2001.
[122] Liu B, Fuzzy random chance-constrained programming, IEEE Transactions
on Fuzzy Systems, Vol.9, No.5, 713-720, 2001.
[123] Liu B, Fuzzy random dependent-chance programming, IEEE Transactions on
Fuzzy Systems, Vol.9, No.5, 721-726, 2001.
[124] Liu B, Theory and Practice of Uncertain Programming, Physica-Verlag, Heidelberg, 2002.
[125] Liu B, Toward fuzzy optimization without mathematical ambiguity, Fuzzy
Optimization and Decision Making, Vol.1, No.1, 43-63, 2002.
[126] Liu B, and Liu YK, Expected value of fuzzy variable and fuzzy expected value
models, IEEE Transactions on Fuzzy Systems, Vol.10, No.4, 445-450, 2002.
[127] Liu B, Random fuzzy dependent-chance programming and its hybrid intelligent algorithm, Information Sciences, Vol.141, Nos.3-4, 259-271, 2002.
[128] Liu B, Inequalities and convergence concepts of fuzzy and rough variables,
Fuzzy Optimization and Decision Making, Vol.2, No.2, 87-100, 2003.
[129] Liu B, Uncertainty Theory, Springer-Verlag, Berlin, 2004.
[130] Liu B, A survey of credibility theory, Fuzzy Optimization and Decision Making, Vol.5, No.4, 387-408, 2006.
[131] Liu B, A survey of entropy of fuzzy variables, Journal of Uncertain Systems,
Vol.1, No.1, 4-13, 2007.
[132] Liu B, Uncertainty Theory, 2nd ed., Springer-Verlag, Berlin, 2007.
[133] Liu B, Fuzzy process, hybrid process and uncertain process, Journal of Uncertain Systems, Vol.2, No.1, 3-16, 2008.
[134] Liu B, Theory and Practice of Uncertain Programming, 2nd ed., http://orsc.
edu.cn/liu/up.pdf.
[135] Liu LZ, Li YZ, The fuzzy quadratic assignment problem with penalty:
New models and genetic algorithm, Applied Mathematics and Computation,
Vol.174, No.2, 1229-1244, 2006.
[136] Liu XC, Entropy, distance measure and similarity measure of fuzzy sets and
their relations, Fuzzy Sets and Systems, Vol.52, 305-318, 1992.
[137] Liu XW, Measuring the satisfaction of constraints in fuzzy linear programming, Fuzzy Sets and Systems, Vol.122, No.2, 263-275, 2001.
[138] Liu YK, and Liu B, Random fuzzy programming with chance measures
defined by fuzzy integrals, Mathematical and Computer Modelling, Vol.36,
Nos.4-5, 509-524, 2002.
[139] Liu YK, and Liu B, Fuzzy random programming problems with multiple
criteria, Asian Information-Science-Life, Vol.1, No.3, 249-256, 2002.
[140] Liu YK, and Liu B, Fuzzy random variables: A scalar expected value operator, Fuzzy Optimization and Decision Making, Vol.2, No.2, 143-160, 2003.
[141] Liu YK, and Liu B, Expected value operator of random fuzzy variable and
random fuzzy expected value models, International Journal of Uncertainty,
Fuzziness & Knowledge-Based Systems, Vol.11, No.2, 195-215, 2003.
[142] Liu YK, and Liu B, A class of fuzzy random optimization: Expected value
models, Information Sciences, Vol.155, Nos.1-2, 89-102, 2003.
[143] Liu YK, and Liu B, On minimum-risk problems in fuzzy random decision
systems, Computers & Operations Research, Vol.32, No.2, 257-283, 2005.
[144] Liu YK, and Liu B, Fuzzy random programming with equilibrium chance
constraints, Information Sciences, Vol.170, 363-395, 2005.
[145] Liu YK, Fuzzy programming with recourse, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.13, No.4, 381-413, 2005.
[146] Liu YK, Convergent results about the use of fuzzy simulation in fuzzy optimization problems, IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 295-304, 2006.
[147] Liu YK, and Wang SM, On the properties of credibility critical value functions, Journal of Information and Computing Science, Vol.1, No.4, 195-206,
2006.
[148] Liu YK, and Gao J, The independence of fuzzy variables in credibility theory and its applications, International Journal of Uncertainty, Fuzziness &
Knowledge-Based Systems, Vol.15, Supp.2, 1-20, 2007.
[149] Loo SG, Measures of fuzziness, Cybernetica, Vol.20, 201-210, 1977.
[150] Lopez-Diaz M, Ralescu DA, Tools for fuzzy random variables: Embeddings
and measurabilities, Computational Statistics & Data Analysis, Vol.51, No.1,
109-114, 2006.
[151] Lu M, On crisp equivalents and solutions of fuzzy programming with different
chance measures, Information: An International Journal, Vol.6, No.2, 125-133, 2003.
[152] Lucas C, and Araabi BN, Generalization of the Dempster-Shafer Theory:
A fuzzy-valued measure, IEEE Transactions on Fuzzy Systems, Vol.7, No.3,
255-270, 1999.
[153] Luhandjula MK, Fuzziness and randomness in an optimization framework,
Fuzzy Sets and Systems, Vol.77, 291-297, 1996.
[154] Luhandjula MK, and Gupta MM, On fuzzy stochastic optimization, Fuzzy
Sets and Systems, Vol.81, 47-55, 1996.
[155] Luhandjula MK, Optimisation under hybrid uncertainty, Fuzzy Sets and Systems, Vol.146, No.2, 187-203, 2004.
[156] Luhandjula MK, Fuzzy stochastic linear programming: Survey and future
research directions, European Journal of Operational Research, Vol.174, No.3,
1353-1367, 2006.
[157] Maiti MK, Maiti MA, Fuzzy inventory model with two warehouses under
possibility constraints, Fuzzy Sets and Systems, Vol.157, No.1, 52-73, 2006.
[158] Maleki HR, Tata M, and Mashinchi M, Linear programming with fuzzy variables, Fuzzy Sets and Systems, Vol.109, No.1, 21-33, 2000.
[159] Merton RC, Theory of rational option pricing, Bell Journal of Economics and
Management Science, Vol.4, 141-183, 1973.
[160] Mizumoto M, and Tanaka K, Some properties of fuzzy sets of type 2, Information and Control, Vol.31, 312-340, 1976.
[161] Mohammed W, Chance constrained fuzzy goal programming with right-hand
side uniform random variable coefficients, Fuzzy Sets and Systems, Vol.109,
No.1, 107-110, 2000.
[162] Molchanov IS, Limit Theorems for Unions of Random Closed Sets, Springer-Verlag, Berlin, 1993.
[163] Nahmias S, Fuzzy variables, Fuzzy Sets and Systems, Vol.1, 97-110, 1978.
[164] Negoita CV, and Ralescu D, On fuzzy optimization, Kybernetes, Vol.6, 193-195, 1977.
[165] Negoita CV, and Ralescu D, Simulation, Knowledge-based Computing, and
Fuzzy Statistics, Van Nostrand Reinhold, New York, 1987.
[166] Neumaier A, Interval Methods for Systems of Equations, Cambridge University Press, New York, 1990.
[167] Nguyen HT, On conditional possibility distributions, Fuzzy Sets and Systems,
Vol.1, 299-309, 1978.
[168] Nguyen HT, Fuzzy sets and probability, Fuzzy Sets and Systems, Vol.90, 129-132, 1997.
[169] Nguyen HT, Kreinovich V, Zuo Q, Interval-valued degrees of belief: Applications of interval computations to expert systems and intelligent control,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.5, 317-358, 1997.
[170] Nguyen HT, Nguyen NT, Wang TH, On capacity functionals in interval probabilities, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.5, 359-377, 1997.
[171] Nguyen HT, Kreinovich V, Shekhter V, On the possibility of using complex
values in fuzzy logic for representing inconsistencies, International Journal of
Intelligent Systems, Vol.13, 683-714, 1998.
[172] Nguyen HT, Kreinovich V, Wu BL, Fuzzy/probability similar to fractal/smooth, International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.7, 363-370, 1999.
[173] Nguyen HT, Nguyen NT, On Chu spaces in uncertainty analysis, International Journal of Intelligent Systems, Vol.15, 425-440, 2000.
[174] Nguyen HT, Some mathematical structures for computational information,
Information Sciences, Vol.128, 67-89, 2000.
[175] Nguyen VH, Fuzzy stochastic goal programming problems, European Journal
of Operational Research, Vol.176, No.1, 77-86, 2007.
[176] Øksendal B, Stochastic Differential Equations, 6th ed., Springer-Verlag,
Berlin, 2005.
[177] Pal NR, and Pal SK, Object background segmentation using a new definition
of entropy, IEE Proc. E, Vol.136, 284-295, 1989.
[178] Pal NR, and Pal SK, Higher order fuzzy entropy and hybrid entropy of a set,
Information Sciences, Vol.61, 211-231, 1992.
[179] Pal NR, Bezdek JC, and Hemasinha R, Uncertainty measures for evidential reasoning I: a review, International Journal of Approximate Reasoning,
Vol.7, 165-183, 1992.
[180] Pal NR, Bezdek JC, and Hemasinha R, Uncertainty measures for evidential
reasoning II: a new measure, International Journal of Approximate Reasoning, Vol.8, 1-16, 1993.
[181] Pal NR, and Bezdek JC, Measuring fuzzy uncertainty, IEEE Transactions on
Fuzzy Systems, Vol.2, 107-118, 1994.
[182] Pawlak Z, Rough sets, International Journal of Information and Computer
Sciences, Vol.11, No.5, 341-356, 1982.
[183] Pawlak Z, Rough sets and fuzzy sets, Fuzzy Sets and Systems, Vol.17, 99-102,
1985.
[184] Pawlak Z, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer,
Dordrecht, 1991.
[185] Pedrycz W, Optimization schemes for decomposition of fuzzy relations, Fuzzy
Sets and Systems, Vol.100, 301-325, 1998.
[186] Peng J, and Liu B, Stochastic goal programming models for parallel machine
scheduling problems, Asian Information-Science-Life, Vol.1, No.3, 257-266,
2002.
[187] Peng J, and Liu B, Parallel machine scheduling models with fuzzy processing
times, Information Sciences, Vol.166, Nos.1-4, 49-66, 2004.
[188] Peng J, and Liu B, A framework of birandom theory and optimization methods, Information: An International Journal, Vol.9, No.4, 629-640, 2006.
[189] Peng J, and Zhao XD, Some theoretical aspects of the critical values of birandom variable, Journal of Information and Computing Science, Vol.1, No.4,
225-234, 2006.
[190] Peng J, and Liu B, Birandom variables and birandom programming, Computers & Industrial Engineering, Vol.53, No.3, 433-453, 2007.
[191] Peng J, Credibilistic stopping problems for fuzzy stock market, http://orsc.
edu.cn/process/071125.pdf.
[192] Puri ML, and Ralescu D, Differentials of fuzzy functions, Journal of Mathematical Analysis and Applications, Vol.91, 552-558, 1983.
[193] Puri ML, and Ralescu D, Fuzzy random variables, Journal of Mathematical
Analysis and Applications, Vol.114, 409-422, 1986.
[194] Qin ZF, and Li X, Option pricing formula for fuzzy financial market, Journal
of Uncertain Systems, Vol.2, No.1, 17-21, 2008.
[195] Qin ZF, and Liu B, On some special hybrid variables, Technical Report, 2007.
[196] Qin ZF, On analytic functions of complex Liu process, http://orsc.edu.cn/
process/071026.pdf.
[197] Raj PA, and Kumer DN, Ranking alternatives with fuzzy weights using maximizing set and minimizing set, Fuzzy Sets and Systems, Vol.105, 365-375,
1999.
[198] Ralescu DA, Sugeno M, Fuzzy integral representation, Fuzzy Sets and Systems, Vol.84, No.2, 127-133, 1996.
[199] Ralescu AL, Ralescu DA, Extensions of fuzzy aggregation, Fuzzy Sets and
Systems, Vol.86, No.3, 321-330, 1997.
[200] Ramer A, Conditional possibility measures, International Journal of Cybernetics and Systems, Vol.20, 233-247, 1989.
[201] Ramík J, Extension principle in fuzzy optimization, Fuzzy Sets and Systems,
Vol.19, 29-35, 1986.
[202] Ramík J, and Rommelfanger H, Fuzzy mathematical programming based on
some inequality relations, Fuzzy Sets and Systems, Vol.81, 77-88, 1996.
[203] Saade JJ, Maximization of a function over a fuzzy domain, Fuzzy Sets and
Systems, Vol.62, 55-70, 1994.
[204] Sakawa M, Nishizaki I, and Uemura Y, Interactive fuzzy programming for
multi-level linear programming problems with fuzzy parameters, Fuzzy Sets
and Systems, Vol.109, No.1, 3-19, 2000.
[205] Sakawa M, Nishizaki I, Uemura Y, Interactive fuzzy programming for two-level linear fractional programming problems with fuzzy parameters, Fuzzy
Sets and Systems, Vol.115, 93-103, 2000.
[206] Shafer G, A Mathematical Theory of Evidence, Princeton University Press,
Princeton, NJ, 1976.
[207] Shannon CE, The Mathematical Theory of Communication, The University
of Illinois Press, Urbana, 1949.
[208] Shao Z, and Ji XY, Fuzzy multi-product constraint newsboy problem, Applied
Mathematics and Computation, Vol.180, No.1, 7-15, 2006.
[209] Shih HS, Lai YJ, Lee ES, Fuzzy approach for multilevel programming problems, Computers and Operations Research, Vol.23, 73-91, 1996.
[210] Shreve SE, Stochastic Calculus for Finance II: Continuous-Time Models,
Springer, Berlin, 2004.
[211] Slowinski R, and Teghem J, Fuzzy versus stochastic approaches to multicriteria linear programming under uncertainty, Naval Research Logistics, Vol.35,
673-695, 1988.
[212] Slowinski R, and Vanderpooten D, A generalized definition of rough approximations based on similarity, IEEE Transactions on Knowledge and Data
Engineering, Vol.12, No.2, 331-336, 2000.
[213] Steuer RE, Algorithm for linear programming problems with interval objective function coefficients, Mathematics of Operations Research, Vol.6, 333-348, 1981.
[214] Sugeno M, Theory of Fuzzy Integrals and its Applications, Ph.D. Dissertation,
Tokyo Institute of Technology, 1974.
[215] Szmidt E, Kacprzyk J, Distances between intuitionistic fuzzy sets, Fuzzy Sets
and Systems, Vol.114, 505-518, 2000.
[216] Szmidt E, Kacprzyk J, Entropy for intuitionistic fuzzy sets, Fuzzy Sets and
Systems, Vol.118, 467-477, 2001.
[217] Tanaka H, and Asai K, Fuzzy linear programming problems with fuzzy numbers, Fuzzy Sets and Systems, Vol.13, 1-10, 1984.
[218] Tanaka H, and Asai K, Fuzzy solutions in fuzzy linear programming problems,
IEEE Transactions on Systems, Man and Cybernetics, Vol.14, 325-328, 1984.
[219] Tanaka H, and Guo P, Possibilistic Data Analysis for Operations Research,
Physica-Verlag, Heidelberg, 1999.
[220] Torabi H, Davvaz B, Behboodian J, Fuzzy random events in incomplete probability models, Journal of Intelligent & Fuzzy Systems, Vol.17, No.2, 183-188,
2006.
[221] Wang G, and Liu B, New theorems for fuzzy sequence convergence, Proceedings of the Second International Conference on Information and Management
Sciences, Chengdu, China, August 24-30, 2003, pp.100-105.
[222] Wang Z, and Klir GJ, Fuzzy Measure Theory, Plenum Press, New York, 1992.
[223] Yager RR, On measures of fuzziness and negation, Part I: Membership in the
unit interval, International Journal of General Systems, Vol.5, 221-229, 1979.
[224] Yager RR, On measures of fuzziness and negation, Part II: Lattices, Information and Control, Vol.44, 236-260, 1980.
[225] Yager RR, A procedure for ordering fuzzy subsets of the unit interval, Information Sciences, Vol.24, 143-161, 1981.
[226] Yager RR, Generalized probabilities of fuzzy events from fuzzy belief structures, Information Sciences, Vol.28, 45-62, 1982.
[227] Yager RR, Measuring tranquility and anxiety in decision making: an application of fuzzy sets, International Journal of General Systems, Vol.8, 139-144,
1982.
[228] Yager RR, Entropy and specificity in a mathematical theory of evidence,
International Journal of General Systems, Vol.9, 249-260, 1983.
[229] Yager RR, On ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Transactions on Systems, Man and Cybernetics,
Vol.18, 183-190, 1988.
[230] Yager RR, Decision making under Dempster-Shafer uncertainties, International Journal of General Systems, Vol.20, 233-245, 1992.
[231] Yager RR, On the specificity of a possibility distribution, Fuzzy Sets and
Systems, Vol.50, 279-292, 1992.
[232] Yager RR, Measures of entropy and fuzziness related to aggregation operators,
Information Sciences, Vol.82, 147-166, 1995.
[233] Yager RR, Modeling uncertainty using partial information, Information Sciences, Vol.121, 271-294, 1999.
[234] Yager RR, Decision making with fuzzy probability assessments, IEEE Transactions on Fuzzy Systems, Vol.7, 462-466, 1999.
[235] Yager RR, On the entropy of fuzzy measures, IEEE Transactions on Fuzzy
Systems, Vol.8, 453-461, 2000.
[236] Yager RR, On the evaluation of uncertain courses of action, Fuzzy Optimization and Decision Making, Vol.1, 13-41, 2002.
[237] Yang L, and Liu B, On inequalities and critical values of fuzzy random variable, International Journal of Uncertainty, Fuzziness & Knowledge-Based
Systems, Vol.13, No.2, 163-175, 2005.
[238] Yang L, and Liu B, A sufficient and necessary condition for chance distribution of birandom variable, Information: An International Journal, Vol.9,
No.1, 33-36, 2006.
[239] Yang L, and Liu B, On continuity theorem for characteristic function of fuzzy
variable, Journal of Intelligent and Fuzzy Systems, Vol.17, No.3, 325-332,
2006.
[240] Yang L, and Liu B, Chance distribution of fuzzy random variable and laws
of large numbers, Technical Report, 2004.
[241] Yang N, Wen FS, A chance constrained programming approach to transmission system expansion planning, Electric Power Systems Research, Vol.75,
Nos.2-3, 171-177, 2005.
[242] Yazenin AV, On the problem of possibilistic optimization, Fuzzy Sets and
Systems, Vol.81, 133-140, 1996.
[243] You C, Multidimensional Liu process, differential and integral, Proceedings
of the First Intelligent Computing Conference, Lushan, October 10-13, 2007,
pp.153-158. (Also available at http://orsc.edu.cn/process/071015.pdf)
[244] You C, Some extensions of Wiener-Liu process and Ito-Liu integral, http://
orsc.edu.cn/process/071019.pdf.
[245] Zadeh LA, Fuzzy sets, Information and Control, Vol.8, 338-353, 1965.
[246] Zadeh LA, Outline of a new approach to the analysis of complex systems and
decision processes, IEEE Transactions on Systems, Man and Cybernetics,
Vol.3, 28-44, 1973.
[247] Zadeh LA, The concept of a linguistic variable and its application to approximate reasoning, Information Sciences, Vol.8, 199-251, 1975.
[248] Zadeh LA, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and
Systems, Vol.1, 3-28, 1978.
[249] Zadeh LA, A theory of approximate reasoning, In: J Hayes, D Michie and
RM Thrall, eds, Mathematical Frontiers of the Social and Policy Sciences,
Westview Press, Boulder, Colorado, 69-129, 1979.
[250] Zhao R, and Liu B, Stochastic programming models for general redundancy
optimization problems, IEEE Transactions on Reliability, Vol.52, No.2, 181-191, 2003.
[251] Zhao R, and Liu B, Renewal process with fuzzy interarrival times and rewards,
International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.11, No.5, 573-586, 2003.
[252] Zhao R, and Liu B, Redundancy optimization problems with uncertainty
of combining randomness and fuzziness, European Journal of Operational
Research, Vol.157, No.3, 716-735, 2004.
[253] Zhao R, and Liu B, Standby redundancy optimization problems with fuzzy
lifetimes, Computers & Industrial Engineering, Vol.49, No.2, 318-338, 2005.
[254] Zhao R, Tang WS, and Yun HL, Random fuzzy renewal process, European
Journal of Operational Research, Vol.169, No.1, 189-201, 2006.
[255] Zhao R, and Tang WS, Some properties of fuzzy random renewal process,
IEEE Transactions on Fuzzy Systems, Vol.14, No.2, 173-179, 2006.
[256] Zheng Y, and Liu B, Fuzzy vehicle routing model with credibility measure
and its hybrid intelligent algorithm, Applied Mathematics and Computation,
Vol.176, No.2, 673-683, 2006.
[257] Zhou J, and Liu B, New stochastic models for capacitated location-allocation
problem, Computers & Industrial Engineering, Vol.45, No.1, 111-125, 2003.
[258] Zhou J, and Liu B, Analysis and algorithms of bifuzzy systems, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.12, No.3,
357-376, 2004.
[259] Zhou J, and Liu B, Convergence concepts of bifuzzy sequence, Asian
Information-Science-Life, Vol.2, No.3, 297-310, 2004.
[260] Zhou J, and Liu B, Modeling capacitated location-allocation problem with
fuzzy demands, Computers & Industrial Engineering, Vol.53, No.3, 454-468,
2007.
[261] Zhu Y, and Liu B, Continuity theorems and chance distribution of random
fuzzy variable, Proceedings of the Royal Society of London Series A, Vol.460,
2505-2519, 2004.
[262] Zhu Y, and Liu B, Some inequalities of random fuzzy variables with application to moment convergence, Computers & Mathematics with Applications,
Vol.50, Nos.5-6, 719-727, 2005.
[263] Zhu Y, and Ji XY, Expected values of functions of fuzzy variables, Journal
of Intelligent and Fuzzy Systems, Vol.17, No.5, 471-478, 2006.
[264] Zhu Y, and Liu B, Convergence concepts of random fuzzy sequence, Information: An International Journal, Vol.9, No.6, 845-852, 2006.
[265] Zhu Y, and Liu B, Fourier spectrum of credibility distribution for fuzzy variables, International Journal of General Systems, Vol.36, No.1, 111-123, 2007.
[266] Zhu Y, and Liu B, A sufficient and necessary condition for chance distribution
of random fuzzy variables, International Journal of Uncertainty, Fuzziness &
Knowledge-Based Systems, Vol.15, Supp.2, 21-28, 2007.
[267] Zhu Y, Fuzzy optimal control with application to portfolio selection, http://
orsc.edu.cn/process/080117.pdf.
[268] Zimmermann HJ, Fuzzy Set Theory and its Applications, Kluwer Academic
Publishers, Boston, 1985.

List of Frequently Used Symbols

ξ, η, τ                   random, fuzzy, hybrid, or uncertain variables
ξ, η, τ                   random, fuzzy, hybrid, or uncertain vectors
μ, ν                      membership functions
φ, ψ                      probability, or credibility density functions
Φ, Ψ                      probability, or credibility distributions
Pr                        probability measure
Cr                        credibility measure
Ch                        chance measure
M                         uncertain measure
E                         expected value
V                         variance
H                         entropy
(Γ, L, M)                 uncertainty space
(Ω, A, Pr)                probability space
(Θ, P, Cr)                credibility space
(Θ, P, Cr) × (Ω, A, Pr)   chance space
∅                         empty set
ℜ                         set of real numbers
ℜⁿ                        set of n-dimensional real vectors
∨                         maximum operator
∧                         minimum operator
ai ↑ a                    a1 ≤ a2 ≤ ··· and ai → a
ai ↓ a                    a1 ≥ a2 ≥ ··· and ai → a
Ai ↑ A                    A1 ⊂ A2 ⊂ ··· and A = A1 ∪ A2 ∪ ···
Ai ↓ A                    A1 ⊃ A2 ⊃ ··· and A = A1 ∩ A2 ∩ ···

Index
algebra, 213
Borel algebra, 214
Borel set, 214
Brownian motion, 44
canonical process, 207
Cantor function, 220
Cantor set, 214
chance asymptotic theorem, 137
chance density function, 145
chance distribution, 143
chance measure, 131
chance semicontinuity law, 136
chance space, 129
chance subadditivity theorem, 134
Chebyshev inequality, 33, 108, 158
conditional chance, 165
conditional credibility, 114
conditional membership function, 116
conditional probability, 40
conditional uncertainty, 202
convergence almost surely, 35, 110
convergence in chance, 160
convergence in credibility, 110
convergence in distribution, 36, 111
convergence in mean, 36, 110
convergence in probability, 35
countable additivity axiom, 1
countable subadditivity axiom, 178
credibility asymptotic theorem, 57
credibility density function, 72
credibility distribution, 69
credibility extension theorem, 58
credibility inversion theorem, 65
credibility measure, 53
credibility semicontinuity law, 57
credibility space, 60
credibility subadditivity theorem, 55
critical value, 26, 96, 153, 194
distance, 32, 106, 157, 196

entropy, 28, 100, 156, 195


equipossible fuzzy variable, 68
Euler-Lagrange equation, 224
event, 1, 53, 130, 177
expected value, 14, 80, 146, 187
exponential distribution, 10
exponential membership function, 95
extension principle of Zadeh, 77
Fubini theorem, 222
fuzzy calculus, 124
fuzzy differential equation, 128
fuzzy integral equation, 128
fuzzy process, 119
fuzzy random variable, 129
fuzzy sequence, 110
fuzzy variable, 63
fuzzy vector, 64
hazard rate, 42, 118
Hölder's inequality, 34, 108, 158, 198
hybrid calculus, 171
hybrid differential equation, 174
hybrid integral equation, 174
hybrid process, 169
hybrid sequence, 160
hybrid variable, 137
hybrid vector, 142
identification function, 184
independence, 11, 74, 151, 193
Itô formula, 48
Itô integral, 47
Itô process, 49
Jensen's inequality, 35, 109, 159, 199
Lebesgue integral, 221
Lebesgue measure, 216
Lebesgue-Stieltjes integral, 223
Lebesgue-Stieltjes measure, 222
Lebesgue unit interval, 3
Markov inequality, 33, 108, 158, 198
maximality axiom, 54
maximum entropy principle, 30, 103
maximum uncertainty principle, 225
measurable function, 218
membership function, 65
Minkowski inequality, 34, 109, 159
moment, 25, 95, 151, 191
monotone convergence theorem, 222
monotone class theorem, 214
monotonicity axiom, 53, 178
nonnegativity axiom, 1
normal distribution, 10
normal membership function, 93
normality axiom, 1, 53, 178
optimistic value, see critical value
pessimistic value, see critical value
power set, 213
probability continuity theorem, 2
probability density function, 9
probability distribution, 7
probability measure, 3
probability space, 3
product credibility axiom, 61
product credibility space, 62
product credibility theorem, 61
product probability space, 4
product probability theorem, 4
random fuzzy variable, 129
random sequence, 35
random variable, 4
random vector, 5
renewal process, 43, 119, 169, 206
self-duality axiom, 53, 178
set function, 1
σ-algebra, 213
simple function, 218
singular function, 220
step function, 218
stochastic differential equation, 50
stochastic integral equation, 50
stochastic process, 43
stock model, 124, 171, 208
trapezoidal fuzzy variable, 68
triangular fuzzy variable, 68
uncertain calculus, 209
uncertain differential equation, 211
uncertain integral equation, 211
uncertain measure, 178
uncertain process, 205
uncertain sequence, 200
uncertain variable, 181
uncertain vector, 182
uncertainty asymptotic theorem, 180
uncertainty density function, 185
uncertainty distribution, 185
uncertainty space, 181
uniform distribution, 10
variance, 23, 92, 149, 190
Wiener process, 44