
Chapter 4

Wiener Process
Because of its central role in what follows, it is worthwhile to devote one entire
chapter to the study of the Wiener process, also known as Brownian motion.
4.1 The Invariance Principle
Let $\{\xi_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. random variables such that
\[ E\xi_n = 0, \qquad E\xi_n^2 = 1, \]
and define
\[ S_0 = 0, \qquad S_n = \sum_{i=1}^n \xi_i. \]
Viewed as a function of the discrete time $n$, $S_n$ gives the instantaneous position
of a random walker on $\mathbb{Z}$, see Figure 4.1. We wish to rescale both time and space
so as to define a random function on $t \in [0,1]$ taking values in $\mathbb{R}$.
Recall that the Central Limit Theorem asserts that
\[ \frac{S_N}{\sqrt{N}} \to N(0,1) \tag{4.1} \]
in distribution as $N \to \infty$. This suggests rescaling $S_n$ and defining a piecewise
constant random function $W^N(t)$ on $t \in [0,1]$ by letting
\[ W^N(t) = \frac{S_{\lfloor Nt \rfloor}}{\sqrt{N}}. \tag{4.2} \]
This function is shown in Figure 4.2 for different $N$. We have:
Theorem 4.1.1 (Donsker). As $N \to \infty$, $W^N(\cdot)$ converges to a limit $W(\cdot)$ in
the sense of distributions,
\[ W^N \xrightarrow{d} W. \tag{4.3} \]
$W(\cdot)$ is the Wiener process.
Figure 4.1: Three realizations of the (unrescaled) random walk $S_n$ for $n \in [0, 100]$.
We shall not prove Theorem 4.1.1. Rather, we will take for granted that the
limiting process $W(\cdot)$ exists and study its properties. In accordance
with standard notation for stochastic processes, from now on we indicate the
time-dependence of a random function by writing $t$ as a subscript, e.g.
\[ W^N_t = W^N(t), \qquad W_t = W(t), \quad \text{etc.} \]
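For readers who want to reproduce Figures 4.1 and 4.2, here is a minimal numerical sketch of the rescaling (4.2), assuming Python with NumPy is available (the step distribution, seed, and values of $N$ are illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def wiener_approx(N, xi_sampler):
    """Return the grid t and W^N_t = S_{[Nt]} / sqrt(N) on [0, 1]."""
    xi = xi_sampler(N)                    # i.i.d. steps, mean 0, variance 1
    S = np.concatenate(([0.0], np.cumsum(xi)))
    return np.arange(N + 1) / N, S / np.sqrt(N)

# Any mean-zero, unit-variance steps work; here, symmetric +-1 coin flips.
for N in (100, 400, 10000):
    t, W = wiener_approx(N, lambda n: rng.choice([-1.0, 1.0], size=n))
    print(N, W[-1])                       # W^N_1 is approximately N(0, 1)
\end{verbatim}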
4.2 Elementary Properties of $W_t$
Since for $t > 0$
\[ W^N_t = \frac{S_{\lfloor Nt\rfloor}}{\sqrt{N}} = \frac{S_{\lfloor Nt\rfloor}}{\sqrt{\lfloor Nt\rfloor}}\, \frac{\sqrt{\lfloor Nt\rfloor}}{\sqrt{N}} \;\xrightarrow{d}\; N(0,1)\,\sqrt{t} \;\stackrel{d}{=}\; N(0,t), \tag{4.4} \]
we deduce that $W_t$ at fixed $t$ is distributed as $N(0,t)$.
Now consider the random variable $W_t - W_s$ for $0 \le s < t$. Since $S_n - S_m$
for $0 \le m < n$ has the same distribution as $S_{n-m}$, it follows that
\[ W_t - W_s \stackrel{d}{=} W_{t-s}, \qquad 0 \le s < t. \]
Similarly, $S_n - S_m$ and $S_p - S_n$ are independent random variables when
$0 \le m < n < p$. This implies that
\[ W_t - W_s \quad\text{and}\quad W_u - W_t \quad\text{are independent when } 0 \le s < t < u. \]
Figure 4.2: Realizations of $W^N_t$ for $N = 100$ (blue), $N = 400$ (red), and $N = 10000$ (green).
This is enough information to deduce the joint density of $(W_{t_1}, W_{t_2}, \ldots, W_{t_n})$
for $0 \le t_1 < t_2 < \cdots < t_n \le 1$, $n \in \mathbb{N}$ arbitrary. The density is given by
\[ \rho_{t_1,t_2,\ldots,t_n}(x_1,\ldots,x_n) = \rho_{t_n-t_{n-1}}(x_n|x_{n-1}) \cdots \rho_{t_2-t_1}(x_2|x_1)\,\rho_{t_1}(x_1|0), \tag{4.5} \]
where
\[ \rho_t(x|y) = \frac{e^{-(x-y)^2/2t}}{\sqrt{2\pi t}} \qquad (t > 0). \]
Exercise 4.2.1. Show that
\[ E(W_t - W_s)^2 = |t - s|, \quad\text{and}\quad E\, W_t W_s = \min(t, s). \]
Exercise 4.2.2. Show that the Wiener process is self-similar, in the sense that
for $\lambda > 0$
\[ W_{\lambda t} \stackrel{d}{=} \lambda^{1/2} W_t. \]
This indicates that it suffices to study the Wiener process on $t \in [0, 1]$ to deduce
its statistical properties on any time interval.

$\rho_t(x|y)$ is also the conditional probability density of $W_{t+s}$ given that $W_s = y$.
It can be shown by direct calculation that
\[ \rho_{t+s}(x|y) = \int_{\mathbb{R}} \rho_t(x|z)\,\rho_s(z|y)\,dz \qquad (0 < s < t). \]
This equation is called the Chapman-Kolmogorov equation, and it implies that
$W_t$ is Markov. Indeed, by integrating both sides of this equation over $x$ in the
interval $I$, it can also be written as ($u \ge 0$)
\[ P(W_{t+s+u} \in I \mid W_u = y) = \int_{\mathbb{R}} P(W_{t+s+u} \in I \mid W_{s+u} = z)\, P(W_{s+u} \in dz \mid W_u = y). \]
This states that the conditional probability that $W_{t+s+u} \in I$ given that $W_u = y$
is obtained by multiplying the conditional probability density that $W_{s+u} = z$ given
that $W_u = y$ by the conditional probability that $W_{t+s+u} \in I$ given that $W_{s+u} = z$,
and then integrating over all intermediate values $z$.
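As a sanity check, the Chapman-Kolmogorov equation can be verified numerically by quadrature. The following is a sketch (assuming Python with NumPy; the grid bounds, resolution, and the values of $s$, $t$, $x$, $y$ are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

def rho(t, x, y):
    """Transition density rho_t(x|y) of the Wiener process."""
    return np.exp(-(x - y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

s, t, x, y = 0.3, 0.5, 1.2, -0.4
z = np.linspace(-10.0, 10.0, 20001)      # quadrature grid for the z-integral
lhs = rho(t + s, x, y)
rhs = np.sum(rho(t, x, z) * rho(s, z, y)) * (z[1] - z[0])
print(lhs, rhs)                          # the two values agree to high accuracy
\end{verbatim}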
The transition probability $\rho_t(x|y)$ is nothing but the heat kernel, i.e. the
fundamental solution of
\[ \frac{\partial \rho_t}{\partial t} = \frac{1}{2}\, \frac{\partial^2 \rho_t}{\partial x^2}. \]
So far we have considered the Wiener process starting at $0$, $W_0 = 0$. Given
$y \in \mathbb{R}$, it is trivial to define a Wiener process that starts at $y$, by letting
\[ \widetilde{W}_t = W_t + y. \]
The properties of $\widetilde{W}_t$ are the same as those of $W_t$. Denote by $E^y$ the expectation
with respect to the Wiener process that starts at $y$, let $f$ be a smooth function, and let
\[ u(y, t) = E^y f(\widetilde{W}_t) = \int_{\mathbb{R}} f(x)\, \rho_t(x|y)\, dx = \int_{\mathbb{R}} f(x)\, \frac{e^{-(x-y)^2/2t}}{\sqrt{2\pi t}}\, dx. \]
Hence $u$ satisfies
\[ \frac{\partial u}{\partial t} = \frac{1}{2}\, \frac{\partial^2 u}{\partial y^2}, \qquad u(y, 0) = f(y). \]
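This connection can be checked numerically: a Monte Carlo average of $f(\widetilde{W}_t)$ should match the heat-kernel integral for $u(y,t)$. Below is a sketch, assuming Python with NumPy ($f$, the point $(y,t)$, the sample size, and the quadrature grid are illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
f = np.tanh                              # any smooth test function
y, t = 0.5, 0.25

W = y + np.sqrt(t) * rng.standard_normal(10 ** 6)  # samples of W~_t = W_t + y
mc = f(W).mean()                                   # Monte Carlo estimate of u(y, t)

x = np.linspace(y - 10.0, y + 10.0, 20001)
kernel = np.exp(-(x - y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
quad = np.sum(f(x) * kernel) * (x[1] - x[0])       # integral of f(x) rho_t(x|y)
print(mc, quad)                                    # the two estimates agree
\end{verbatim}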
4.3 Alternative Expression for $W_t$
A different way of constructing the Wiener process is the following. Recall that
a real function $g(t)$ defined on $[0,1]$ is in $L^2[0,1]$ if
\[ \int_0^1 g^2(t)\, dt < \infty. \]
Let $\{f_k\}_{k \in S}$, where $S$ is a countable set such as $\mathbb{N}$ or $\mathbb{Z}$, be an orthonormal
basis of $L^2[0,1]$, meaning that
\[ \int_0^1 f_k(t) f_q(t)\, dt = \begin{cases} 1 & \text{if } k = q, \\ 0 & \text{otherwise,} \end{cases} \]
and any $g(t) \in L^2[0,1]$ can be represented as
\[ g(t) = \sum_{k \in S} \alpha_k f_k(t), \qquad \text{where } \alpha_k = \int_0^1 g(t) f_k(t)\, dt. \]
Let $\{\eta_k\}_{k \in S}$ be a sequence of i.i.d. $N(0,1)$ random variables. Then
\[ W_t = \sum_{k \in S} \eta_k \int_0^t f_k(s)\, ds \]
is the Wiener process. This can be verified as follows. $W_t$ is obviously Gaussian,
since it is a linear combination of Gaussian random variables. Furthermore, $E W_t = 0$, and
\[ E\, W_t W_s = \sum_{k, q \in S} E(\eta_k \eta_q) \int_0^t f_k(\tau)\, d\tau \int_0^s f_q(\tau')\, d\tau' = \sum_{k \in S} \int_0^t f_k(\tau)\, d\tau \int_0^s f_k(\tau')\, d\tau'. \]
Denote by $\chi_t$ the indicator function of the interval $[0, t]$, i.e.
\[ \chi_t(\tau) = \begin{cases} 1 & \text{if } \tau \in [0, t], \\ 0 & \text{otherwise.} \end{cases} \]
Then
\[ \chi_t(\tau) = \sum_{k \in S} \left( \int_0^t f_k(\tau')\, d\tau' \right) f_k(\tau). \]
Using the Parseval equality$^1$ we have
\[ \sum_{k \in S} \left( \int_0^t f_k(\tau)\, d\tau \right) \left( \int_0^s f_k(\tau')\, d\tau' \right) = \int_0^1 \chi_t(\tau)\, \chi_s(\tau)\, d\tau = s \wedge t. \]
Hence
\[ E\, W_t W_s = t \wedge s. \]
This proves that $W_t$ is a Wiener process.
There are many ways to construct the $f_k$. One possibility is the Haar basis, where
\[ f^j_k(t) = 2^{j/2} f(2^j t - k), \qquad j, k \in \mathbb{Z}, \]
with
\[ f(t) = \begin{cases} 1 & \text{if } t \in (0, \tfrac12], \\ -1 & \text{if } t \in (\tfrac12, 1], \\ 0 & \text{otherwise.} \end{cases} \]
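Below is a sketch of the series construction with a truncated Haar basis on $[0,1]$, assuming Python with NumPy. Note that on $[0,1]$ the orthonormal basis consists of the constant function together with the wavelets $f^j_k$ for $j \ge 0$ and $0 \le k < 2^j$; the truncation level, grid resolution, and seed are illustrative choices:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
J, M = 10, 2048                        # truncation level, grid resolution
t = np.linspace(0.0, 1.0, M + 1)

def haar(j, k, s):
    """Haar function 2^(j/2) f(2^j s - k), with f as defined above."""
    u = 2 ** j * s - k
    return 2 ** (j / 2) * (((u > 0) & (u <= 0.5)).astype(float)
                           - ((u > 0.5) & (u <= 1)).astype(float))

# W_t = sum_k eta_k int_0^t f_k(s) ds, with the integrals done on the grid.
W = rng.standard_normal() * t          # the constant basis function gives eta * t
for j in range(J):
    for k in range(2 ** j):
        prim = np.concatenate(([0.0], np.cumsum(haar(j, k, t[1:]) / M)))
        W += rng.standard_normal() * prim
print(W[-1])                           # one sample of W_1
\end{verbatim}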
$^1$If $g(t) = \sum_{k \in S} \alpha_k f_k(t)$ and $h(t) = \sum_{k \in S} \beta_k f_k(t)$, the Parseval equality states that
\[ \int_0^1 g(t) h(t)\, dt = \sum_{k \in S} \alpha_k \beta_k. \]
This can be shown by inserting the representations of $g(t)$ and $h(t)$ in terms of the
$f_k(t)$ in the integral on the left-hand side, and using the orthonormality of the $f_k(t)$.
4.4 The Karhunen-Loève expansion of $W_t$
Since $W_t$ is a Gaussian process, it can be represented by the Karhunen-Loève
expansion introduced in the Appendix. Since the covariance function of $W_t$ is
$K(t, s) = t \wedge s$, the eigenvalue problem reads
\[ \int_0^1 (t \wedge s)\, \varphi(s)\, ds = \lambda \varphi(t), \]
or, equivalently,
\[ \int_0^t s\, \varphi(s)\, ds + t \int_t^1 \varphi(s)\, ds = \lambda \varphi(t). \]
Note that this equation implies that $\varphi(0) = 0$. Taking the time-derivative of
this equation gives
\[ \int_t^1 \varphi(s)\, ds = \lambda \dot\varphi(t), \]
where $\dot\varphi = d\varphi/dt$. This equation implies that $\dot\varphi(1) = 0$. Taking the time-derivative one more time gives
\[ -\varphi(t) = \lambda \ddot\varphi(t), \]
where $\ddot\varphi = d^2\varphi/dt^2$. The general solution of this equation for $\lambda > 0$ is
\[ \varphi(t) = A \sin(t/\sqrt{\lambda}) + B \cos(t/\sqrt{\lambda}), \]
where $A$, $B$ are constants. The boundary condition $\varphi(0) = 0$ implies that $B = 0$.
On the other hand, the boundary condition $\dot\varphi(1) = 0$ can only be satisfied for
specific values of $\lambda$:
\[ \lambda_k = \frac{4}{(2k+1)^2 \pi^2}, \qquad k = 0, 1, \ldots. \]
$A$ is then fixed by the orthonormality condition on $\varphi_k(t)$:
\[ 1 = \int_0^1 \varphi_k^2(t)\, dt = A^2 \int_0^1 \sin^2\bigl( (k + \tfrac12)\pi t \bigr)\, dt = \frac{A^2}{2}, \]
i.e. $A = \sqrt{2}$. Therefore $W_t$ can be represented via the Karhunen-Loève expansion as
\[ W_t = \sqrt{2}\, \sum_{k \ge 0} \eta_k\, \frac{2}{(2k+1)\pi}\, \sin\bigl( (k + \tfrac12)\pi t \bigr), \]
where the $\eta_k$ are i.i.d. Gaussian random variables with mean zero and variance one.
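Truncating this series gives a practical way to sample approximate Wiener paths and to check the covariance $E W_t W_s = t \wedge s$ empirically. A sketch, assuming Python with NumPy (truncation level, grid, sample size, and the test times are illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
K, M, n_paths = 200, 200, 20000
t = np.linspace(0.0, 1.0, M + 1)
k = np.arange(K)

# rows of B: sqrt(lambda_k) phi_k(t) = (2 sqrt(2)/((2k+1) pi)) sin((k+1/2) pi t)
B = (2 * np.sqrt(2) / ((2 * k[:, None] + 1) * np.pi)
     * np.sin((k[:, None] + 0.5) * np.pi * t[None, :]))

eta = rng.standard_normal((n_paths, K))
W = eta @ B                            # each row is one sampled path on the grid

i, j = M // 4, M // 2                  # t = 0.25 and s = 0.5
print((W[:, i] * W[:, j]).mean())      # close to min(0.25, 0.5) = 0.25
\end{verbatim}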
As an application of the Karhunen-Loève expansion, we compute
\[ A = E \exp\Bigl( -\mu \int_0^1 W_t^2\, dt \Bigr), \]
where $\mu \ge 0$ is a parameter. This expectation is the Laplace transform of the
density of the random variable $Z = \int_0^1 W_t^2\, dt$. Inserting the Karhunen-Loève
expansion for $W_t$ in this expectation gives
\[ A = E \exp\Bigl( -\mu \sum_{k, q \ge 0} \eta_k \eta_q \sqrt{\lambda_k} \sqrt{\lambda_q} \int_0^1 \varphi_k(t)\, \varphi_q(t)\, dt \Bigr) \]
with $\lambda_k$, $\varphi_k(t)$ as above. Using the orthonormality of the $\varphi_k(t)$, this reduces to
\[ A = E \exp\Bigl( -\mu \sum_{k \ge 0} \lambda_k \eta_k^2 \Bigr) = E \prod_{k \ge 0} e^{-\mu \lambda_k \eta_k^2}. \]
Since the $\eta_k$ are independent and, for $\mu \ge 0$,
\[ E\, e^{-\mu \lambda \eta_k^2} = \int_{\mathbb{R}} e^{-\mu \lambda z^2}\, \frac{e^{-\frac12 z^2}}{\sqrt{2\pi}}\, dz = \frac{1}{\sqrt{2\mu\lambda + 1}}, \]
we obtain
\[ A = \prod_{k \ge 0} \frac{1}{\sqrt{2\mu\lambda_k + 1}} = \frac{1}{\sqrt{g(\mu)}}, \]
where
\[ g(\mu) = \prod_{k \ge 0} (2\mu\lambda_k + 1). \]
From the expression above, the zeroes of $h(z) = g(z^2)$, viewed as a function of
the complex variable $z$, are
\[ z_k = \pm\frac{i}{\sqrt{2\lambda_k}} = \pm\frac{i}{4}\sqrt{2}\,\pi (2k+1). \]
These are precisely the zeroes of $\cosh(\sqrt{2}\, z)$, and since both functions equal $1$ at $z = 0$, this implies that $g(z^2) = \cosh(\sqrt{2}\, z)$, i.e.
\[ A = \frac{1}{\sqrt{\cosh(\sqrt{2\mu})}}. \]
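A crude Monte Carlo estimate over discretized paths confirms this formula. The following is a sketch, assuming Python with NumPy ($\mu$, the grid size, and the number of paths are illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
mu, M, n_paths = 1.0, 500, 20000
dW = rng.standard_normal((n_paths, M)) / np.sqrt(M)   # Brownian increments
W = np.cumsum(dW, axis=1)                             # paths on t = 1/M, ..., 1
Z = (W ** 2).mean(axis=1)                             # Riemann sum of int W_t^2 dt
print(np.exp(-mu * Z).mean())                         # Monte Carlo estimate of A
print(1 / np.sqrt(np.cosh(np.sqrt(2 * mu))))          # analytic value, ~0.678
\end{verbatim}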
4.5 The Wiener Measure
The distribution on $C[0, 1]$ of the Wiener process that we just constructed is
the Wiener measure. Note that we can express
\[ \rho_{t_1,\ldots,t_n}(y_1, \ldots, y_n) = Z_n^{-1} \exp\bigl( -I_{t_1,\ldots,t_n}(y_1, \ldots, y_n) \bigr). \]
Here $Z_n$ is a normalization factor,
\[ Z_n = (2\pi)^{n/2} \bigl( t_1 (t_2 - t_1) \cdots (t_n - t_{n-1}) \bigr)^{1/2}, \]
and
\[ I_{t_1,\ldots,t_n}(y_1, \ldots, y_n) = \frac12 \sum_{i=1}^n \frac{(y_i - y_{i-1})^2}{t_i - t_{i-1}}, \]
where $t_0 = 0$, $y_0 = 0$. Note that $I_{t_1,\ldots,t_n}$ can also be written as
\[ I_{t_1,\ldots,t_n}(y_1, \ldots, y_n) = \frac12 \sum_{i=1}^n \Bigl( \frac{y_i - y_{i-1}}{t_i - t_{i-1}} \Bigr)^2 (t_i - t_{i-1}). \]
Therefore, if we think of the $y_i$ as the values of some function $h(t)$ at times $t_i$,
i.e. $y_i = h(t_i)$, then $I_{t_1,\ldots,t_n}(y_1, \ldots, y_n)$ becomes an approximation to the functional
\[ I[h(\cdot)] = \frac12 \int_0^1 \dot{h}^2(t)\, dt. \]
Since the Wiener measure should result in the limit as $n \to \infty$, except for
the factor $Z_n$ we can express the Wiener measure formally as
\[ d\mu_W = Z^{-1} \exp\bigl( -I[h(\cdot)] \bigr)\, Dh(\cdot). \]
Of course, this expression is purely formal. The right-hand side is in the form
of an infinite-dimensional Lebesgue measure $Dh(\cdot) = \prod_{0 \le t \le 1} dh(t)$ (which does
not exist) with a density $Z^{-1} e^{-I[h]}$, where $Z$ is a normalization factor which,
viewed as the limit of $Z_n$ as $n \to \infty$, is zero. Nevertheless, this expression is
the basis for path integral techniques, and it is useful. The idea is that, given
a functional $A[W_\cdot]$ of the Wiener process, its expectation can in principle be
computed as
\[ E A[W_\cdot] = \frac{\displaystyle\int A[h(\cdot)]\, \exp\bigl( -I[h(\cdot)] \bigr)\, Dh(\cdot)}{\displaystyle\int \exp\bigl( -I[h(\cdot)] \bigr)\, Dh(\cdot)}. \]
The path integral in the denominator of this expression accounts for the normalization
factor $Z$. Since the latter is zero, we already know that the path integral
in the numerator must also vanish in order for the expectation to be finite.
To see how this comes about, let us compute the expectation
\[ A = E \exp\Bigl( -\mu \int_0^1 W_t^2\, dt \Bigr), \]
which we have already determined in the last section using the Karhunen-Loève
expansion. Here we shall use the expression
\[ A = \frac{B_1}{B_2}, \]
where $B_1$, $B_2$ are the following path integrals:
\[ B_1 = \int \exp\Bigl( -\mu \int_0^1 h^2(t)\, dt - \frac12 \int_0^1 \dot{h}^2(t)\, dt \Bigr)\, Dh(\cdot), \]
\[ B_2 = \int \exp\Bigl( -\frac12 \int_0^1 \dot{h}^2(t)\, dt \Bigr)\, Dh(\cdot). \]
Both are Gaussian integrals which, mimicking what one does in the finite-dimensional
setting, we know how to evaluate by determining the eigenvalues of the
symmetric kernels of the quadratic forms in the exponentials. Letting
$h(0) = \dot{h}(1) = 0$, after integration by parts these quadratic forms for $B_1$ and
$B_2$ are respectively
\[ \langle h, K_1 h \rangle = \int_0^1 h(t) \Bigl( 2\mu - \frac{d^2}{dt^2} \Bigr) h(t)\, dt, \]
and
\[ \langle h, K_2 h \rangle = \int_0^1 h(t) \Bigl( -\frac{d^2}{dt^2} \Bigr) h(t)\, dt. \]
The eigenvalues of the symmetric operators $K_1$ and $K_2$ can be determined by
solving the boundary value problems
\[ 2\mu\varphi - \ddot\varphi = \alpha\varphi, \qquad\text{and}\qquad -\ddot\varphi = \beta\varphi, \]
with boundary conditions $\varphi(0) = \dot\varphi(1) = 0$ in both cases. This is done as in
the last section, and the eigenvalues one obtains are
\[ \alpha_k = 2\mu + \tfrac14 (2k+1)^2 \pi^2 \qquad\text{and}\qquad \beta_k = \tfrac14 (2k+1)^2 \pi^2, \qquad k = 0, 1, \ldots \]
Therefore, $A$ can be expressed as the ratio of the following two infinite products,
\[ A = \frac{\Bigl( \prod_{k \ge 0} \tfrac14 (2k+1)^2 \pi^2 \Bigr)^{1/2}}{\Bigl( \prod_{k \ge 0} \bigl( 2\mu + \tfrac14 (2k+1)^2 \pi^2 \bigr) \Bigr)^{1/2}}. \]
Both of these products are divergent. However, pairing the terms in the numerator
and the denominator, their ratio can also be expressed as
\[ A = \prod_{k \ge 0} \sqrt{\frac{\tfrac14 (2k+1)^2 \pi^2}{2\mu + \tfrac14 (2k+1)^2 \pi^2}} = \prod_{k \ge 0} \frac{1}{\sqrt{2\mu\lambda_k + 1}}, \]
where $\lambda_k = 4/\bigl( (2k+1)^2 \pi^2 \bigr)$ are the eigenvalues of the Karhunen-Loève expansion
we determined in the last section. Therefore we recover the correct result for $A$.
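The effect of the pairing is easy to see numerically: the partial products of the paired ratio converge, while the separate products in the numerator and denominator do not. A sketch, assuming Python with NumPy ($\mu$ and the truncation are illustrative choices):
\begin{verbatim}
import numpy as np

mu, K = 1.0, 10 ** 6
k = np.arange(K)
beta = 0.25 * (2 * k + 1) ** 2 * np.pi ** 2     # eigenvalues of K_2
alpha = 2 * mu + beta                           # eigenvalues of K_1
paired = np.sqrt(np.prod(beta / alpha))         # product of the paired ratios
print(paired)                                   # matches the analytic value below
print(1 / np.sqrt(np.cosh(np.sqrt(2 * mu))))
\end{verbatim}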
Here is another example. Suppose V (x) is a
4.6 Properties of the Wiener Path
Lemma 4.6.1 (Independent increments). Given $0 \le t_1 < t_2 < t_3 < t_4 \le 1$,
the increments $W_{t_2} - W_{t_1}$ and $W_{t_4} - W_{t_3}$ are independent.
This is a direct consequence of (4.5).
Let
\[ C^\gamma = \Bigl\{ f \in C[0,1] : \sup_{0 \le s, t \le 1} \frac{|f(t) - f(s)|}{|t - s|^\gamma} < \infty \Bigr\}, \]
\[ C_{BV} = \bigl\{ f \in C[0,1] : f \text{ has bounded variation} \bigr\}, \]
\[ C_{H^1} = \Bigl\{ f \in C[0,1] : \int_0^1 \Bigl( \frac{df}{d\tau} \Bigr)^2 d\tau < \infty \Bigr\}. \]
$C^\gamma$ is the set of functions that are Hölder continuous with exponent $\gamma$. Then:
Theorem 4.6.1 (Regularity of the path).
1. If $0 < \gamma < \tfrac12$, then $P\{ W \in C^\gamma \} = 1$. If $\gamma \ge \tfrac12$, then $P\{ W \in C^\gamma \} = 0$.
2. $P\{ W \in C_{BV} \} = 0$.
3. $P\{ W \in C_{H^1} \} = 0$.
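Part 1 of the theorem can be illustrated empirically. Over neighbouring points of a grid of mesh $\Delta t$, the largest increment of the path is of order $\sqrt{\Delta t}$ (up to a logarithmic factor), so the ratio $\max_j |\Delta W_j| / (\Delta t)^\gamma$ shrinks under refinement when $\gamma < 1/2$ and blows up when $\gamma > 1/2$. A sketch, assuming Python with NumPy (a fresh path is sampled at each resolution, for simplicity):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
for n in (10 ** 3, 10 ** 5, 10 ** 7):
    dt = 1.0 / n
    dW = np.sqrt(dt) * rng.standard_normal(n)   # increments of one path
    m = np.abs(dW).max()
    print(n, m / dt ** 0.3, m / dt ** 0.7)      # first column shrinks, second grows
\end{verbatim}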
In fact, we can define the quadratic variation of $f$ by
\[ Q(f) = \lim_{|\Delta| \to 0} \sum_j \bigl( f(t_j) - f(t_{j-1}) \bigr)^2, \]
where $\Delta$ denotes a partition of $[0,1]$, $\Delta = \{ t_0 = 0, t_1, \ldots, t_{n-1}, t_n = 1 \}$, and
\[ |\Delta| = \max_{1 \le j \le n} (t_j - t_{j-1}). \]
Then:
Theorem 4.6.2. $P\{ Q(W) = 1 \} = 1$, i.e. almost all paths have the same
quadratic variation.
Moreover, for any fixed $T_1, T_2 \in [0,1]$ with $T_1 < T_2$, we can define
\[ Q_{T_1, T_2}(f) = \lim_{|\Delta| \to 0} \sum_{t_{j-1}, t_j \in (T_1, T_2]} \bigl( f(t_j) - f(t_{j-1}) \bigr)^2. \]
Then:
Theorem 4.6.3. $P\{ Q_{T_1, T_2}(W) = T_2 - T_1 \} = 1$.
This statement is sometimes written formally as
\[ (dW_t)^2 = dt. \]
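This, too, is easy to see numerically: along refining partitions, the sum of squared increments of a sampled path over $(T_1, T_2]$ stabilizes at $T_2 - T_1$. A sketch, assuming Python with NumPy ($T_1$, $T_2$, and the partition sizes are illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
T1, T2 = 0.2, 0.7
for n in (10 ** 2, 10 ** 4, 10 ** 6):
    t = np.linspace(0.0, 1.0, n + 1)
    dW = np.sqrt(1.0 / n) * rng.standard_normal(n)
    W = np.concatenate(([0.0], np.cumsum(dW)))
    inc = np.diff(W[(t >= T1) & (t <= T2)])     # increments inside (T1, T2]
    print(n, np.sum(inc ** 2))                  # approaches T2 - T1 = 0.5
\end{verbatim}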