for any $\mathcal{F}_{t_j}$-adapted (possibly random) variable $e_j$, such that $\mathbb{E}[e_j^2] < \infty$.
In this case, the simple function is given by the identity function. To calculate the integral, use the definition above:
\[
I_t(f) = \int_0^t dW_s = \sum_{j=0}^{n-1} \left( W_{t_{j+1}} - W_{t_j} \right) = W_t.
\]
Example 2 To have an intuitive explanation of what the stochastic integral does, regard
Wt as the price per share of an equity at time t (although with a small caveat... can
you guess why?), and think of the sequence of time points in the partition of [0, t] as the
trading dates in the asset. The quantity ej is then the position (i.e. the number of shares)
taken in the asset at each trading date and held to the next one. In such a situation, you
can calculate the gain from your trading strategy at each point in time $t_j$, which is given by
\[
e_j \left( W_{t_{j+1}} - W_{t_j} \right).
\]
The total gain over the time period [0, t] is given by It (f ).
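This trading interpretation is easy to simulate. The sketch below is illustrative only: the horizon, the number of trading dates and the threshold strategy are all assumed choices, not taken from the notes; any adapted position rule would do, as long as $e_j$ uses information up to $t_j$ only.

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 1.0, 252                            # horizon and number of trading dates (assumed)
dW = rng.normal(0.0, np.sqrt(T / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))  # W_{t_j}, the "share price"

# Position e_j chosen at t_j using only information available at t_j (adapted):
# an illustrative rule holding 1 share whenever the price is below 0.
e = (W[:-1] < 0.0).astype(float)

# Cumulative trading gain: I_t(f) = sum_j e_j (W_{t_{j+1}} - W_{t_j})
gain = np.cumsum(e * dW)
total_gain = float(gain[-1])
```

Note that `e` is computed from `W[:-1]`, i.e. from prices *before* each increment is realised; using `W[1:]` instead would make the strategy anticipating, which the Itô integral rules out.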
Proposition 4
i) $\mathbb{E}[I(f)] = 0$.
ii) $\mathbb{E}\left[ I^2(f) \right] = \mathbb{E}\left[ \int_0^\infty f_t^2\, dt \right]$ (Itô isometry).
iv) $I_t(f)$ is a $\mathbb{P}$-martingale.
Proof.
i) Using the definition of Itô integral and the properties of the conditional expectation, we get
\[
\mathbb{E}\left[ I(f) \right] = \mathbb{E}\left[ \int_0^\infty f_t\, dW_t \right]
= \mathbb{E}\left[ \sum_{j=0}^{\infty} e_j \left( W_{t_{j+1}} - W_{t_j} \right) \right]
= \sum_{j=0}^{\infty} \mathbb{E}\left[ e_j\, \mathbb{E}\left[ W_{t_{j+1}} - W_{t_j} \mid \mathcal{F}_{t_j} \right] \right]
= \sum_{j=0}^{\infty} \mathbb{E}\left[ e_j \right] \mathbb{E}\left[ W_{t_{j+1}} - W_{t_j} \right] = 0.
\]
ii) Since $e_j$ is adapted and the increments of a Brownian motion are independent of the past,
\[
\mathbb{E}\left[ e_j^2 \left( W_{t_{j+1}} - W_{t_j} \right)^2 \right] = \left( t_{j+1} - t_j \right) \mathbb{E}\left[ e_j^2 \right],
\tag{2}
\]
where the last equality follows from the Tower property and the distributional properties of the Wiener process. Also, for any $t_i < t_j$,
\[
\begin{aligned}
\mathbb{E}\left[ e_i e_j \left( W_{t_{i+1}} - W_{t_i} \right) \left( W_{t_{j+1}} - W_{t_j} \right) \right]
&= \mathbb{E}\left[ \mathbb{E}\left[ e_i e_j \left( W_{t_{i+1}} - W_{t_i} \right) \left( W_{t_{j+1}} - W_{t_j} \right) \mid \mathcal{F}_{t_j} \right] \right] \\
&= \mathbb{E}\left[ e_i e_j \left( W_{t_{i+1}} - W_{t_i} \right) \mathbb{E}\left[ W_{t_{j+1}} - W_{t_j} \mid \mathcal{F}_{t_j} \right] \right] \\
&= \mathbb{E}\left[ e_i e_j \left( W_{t_{i+1}} - W_{t_i} \right) \right] \mathbb{E}\left[ W_{t_{j+1}} - W_{t_j} \right] \\
&= 0.
\end{aligned}
\tag{3}
\]
Hence
\[
\mathbb{E}\left[ I^2(f) \right] = \mathbb{E}\left[ \left( \int_0^\infty f_t\, dW_t \right)^2 \right]
= \mathbb{E}\left[ \left( \sum_{j=0}^{\infty} e_j \left( W_{t_{j+1}} - W_{t_j} \right) \right)^2 \right]
\]
\[
= \mathbb{E}\left[ \sum_{j=0}^{\infty} e_j^2 \left( W_{t_{j+1}} - W_{t_j} \right)^2 \right]
+ \mathbb{E}\left[ \sum_i \sum_{j \neq i} e_i e_j \left( W_{t_{i+1}} - W_{t_i} \right) \left( W_{t_{j+1}} - W_{t_j} \right) \right]
= \sum_{j=0}^{\infty} \left( t_{j+1} - t_j \right) \mathbb{E}\left[ e_j^2 \right],
\]
where the last equality follows from (2) and (3). Therefore
\[
\mathbb{E}\left[ I^2(f) \right] = \sum_{j=0}^{\infty} \left( t_{j+1} - t_j \right) \mathbb{E}\left[ e_j^2 \right]
= \mathbb{E}\left[ \sum_{j=0}^{\infty} e_j^2 \left( t_{j+1} - t_j \right) \right]
= \mathbb{E}\left[ \int_0^\infty f_t^2\, dt \right].
\]
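Both properties i) and ii) can be checked numerically. The sketch below takes the adapted simple process $e_j = W_{t_j}$ (an illustrative choice) and compares the Monte Carlo estimates of the two sides of the isometry; grid size and path count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of Proposition 4 for the simple adapted process e_j = W_{t_j}.
T, n, n_paths = 1.0, 100, 50_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
W = np.concatenate((np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)), axis=1)

e = W[:, :-1]                               # e_j = W_{t_j}, F_{t_j}-measurable
I = np.sum(e * dW, axis=1)                  # I(f) = sum_j e_j (W_{t_{j+1}} - W_{t_j})

lhs = float(np.mean(I**2))                  # estimate of E[I^2(f)]
rhs = float(np.sum(np.mean(e**2, axis=0)) * dt)  # estimate of E[sum_j e_j^2 (t_{j+1}-t_j)]
```

The two estimates agree to within Monte Carlo error, and the sample mean of `I` is close to zero, consistent with i).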
To show iv), assume that $s = t_k$ and $t = t_l$ are points of the partition, with $k < l$. Then
\[
I_t(f) = \sum_{j=0}^{l-1} e_j \left( W_{t_{j+1}} - W_{t_j} \right)
= \sum_{j=0}^{k-1} e_j \left( W_{t_{j+1}} - W_{t_j} \right) + \sum_{j=k}^{l-1} e_j \left( W_{t_{j+1}} - W_{t_j} \right).
\]
Now calculate
\[
\mathbb{E}\left[ I_t(f) \mid \mathcal{F}_s \right]
= \mathbb{E}\left[ \sum_{j=0}^{l-1} e_j \left( W_{t_{j+1}} - W_{t_j} \right) \Big|\, \mathcal{F}_{t_k} \right]
= \mathbb{E}\left[ \sum_{j=0}^{k-1} e_j \left( W_{t_{j+1}} - W_{t_j} \right) \Big|\, \mathcal{F}_{t_k} \right]
+ \mathbb{E}\left[ \sum_{j=k}^{l-1} e_j \left( W_{t_{j+1}} - W_{t_j} \right) \Big|\, \mathcal{F}_{t_k} \right].
\]
The first conditional expectation equals $I_s(f)$, since each summand is $\mathcal{F}_{t_k}$-measurable; the second vanishes by the same conditioning argument used in i). Hence
\[
\mathbb{E}\left[ I_t(f) \mid \mathcal{F}_s \right] = I_s(f).
\]
and
\[
g_t = \begin{cases}
1 & \text{if } t \in (0, 1/2] \\
2 & \text{if } t \in (1/2, 3/4] \\
-1 & \text{if } t \in (3/4, 1]
\end{cases}
\]
and such that $f_t = g_t = 0$ if $t > 1$. Define $I_1(f) = \int_0^1 f_t\, dW_t$ and $I_1(g) = \int_0^1 g_t\, dW_t$.
b) Show that
\[
\mathbb{E}\left[ I_1(f)\, I_1(g) \right] = 0.
\]
Theorem 5 Let $f_t = f(t, \omega) \in H$ be an arbitrary process and $f_t^{(n)}$ a sequence of simple processes such that
\[
\lim_{n \to \infty} \mathbb{E}\left[ \int_0^\infty \left( f_t^{(n)} - f_t \right)^2 dt \right] = 0.
\]
Then
\[
\int_0^\infty f_t^{(n)}\, dW_t
\]
converges in mean square as $n \to \infty$ to a limit which depends only on $f_t$ and not on the approximating sequence chosen.
Proof. For the convergence in mean square we need to check that any two sequences, $f_t^{(m)}$ and $f_t^{(n)}$, of simple processes have the same limit, i.e. that
\[
\lim_{n, m \to \infty} \mathbb{E}\left[ \left( I\big( f^{(n)} \big) - I\big( f^{(m)} \big) \right)^2 \right] = 0.
\]
Note that
\[
\mathbb{E}\left[ \left( I\big( f^{(n)} \big) - I\big( f^{(m)} \big) \right)^2 \right]
= \mathbb{E}\left[ I\big( f^{(n)} - f^{(m)} \big)^2 \right]
= \mathbb{E}\left[ \left( \int_0^\infty \left( f_t^{(n)} - f_t^{(m)} \right) dW_t \right)^2 \right],
\]
which, by the Itô isometry, equals $\mathbb{E}\left[ \int_0^\infty \big( f_t^{(n)} - f_t^{(m)} \big)^2\, dt \right] \to 0$ as $n, m \to \infty$.
where $f_t^{(n)}$ is a sequence of elementary functions satisfying the condition of Theorem 5.
The properties of the Itô integral also hold for the more general stochastic integral defined above. The proof is based on the same limit procedure used in Theorem 5, but we do not explore the issue here.
To calculate it, we need to choose the approximating sequence for the function $f_t = W_t$. The simplest choice is
\[
f_t^{(n)} = \begin{cases}
W_{t_0} = 0 & \text{for } 0 \le t < t_1 \\
W_{t_1} & \text{for } t_1 \le t < t_2 \\
\quad \vdots \\
W_{t_{n-1}} & \text{for } t_{n-1} \le t < t_n = T.
\end{cases}
\]
You can easily check that the sequence $f_t^{(n)}$ satisfies the requirements of Theorem 5. Then, by virtue of Definition 6, we can calculate the stochastic integral as follows:
\[
I_t(f) = \int_0^T W_t\, dW_t = \lim_{n \to \infty} \int_0^T f_t^{(n)}\, dW_t
= \lim_{n \to \infty} \sum_{i=0}^{n-1} W_{t_i} \left( W_{t_{i+1}} - W_{t_i} \right)
= \lim_{n \to \infty} \left( \sum_{i=0}^{n-1} W_{t_{i+1}} W_{t_i} - \sum_{i=0}^{n-1} W_{t_i}^2 \right)
\]
\[
= \lim_{n \to \infty} \left( \sum_{i=0}^{n-1} W_{t_{i+1}} W_{t_i} - \frac{1}{2} \sum_{i=0}^{n-1} W_{t_i}^2 - \frac{1}{2} \sum_{i=1}^{n} W_{t_i}^2 + \frac{1}{2} W_{t_n}^2 \right),
\]
where the last equality follows from the fact that $W_{t_0} = 0$. Then, set $k = i - 1$; it follows that
\[
I_t(f) = \lim_{n \to \infty} \left( \sum_{i=0}^{n-1} W_{t_{i+1}} W_{t_i} - \frac{1}{2} \sum_{i=0}^{n-1} W_{t_i}^2 - \frac{1}{2} \sum_{k=0}^{n-1} W_{t_{k+1}}^2 + \frac{1}{2} W_{t_n}^2 \right)
= \frac{W_T^2}{2} - \frac{1}{2} \lim_{n \to \infty} \sum_{i=0}^{n-1} \left( W_{t_{i+1}} - W_{t_i} \right)^2
= \frac{W_T^2}{2} - \frac{T}{2}.
\]
Therefore
\[
\int_0^T W_t\, dW_t = \frac{W_T^2}{2} - \frac{T}{2}.
\]
Note that, for any deterministic, continuous and differentiable function $g(t)$ such that $g(0) = 0$,
\[
\int_0^T g(t)\, dg(t) = \frac{1}{2} g^2(T).
\]
The example shows that the same rule does not apply to Brownian motion, and this is because of the non-zero quadratic variation property. The term $\frac{T}{2}$ is, in fact, referred to as the Itô term. Finally, you can easily check that $I_t(f) = \int_0^T W_t\, dW_t$ is indeed a martingale.
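The Itô term is easy to see numerically: a left-point approximating sum along a single simulated path matches $W_T^2/2 - T/2$, not $W_T^2/2$. The grid size below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# One path of W on a fine grid
T, n = 1.0, 1000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Left-point (Itô) approximating sum for ∫_0^T W_t dW_t
ito_sum = float(np.sum(W[:-1] * dW))

# Closed form derived above: W_T^2/2 - T/2
closed_form = float(W[-1]**2 / 2 - T / 2)
```

The gap between `ito_sum` and `W[-1]**2 / 2` is half the realised quadratic variation, which is close to $T/2$ for a fine grid.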
Exercise 2 Let $Y_t := \int_0^t f_s\, dW_s$ for $f \in H$. Show that
Exercise 3 Consider a one-dimensional standard Brownian motion $W$ and let $b = \{ b_t : t \in [0, T] \}$ be a deterministic process such that
\[
\mathbb{E}\left[ \int_0^T |b_t|^2\, dt \right] < \infty.
\]
Show that
\[
\mathbb{E}\left[ e^{\int_0^t b_s\, dW_s} \right] = e^{\frac{1}{2} \int_0^t b_s^2\, ds}
\]
for any $0 \le t \le T$.
for τn → 0. [Hint: you might want to use the time-scale property of the Brownian
motion seen in Question 25, and then use the quadratic variation result].
Show that
\[
\int_0^T W_t \circ dW_t = \frac{W_T^2}{2}.
\]
[Hint: write the approximating sum that defines the Stratonovich integral as the sum of an approximating sum for the Itô integral over $\Pi/2 = \left\{ t_0, t_0^*, t_1, t_1^*, \ldots, t_{n-1}^*, t_n \right\}$ and $V^2(\Pi/2)$.]
The definition of stochastic integral for any $H$-process can be extended further to the case of measurable processes $f$ such that
\[
\int_0^T f_t^2\, dt < \infty \quad \text{a.s.}
\]
In this case, the stochastic integral is still a linear mapping and a continuous (local) martingale, but the Itô isometry no longer holds, as the integral
\[
\mathbb{E}\left[ \int_0^T f_t^2\, dt \right]
\]
need not be finite.
Proposition 7 Let $f \in S$. Then $\int_0^t f_u\, dW_u$ is a local martingale.
Proof. Let
\[
T_n = \inf\left\{ T \ge 0 : \int_0^T f_t^2\, dt \ge n \right\}, \quad n \in \mathbb{N}.
\]
Then $T_n$ is a stopping time. Consider now the truncated process $f_t^{(n)} = f_t \mathbf{1}_{(0, T_n]}(t)$. Then $f_t^{(n)}$ is $\mathcal{F}_t$-measurable. Also
\[
\int_0^\infty \left( f_t^{(n)} \right)^2 dt = \int_0^\infty f_t^2\, \mathbf{1}_{(0, T_n]}(t)\, dt = \int_0^{T_n} f_t^2\, dt \le n
\]
by definition of $T_n$. Hence $f_t^{(n)} \in H$. This implies that
\[
\int_0^{t \wedge T_n} f_u\, dW_u = \int_0^t f_u^{(n)}\, dW_u
\]
is a martingale.
\[
dX_t = b_t\, dt + \sigma_t\, dW_t,
\]
Lemma 9 Let $X$ be a one-dimensional Itô process described by the stochastic differential
\[
dX_t = b_t\, dt + \sigma_t\, dW_t.
\]
A Taylor expansion gives
\[
df(t, X_t) = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial X}\, dX_t + \frac{1}{2} \frac{\partial^2 f}{\partial t^2}\, dt^2
+ \frac{1}{2} \frac{\partial^2 f}{\partial X^2}\, dX_t^2 + \frac{\partial^2 f}{\partial t\, \partial X}\, dt\, dX_t + \cdots
\]
The required result follows from the properties of the quadratic variation and the cross variation.
3. Let
and Zt = Xt Yt . Then
Exercise 6 For each of the following processes, state whether it is an Itô process:
The existence and uniqueness conditions for such equations are contained in the fol-
lowing theorem that we state without proof.
Theorem 11 Under the following Lipschitz and growth conditions:
\[
|b(t, x) - b(t, y)| + |\sigma(t, x) - \sigma(t, y)| \le D |x - y| \quad \forall x, y \in \mathbb{R},\ t \in [0, T],
\]
\[
|b(t, x)| + |\sigma(t, x)| \le C \left( 1 + |x| \right) \quad \forall x \in \mathbb{R},\ t \in [0, T],
\]
$X_t$ is the unique solution to the stochastic differential equation
\[
dX_t = b(t, X_t)\, dt + \sigma(t, X_t)\, dW_t, \quad X_0 \in \mathcal{F}_0.
\]
The key tool to find the solution to a stochastic differential equation is Itô’s lemma.
Example 5 1. Consider the following stochastic differential equation
\[
dX_t = \sigma X_t\, dW_t.
\]
We can try to find the solution working in analogy with the ordinary differential equations case. So, let's try
\[
Y_t = \ln X_t.
\]
Then, applying Itô's lemma, we get
\[
dY_t = \frac{1}{X_t}\, dX_t - \frac{1}{2} \frac{1}{X_t^2} \left( dX_t \right)^2
= \sigma\, dW_t - \frac{\sigma^2}{2}\, dt.
\]
Now, integrating both sides returns
\[
X_t = X_0\, e^{\sigma W_t - \frac{\sigma^2}{2} t}. \tag{6}
\]
If you are not convinced, you can verify that (6) is indeed the solution to the given stochastic differential equation by using Itô's lemma again. In other words, calculate the stochastic differential of $X$ from (6):
\[
dX_t = -\frac{\sigma^2}{2} X_t\, dt + \sigma X_t\, dW_t + \frac{\sigma^2}{2} X_t\, dt = \sigma X_t\, dW_t.
\]
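You can also verify (6) numerically: an Euler-Maruyama discretisation of $dX_t = \sigma X_t\, dW_t$, driven by the same Brownian path as the closed-form solution, stays close to it. The parameter values and grid below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

sigma, X0, T, n = 0.2, 1.0, 1.0, 10_000    # illustrative parameters
dt = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Closed-form solution (6): X_t = X0 exp(sigma W_t - sigma^2 t / 2)
X_exact = X0 * np.exp(sigma * W - 0.5 * sigma**2 * t)

# Euler-Maruyama discretisation of dX_t = sigma X_t dW_t on the same noise path
X_em = np.empty(n + 1)
X_em[0] = X0
for i in range(n):
    X_em[i + 1] = X_em[i] + sigma * X_em[i] * dW[i]

max_err = float(np.max(np.abs(X_em - X_exact)))
```

The pathwise discrepancy shrinks as the grid is refined, consistent with (6) solving the equation.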
In this case, the analogy with the ODE case is not obvious. When this is the case, we might look for help in the auxiliary function method. Let's consider the homogeneous part of the stochastic differential equation (7).
Hence
\[
X_t e^{-\mu t} = X_0 + \sigma \int_0^t e^{-\mu s}\, dW_s,
\]
i.e.
\[
X_t = X_0 e^{\mu t} + \sigma \int_0^t e^{\mu (t - s)}\, dW_s.
\]
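The solution above has $\mathbb{E}[X_t] = X_0 e^{\mu t}$ and, by the Itô isometry, $\mathrm{Var}(X_t) = \sigma^2 (e^{2\mu t} - 1)/(2\mu)$ (this variance formula is my computation, not stated in the notes). A quick Monte Carlo sketch under assumed parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)

mu, sigma, X0 = 0.5, 0.3, 1.0               # illustrative constants, not from (7)
T, n, n_paths = 1.0, 200, 20_000
dt = T / n
s = dt * np.arange(n)                        # left endpoints of the grid

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
# Discretisation of X_T = X0 e^{mu T} + sigma ∫_0^T e^{mu (T - s)} dW_s
X_T = X0 * np.exp(mu * T) + sigma * np.sum(np.exp(mu * (T - s)) * dW, axis=1)

mean_mc, var_mc = float(X_T.mean()), float(X_T.var())
mean_th = float(X0 * np.exp(mu * T))                         # E[X_T]
var_th = float(sigma**2 * (np.exp(2 * mu * T) - 1) / (2 * mu))  # Itô isometry
```

Since the integrand is deterministic, $X_T$ is Gaussian, so matching the first two moments pins down its whole distribution.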
Exercise 7 Show that the Ornstein-Uhlenbeck process defined in Unit 4 is Gaussian with
1. Mean-reverting OU process
2.
\[
dX_t = \left( a X_t + b \right) dt + \left( \sigma X_t + \beta \right) dW_t, \quad X_0 = x \in \mathbb{R},
\]
where $a$, $b$, $\sigma$ and $\beta$ are given real constants.
3.
\[
dX_t = \frac{1}{X_t}\, dt + \alpha X_t\, dW_t, \quad X_0 = x \in \mathbb{R}_{++}.
\]
$d\left( e^{a t} r_t \right)$.
b) Using the previous result, obtain the solution $r_t$ of the equation (8).
d) Show that for each $t > 0$ the random variable $r_t$ takes negative values with positive probability.
where $r_t$ is the process satisfying the stochastic differential equation (8). Show that $R_t$ is Gaussian with
\[
\mathbb{E}(R_t) = b t + \left( r_0 - b \right) \frac{1 - e^{-a t}}{a},
\]
\[
\mathrm{Var}(R_t) = \frac{\sigma^2}{a^2} \left( t - \frac{3}{2a} + \frac{2 e^{-a t}}{a} - \frac{e^{-2 a t}}{2a} \right).
\]
a) Show that
\[
Y_t = a \left( 1 - t \right) + b t + \left( 1 - t \right) \int_0^t \frac{dW_s}{1 - s}.
\]
The process $Y$ is called the Brownian bridge.
In other words, $\frac{dm^{(1)}}{dt} = \mathbb{E}[b]$. If $b(t, X_t)$ is a linear function of $X_t$, we have an ODE for $m^{(1)}$.
If we want $m_t^{(2)} = \mathbb{E}[X_t^2 \mid X_0 = a]$, then we must first define $Y_t = X_t^2$, so that
Hence
\[
\frac{dm^{(2)}}{dt} = \mathbb{E}\left[ 2 X_t\, b(t, X_t) + \sigma^2(t, X_t) \,\middle|\, X_0 = a \right],
\]
which again will be soluble for appropriate $b$ and $\sigma$.
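As a concrete instance of this moment-ODE method, take the hypothetical linear dynamics $dX_t = -a X_t\, dt + \sigma\, dW_t$ (my choice for illustration, not an equation from the notes). Then the moment equations close and have explicit solutions, which a simulation reproduces:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical linear dynamics dX_t = -a X_t dt + sigma dW_t, for which
#   dm1/dt = E[b(t, X_t)]                = -a m1
#   dm2/dt = E[2 X_t b(t,X_t) + sigma^2] = -2 a m2 + sigma^2
a, sigma, x0 = 1.0, 0.4, 2.0
T, n, n_paths = 1.0, 500, 50_000
dt = T / n

X = np.full(n_paths, x0)
for _ in range(n):                           # Euler-Maruyama
    X = X - a * X * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)

# Closed-form solutions of the two moment ODEs at time T
m1 = float(x0 * np.exp(-a * T))
m2 = float(sigma**2 / (2 * a) + (x0**2 - sigma**2 / (2 * a)) * np.exp(-2 * a * T))
```

The sample mean and sample second moment of `X` at time $T$ agree with `m1` and `m2` up to discretisation and Monte Carlo error.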
a) Find a differential equation satisfied by $m^{(1)}(t) = \mathbb{E}(X_t)$ for $0 < t < 1$ and also by $\mathbb{E}[X_t \mid X_{t_0} = a]$ for $t_0 < t < 1$.
b) Show that $\mathbb{E}[X_t] = 0$ for $0 < t < 1$ and that $\mathbb{E}[X_t \mid X_{t_0} = a] = \frac{1 - t}{1 - t_0}\, a$.
d) Find a differential equation satisfied by $m^{(2)}(t) = \mathbb{E}(X_t^2)$ for $0 < t < 1$ and solve it.
\[
\frac{\partial}{\partial t} M(t, s) = \frac{1}{dt} \mathbb{E}[dY_t]
= \mathbb{E}\left[ s e^{s X_t} b(X_t) + \frac{1}{2} s^2 e^{s X_t} \sigma^2(X_t) \right].
\]
If $X_0$ has the steady-state density $\pi(x)$, then $M(t, s)$ is the same for all $t$, so that, for every $s$,
\[
0 = \int s e^{s x} \left( b(x) + \frac{1}{2} s \sigma^2(x) \right) \pi(x)\, dx.
\]
The Laplace transform of a function is zero only if the function is zero, i.e. $\int_a^b e^{s x} h(x)\, dx = 0$ for all $s$ if and only if $h(x) = 0$ for all $x \in (a, b)$. Also,
\[
\int_a^b e^{s x} \frac{dh}{dx}\, dx = \left[ h(x) e^{s x} \right]_a^b - s \int_a^b e^{s x} h(x)\, dx.
\]
Therefore
\[
0 = \int_a^b e^{s x} \left( b(x) \pi(x) - \frac{1}{2} \frac{d}{dx}\left[ \sigma^2(x) \pi(x) \right] \right) dx
+ \frac{1}{2} \left( \sigma^2(b) \pi(b) e^{s b} - \sigma^2(a) \pi(a) e^{s a} \right).
\]
So, for a steady-state distribution to exist, we require that
\[
b(x) \pi(x) = \frac{1}{2} \frac{d}{dx}\left[ \sigma^2(x) \pi(x) \right]
\]
for $a < x < b$ and that $\sigma^2(b) \pi(b) = \sigma^2(a) \pi(a) = 0$.
\[
\sigma^2 \frac{d\pi}{dx} = -2 \alpha x\, \pi(x),
\]
giving $\pi(x) = K \exp\left( -\frac{\alpha x^2}{\sigma^2} \right)$. We recognise this as the density of a Normal distribution with mean 0 and variance $\sigma^2 / (2\alpha)$.
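A long simulation confirms this stationary law. The sketch assumes the Ornstein-Uhlenbeck dynamics $dX_t = -\alpha X_t\, dt + \sigma\, dW_t$ (consistent with the density $\pi$ just derived, but the dynamics themselves and all parameter values are assumptions here):

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed OU dynamics dX_t = -alpha X_t dt + sigma dW_t
alpha, sigma = 1.0, 0.5
dt, n_steps, n_paths = 0.01, 2000, 20_000   # run to t = 20 >> 1/alpha

X = np.zeros(n_paths)
for _ in range(n_steps):                     # Euler-Maruyama
    X = X - alpha * X * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)

stationary_var = sigma**2 / (2 * alpha)      # predicted N(0, sigma^2/(2 alpha))
```

After many mean-reversion times, the cross-sectional sample mean and variance of `X` match the predicted stationary moments.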
One possible use of the steady-state distribution is to show that, if a < X0 < b, then
a < Xt < b for all t > 0.
Remark. A stochastic process which possesses a stationary distribution has constant
variance. We have seen that the variance of a martingale increases with time. Therefore
no martingale possesses a stationary distribution.
You should find that there is no density function π(x) which satisfies the differential
equation you derive. Why is this not surprising?
• Mean
\[
a + \frac{t - s}{T - s} \left( b - a \right)
\]
• Variance
\[
\frac{(t - s)(T - t)}{T - s}
\]
• Covariance
\[
\frac{(t \wedge z - s)(T - t \vee z)}{T - s}
\]
Note that the Brownian bridge cannot be written as a stochastic integral of a deterministic integrand, since the variance of the Brownian bridge is a non-monotone function of time, whilst all stochastic integrals have variance which is non-decreasing in $t$. However, we can obtain a process with the same distribution as $B_t^{(a,s) \to (b,T)}$ as a scaled stochastic integral:
\[
Y_t = a + \frac{t - s}{T - s} \left( b - a \right) + \left( T - t \right) \int_s^t \frac{1}{T - u}\, dW_u.
\]
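The representation can be checked against the bridge moments listed above. The sketch discretises the stochastic integral on an assumed grid and compares the sample mean and variance of $Y_t$ at one interior time with the formulas $a + \frac{t-s}{T-s}(b-a)$ and $\frac{(t-s)(T-t)}{T-s}$; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Bridge from a at time s to b at time T, evaluated at one interior time t
a, b, s, T, t = 0.0, 1.0, 0.0, 1.0, 0.6
n, n_paths = 400, 20_000
du = (t - s) / n
u = s + du * np.arange(n)                    # left endpoints of the grid on [s, t]

dW = rng.normal(0.0, np.sqrt(du), size=(n_paths, n))
integral = np.sum(dW / (T - u), axis=1)      # ∫_s^t dW_u / (T - u), discretised
Y = a + (t - s) / (T - s) * (b - a) + (T - t) * integral

mean_th = a + (t - s) / (T - s) * (b - a)    # bridge mean
var_th = (t - s) * (T - t) / (T - s)         # bridge variance
```

Because the integrand is deterministic, $Y_t$ is Gaussian, so agreement of mean and variance is enough to match the bridge marginal at time $t$.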
\[
X = W_{t_j} - W_{t_i}; \quad Y = W_{t_k} - W_{t_j}; \quad Z = W_{t_k} - W_{t_i} = X + Y.
\]
It follows that $X$ and $Y$ are independent; moreover, $X \sim N(0, \sigma_X^2)$, $Y \sim N(0, \sigma_Y^2)$ and $Z \sim N(0, \sigma_Z^2)$, where
\[
\sigma_X^2 = t_j - t_i; \quad \sigma_Y^2 = t_k - t_j; \quad \sigma_Z^2 = t_k - t_i = \sigma_X^2 + \sigma_Y^2.
\]
\[
f_{X|Z}(x) = \frac{f_X(x)\, f_Y(z - x)}{f_Z(z)} = \frac{1}{B \sqrt{2\pi}}\, e^{-\frac{1}{2} \left( \frac{x - A z}{B} \right)^2},
\]
where $A = \sigma_X^2 / \sigma_Z^2$ and $B = \sigma_X \sigma_Y / \sigma_Z$. Hence, we can claim that, conditioning on the knowledge of the process value at time $t_k$, $W_{t_j} - W_{t_i} \sim N(A z, B^2)$; from which it follows that
\[
W_{t_j} = \frac{t_k - t_j}{t_k - t_i}\, W_{t_i} + \frac{t_j - t_i}{t_k - t_i}\, W_{t_k}
+ \sqrt{\frac{(t_k - t_j)(t_j - t_i)}{t_k - t_i}}\, \varepsilon, \quad \varepsilon \sim N(0, 1).
\]
But this is the Brownian bridge from $W_{t_i}$ to $W_{t_k}$ on $[t_i, t_k]$.
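The conditional-sampling formula above translates directly into a "fill-in" routine. As a sanity check (my construction, with illustrative sizes), filling in $W_{1/2}$ given a simulated $W_1$ must reproduce the unconditional law $W_{1/2} \sim N(0, 1/2)$:

```python
import numpy as np

rng = np.random.default_rng(8)

def fill_in(W_i, W_k, t_i, t_j, t_k, rng):
    """Sample W_{t_j} given W_{t_i} and W_{t_k} using the bridge formula above."""
    w1 = (t_k - t_j) / (t_k - t_i)
    w2 = (t_j - t_i) / (t_k - t_i)
    std = np.sqrt((t_k - t_j) * (t_j - t_i) / (t_k - t_i))
    return w1 * W_i + w2 * W_k + std * rng.standard_normal(np.shape(W_k))

n_paths = 100_000
W1 = rng.normal(0.0, 1.0, n_paths)           # W at t_k = 1
W_half = fill_in(0.0, W1, 0.0, 0.5, 1.0, rng)  # bridge sample of W at t_j = 0.5
```

`W_half` has mean 0, variance $1/2$ and covariance $1/2$ with `W1`, exactly the joint moments of $(W_{1/2}, W_1)$.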
Given this property of the Brownian bridge, we can use this process together with
stratification to generate trajectories of the Brownian motion, which are better spread
over the probability space. The idea is that you generate first a stratified sample from
a normal distribution, with mean 0 and variance T . This gives you a random sample
of the Brownian motion at the end of the observation period. Then, you “fill in” the
values that you need to generate the full trajectory between time 0 and time $T$ using the Brownian bridge. Figure 1 shows sample trajectories for the Wiener process, the arithmetic Brownian motion and the geometric Brownian motion generated using this technique. If you compare them with the trajectories illustrated in Figure 3.3, you can see that the stratification approach with Brownian bridge reduces the variance of your Monte Carlo estimate.

[Figure 1: 10 stratified sample trajectories of the Wiener process $W_t$, the arithmetic Brownian motion $B_t = \mu t + \sigma W_t$ and the geometric Brownian motion $X_t = e^{(\mu - \sigma^2/2) t + \sigma W_t}$. Parameter set: $T = 1$ year; $\mu = 0.1$ p.a.; $\sigma = 0.2$ p.a.]
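The two-step recipe (stratify the terminal value, then fill in with the bridge) can be sketched as follows. The stratification rule (midpoint of each equal-probability stratum) and the bisection order are my illustrative choices; the grid sizes are arbitrary.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(9)

T, n_steps, n_paths = 1.0, 8, 1000
nd = NormalDist()

# 1) Stratified sample of W_T ~ N(0, T): one value per equal-probability stratum,
#    taken at the stratum midpoint for simplicity.
u = (np.arange(n_paths) + 0.5) / n_paths
W_T = np.sqrt(T) * np.array([nd.inv_cdf(p) for p in u])

# 2) "Fill in" the trajectory by recursive bisection with the Brownian bridge
t = np.linspace(0.0, T, n_steps + 1)
W = np.zeros((n_paths, n_steps + 1))
W[:, -1] = W_T

def bridge_fill(W, t, i, k):
    """Sample W at the midpoint index between i and k, then recurse on each half."""
    if k - i < 2:
        return
    j = (i + k) // 2
    mean = ((t[k] - t[j]) * W[:, i] + (t[j] - t[i]) * W[:, k]) / (t[k] - t[i])
    std = np.sqrt((t[k] - t[j]) * (t[j] - t[i]) / (t[k] - t[i]))
    W[:, j] = mean + std * rng.standard_normal(W.shape[0])
    bridge_fill(W, t, i, j)
    bridge_fill(W, t, j, k)

bridge_fill(W, t, 0, n_steps)
```

The stratification is applied where it matters most, at the terminal value that typically drives the payoff, while the bridge fill-in keeps each trajectory distributed as a genuine Brownian path.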