
Chapter 3

1. Hamilton-Jacobi-Bellman
Fix a filtered probability space $(\Omega, \mathcal{F}, P)$, and let $X(t, \omega)$ be a Brownian motion with initial value $X(0) = x_0$ and parameters $(\mu, \sigma^2)$. Then $X$ can be written as
$$X(t) = X(0) + \mu t + \sigma W(t), \quad \text{all } t, \text{ all } \omega,$$
where $W$ is a Wiener process. A shorthand notation is the differential form
$$(1) \qquad dX(t) = \mu\, dt + \sigma\, dW(t).$$
More generally, suppose $X$ is a diffusion with initial value $X(0) = x_0$ and infinitesimal parameters $(\mu(t, x), \sigma^2(t, x))$. Then the differential form, the analog of (1), is
$$dX(t) = \mu(t, X(t))\, dt + \sigma(t, X(t))\, dW(t), \quad \text{all } t, \text{ all } \omega.$$
Let $F(t, x)$ be a function that is differentiable at least once in $t$ and twice in $x$. The total differential of $F(t, X(t, \omega))$ can be approximated with a Taylor series expansion. This leads to
$$dF = F_t\, dt + F_x\, dX + \tfrac{1}{2} F_{xx} (dX)^2 + \cdots$$
$$\;\;\; = F_t\, dt + F_x \left[\mu\, dt + \sigma\, dW\right] + \tfrac{1}{2} F_{xx} \left[\mu^2 (dt)^2 + 2\mu\sigma\, dt\, dW + \sigma^2 (dW)^2\right] + \cdots$$
where the dots indicate higher-order terms and $\mu$ and $\sigma$ are evaluated at $(t, X(t))$. Rearrange terms and drop those of order higher than $dt$ and $(dW)^2$ to obtain
$$(2) \qquad dF = F_t\, dt + \mu F_x\, dt + \sigma F_x\, dW + \tfrac{1}{2}\, \sigma^2 F_{xx} (dW)^2.$$
Since $E[dW] = 0$ and $E[(dW)^2] = dt$, taking expectations in (2) gives
$$E[dF] = \left[F_t + \mu F_x + \tfrac{1}{2}\, \sigma^2 F_{xx}\right] dt,$$
$$\mathrm{Var}[dF] = E\left[dF - E[dF]\right]^2 = \sigma^2 F_x^2\, dt.$$
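As a numerical sanity check (not part of the original derivation), the expectation formula can be verified by Monte Carlo for the test function $F(t, x) = x^2$, for which $F_t = 0$, $F_x = 2x$, $F_{xx} = 2$, so $E[dF] = (2\mu x + \sigma^2)\, dt$. The parameter values below are illustrative choices.

```python
import random

# Monte Carlo check of E[dF] = [F_t + mu*F_x + 0.5*sigma^2*F_xx] dt
# for F(t, x) = x^2.  Parameters are illustrative, not from the text.
mu, sigma = 0.5, 0.3
x0, dt = 1.0, 0.01
theory = (2 * mu * x0 + sigma**2) * dt   # [0 + mu*(2x) + 0.5*sigma^2*2] dt

random.seed(0)
n = 200_000
total = 0.0
for _ in range(n):
    # one Euler step of dX = mu dt + sigma dW
    x1 = x0 + mu * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
    total += x1**2 - x0**2               # dF = F(X(dt)) - F(X(0))
estimate = total / n

print(estimate, theory)                  # the two values agree to O(dt^2)
```

The residual difference is of order $(dt)^2$, exactly the terms dropped in the expansion above.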

1.1. Drift and variance stationary. We consider the case where the drift $\mu(x)$ and variance $\sigma^2(x)$ are stationary and $F$ is the discounted value of a stationary function. That is, $F(t, x) = e^{-rt} f(x)$, where $r \geq 0$ is the discount rate. For this case
$$(3) \qquad E\left[d(e^{-rt} f)\right] = \left[-rf + \mu f' + \tfrac{1}{2}\, \sigma^2 f''\right] e^{-rt}\, dt,$$
where $\mu, \sigma^2, f, f', f''$ are evaluated at $X(t)$. For $r = 0$ this equation reduces to
$$(4) \qquad E[df] = \left[\mu f' + \tfrac{1}{2}\, \sigma^2 f''\right] dt.$$

1.2. Bellman equation. Consider an infinite stream of returns
$$v(x_0) = \int_0^\infty e^{-\rho t}\, \pi(x(t))\, dt$$
with
$$\dot{x}(t) = g(x(t)), \quad t \geq 0, \quad x(0) = x_0.$$
In (1.2), consider $X$ as a diffusion with infinitesimal parameters $\mu(x)$ and $\sigma^2(x)$. Define $v(x_0)$ as the expected discounted value of the stream of returns given the initial state $X(0) = x_0$,
$$(5) \qquad v(x_0) = E\left[\int_0^\infty e^{-\rho t}\, \pi(X(t, \omega))\, dt \,\Big|\, X(0) = x_0\right].$$

For a small interval $\Delta t$, equation (5) has the Bellman property
$$v(x_0) = \pi(x_0)\, \Delta t + \frac{1}{1 + \rho \Delta t}\, E\left[v(X(\Delta t)) \,\big|\, X(0) = x_0\right].$$
Multiply by $(1 + \rho \Delta t)$ and subtract $v(x_0)$ to get
$$\rho\, v(x_0)\, \Delta t = \pi(x_0)(1 + \rho \Delta t)\, \Delta t + E\left[v(X(\Delta t)) - v(x_0) \,\big|\, X(0) = x_0\right].$$
Divide by $\Delta t$ and take the limit $\Delta t \to 0$:
$$\rho\, v(x_0) = \pi(x_0) + \frac{1}{dt}\, E\left[dv \,\big|\, X(0) = x_0\right].$$
It remains to evaluate $E[dv \mid X(0) = x_0]$. Recall the result (3) for the discounted value of a stationary function $F(t, x) = e^{-rt} f(x)$:
$$E\left[d(e^{-rt} f)\right] = \left[-rf + \mu f' + \tfrac{1}{2}\, \sigma^2 f''\right] e^{-rt}\, dt.$$
Applying the undiscounted case (4) with $f = v$ gives $\frac{1}{dt}\, E[dv \mid X(0) = x_0] = \mu(x_0)\, v'(x_0) + \tfrac{1}{2}\, \sigma^2(x_0)\, v''(x_0)$, and substituting this above yields the Hamilton-Jacobi-Bellman equation
$$\rho\, v(x) = \pi(x) + \mu(x)\, v'(x) + \tfrac{1}{2}\, \sigma^2(x)\, v''(x).$$
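A closed-form example (an illustrative choice, not from the text) can confirm the HJB equation numerically: for geometric Brownian motion, $\mu(x) = \mu x$, $\sigma(x) = \sigma x$, with return $\pi(x) = x$ and $\rho > \mu$, the value function is $v(x) = x / (\rho - \mu)$. The sketch below checks the HJB residual with central finite differences.

```python
# Check that v(x) = x / (rho - mu) satisfies
#   rho*v(x) = pi(x) + mu(x)*v'(x) + 0.5*sigma(x)^2*v''(x)
# for mu(x) = mu*x, sigma(x) = sigma*x, pi(x) = x.
# All parameter values are illustrative, not from the text.
rho, mu, sigma = 0.10, 0.04, 0.2

def v(x):
    return x / (rho - mu)

h = 1e-3
residuals = []
for x in (0.5, 1.0, 2.0):
    v1 = (v(x + h) - v(x - h)) / (2 * h)            # central-difference v'
    v2 = (v(x + h) - 2 * v(x) + v(x - h)) / h**2    # central-difference v''
    residuals.append(rho * v(x) - (x + mu * x * v1 + 0.5 * (sigma * x) ** 2 * v2))

print(residuals)   # all approximately zero
```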

2. Occupancy measure and local time


2.1. Occupancy measure. Define
$$m(A, t, \omega) = \int_0^t 1_A(X(s, \omega))\, ds, \quad A \in \mathcal{B}.$$
The value $m(A, t, \omega)$ is the total time the sample path $X(\cdot, \omega)$ has spent in the set $A$ up to date $t$.
Theorem 2.1. There exists a function $\ell : \mathbb{R} \times [0, \infty) \times \Omega \to \mathbb{R}_+$ with the property that $\ell(x, t, \omega)$ is jointly continuous in $(x, t)$ for almost every $\omega$ (a.e.) and
$$m(A, t, \omega) = \int_A \ell(x, t, \omega)\, dx, \quad A \in \mathcal{B}.$$
The process $\ell(x, t, \omega)$ is called a local time of $X$ at level $x$. It is a measure of the time that the process has spent at the state $x$. The theorem suggests that $\ell$ can play the role of a density.
Theorem 2.2. Let $f : \mathbb{R} \to \mathbb{R}$ be a bounded, measurable function. Then
$$\int_0^t f(X(s, \omega))\, ds = \int_{\mathbb{R}} f(x)\, \ell(x, t, \omega)\, dx, \quad t \geq 0.$$
An integral over time can be replaced by an integral over states, weighting outcomes by their local time $\ell$. In this sense $\ell$ plays the role of a density function. Define the discounted occupancy measure of the process $X$, call it $m(\,\cdot\,; r)$, by
$$m(A, t, \omega; r) = \int_0^t e^{-rs}\, 1_A(X(s, \omega))\, ds, \quad A \in \mathcal{B}, \quad t \geq 0.$$
The value $m(A, t, \omega; r)$ is the total discounted time, discounted at the rate $r$, that the sample path $X(\cdot, \omega)$ has spent in the set $A$ up to date $t$.
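Theorem 2.1 can be illustrated on a discretized Brownian path (a sketch; the step size, bin width, and the set $A = [0, 1)$ are illustrative assumptions, not from the text): approximate the local time $\ell$ by a histogram of time spent per bin, and check that integrating it over $A$ recovers the occupancy measure $m(A, t)$.

```python
import random
from collections import defaultdict

# Simulate a standard Brownian path on [0, 10] with step ds.
random.seed(1)
ds = 0.001
n = 10_000                        # horizon t = n * ds = 10
x, path = 0.0, []
for _ in range(n):
    x += ds ** 0.5 * random.gauss(0.0, 1.0)
    path.append(x)

a, b, width = 0.0, 1.0, 0.25      # A = [0, 1); bins aligned with A's endpoints

# occupancy measure m(A, t): total time the path spends in A
m_direct = sum(ds for x in path if a <= x < b)

# histogram local time: time per bin divided by bin width
ell = defaultdict(float)
for x in path:
    bin_left = width * (x // width)   # left edge of the bin containing x
    ell[bin_left] += ds / width

# integral of the local-time density over A (A is a union of whole bins)
m_from_ell = sum(ell[left] * width for left in ell if a <= left < b)

print(m_direct, m_from_ell)       # the two computations agree
```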

3. Optional stopping theorem


If $Z$ is a stochastic process and $T$ a stopping time, let $Z(T \wedge t)$ denote the stopped process defined by
$$Z(T \wedge t, \omega) = \begin{cases} Z(t, \omega) & \text{if } t < T(\omega), \\ Z(T(\omega), \omega) & \text{if } t \geq T(\omega). \end{cases}$$

Theorem 3.1. Let $Z$ be a (sub)martingale on the filtered space $(\Omega, \mathcal{F}, P)$ and $T$ a stopping time. Then
(i) $E[Z(0)] \leq E[Z(T \wedge t)] \leq E[Z(t)]$, all $t$;
(ii) if there exists $N < \infty$ such that $0 \leq T(\omega) \leq N$, all $\omega$, then $E[Z(0)] \leq E[Z(T)] \leq E[Z(N)]$,
with equality throughout when $Z$ is a martingale.
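A minimal Monte Carlo sketch of part (i) for a martingale (the symmetric random walk and all parameter values are illustrative choices, not from the text): for a symmetric random walk $Z$ stopped at $T$ = first exit from $(-5, 5)$ and truncated at $t = 50$, the expectation $E[Z(T \wedge t)]$ should equal $E[Z(0)] = 0$.

```python
import random

# E[Z(T ∧ t)] = E[Z(0)] for a martingale: symmetric random walk,
# T = first exit from (-5, 5), truncated at t = 50.  Illustrative only.
random.seed(2)

def stopped_value(t=50, barrier=5):
    z = 0
    for _ in range(t):
        if abs(z) >= barrier:        # T has occurred: freeze at Z(T)
            return z
        z += random.choice((-1, 1))
    return z                         # T > t: return Z(t)

n = 50_000
mean = sum(stopped_value() for _ in range(n)) / n
print(mean)                          # approximately E[Z(0)] = 0
```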
3.1. Optional stopping theorem, extended.
Let $S_t$ denote the set where the process has stopped by date $t$,
$$S_t = \{\omega : T(\omega) \leq t\}.$$
The expected value of the stopped process can be written as the sum
$$(6) \qquad E[Z_0] \leq E[Z_{T \wedge t}] = \int_\Omega Z_{T \wedge t}(\omega)\, dP(\omega) = \int_{S_t} Z_T(\omega)\, dP(\omega) + \int_{S_t^c} Z_t(\omega)\, dP(\omega), \quad \text{all } t,$$
with equality when $Z$ is a martingale.
Theorem 3.2 (Extension of optional stopping theorem). Let $Z$ be a (sub)martingale on the filtered space $(\Omega, \mathcal{F}, P)$ and $T$ a stopping time. If
(i) $P[T < \infty] = 1$,
(ii) $E[\,|Z(T)|\,] < \infty$,
(iii) $\lim_{t \to \infty} E[\,|Z(t)|\, 1_{T > t}\,] = 0$,
then $E[Z(0)] \leq E[Z(T)]$, with equality when $Z$ is a martingale.

Theorem 3.3. If $\{Z_k\}_{k=1}^n$ is a (sub)martingale and $\tau_1, \tau_2$ are stopping times with $1 \leq \tau_1 \leq \tau_2 \leq n$, then
$$E[Z_{\tau_2} \mid \mathcal{F}_{\tau_1}](\omega) \geq Z_{\tau_1}(\omega) \quad \text{a.e.},$$
with equality when $Z$ is a martingale. For a proof see Billingsley (1995, Theorem 35.2).
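The extended theorem can be illustrated with a gambler's-ruin sketch (the barriers and sample size are illustrative choices, not from the text): for a symmetric random walk stopped at the first hit of $\{-3, 5\}$, $T$ is finite a.s., $Z(T)$ is bounded, and $E[\,|Z(t)|\, 1_{T > t}\,] \to 0$, so conditions (i)-(iii) hold and $E[Z(T)] = E[Z(0)] = 0$.

```python
import random

# E[Z(T)] = E[Z(0)] for a symmetric random walk stopped at the
# first hit of {lo, hi}.  Barriers are illustrative, not from the text.
random.seed(3)

def z_at_T(lo=-3, hi=5):
    z = 0
    while lo < z < hi:
        z += random.choice((-1, 1))
    return z

n = 40_000
mean = sum(z_at_T() for _ in range(n)) / n
print(mean)   # approximately 0, which implies P[hit hi] = -lo / (hi - lo)
```

Setting $E[Z(T)] = p \cdot hi + (1 - p) \cdot lo = 0$ recovers the classical ruin probability $p = -lo/(hi - lo) = 3/8$ here.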
3.2. Martingale convergence theorem.
