
Itô calculus, named after Kiyoshi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (Wiener process). It has important applications in mathematical finance and stochastic differential equations. The central concept is the Itô stochastic integral, a generalization of the ordinary Riemann–Stieltjes integral. The generalization is in two respects. First, we are now dealing with random variables (more precisely, stochastic processes). Second, we are integrating with respect to a non-differentiable function (technically, a stochastic process). The Itô integral allows one to integrate one stochastic process (the integrand) with respect to another stochastic process (the integrator). It is common for the integrator to be the Brownian motion (also see Wiener process). The result of the integration is another stochastic process. In particular, the integral from 0 to any particular time t is a random variable. This random variable is defined as a limit of a certain sequence of random variables. (There are several equivalent ways to construct a definition.) Roughly speaking, we choose a sequence of partitions of the interval from 0 to t and construct Riemann sums. It is important which point in each of the small intervals is used to compute the value of the function; typically, the left end of the interval is used. (In mathematical finance this is conceptualized as first deciding what to do, then observing the change in the prices. The integrand is how much stock we hold, the integrator represents the movement of the prices, and the integral is how much money we have in total, including what our stock is worth, at any given moment.) Every Riemann sum uses a particular instantiation of the integrator. The limit is then taken in probability as the mesh of the partition goes to zero. (Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions.) The usual notation for the Itô stochastic integral is

Y_t = ∫₀ᵗ H dX,

where X is a Brownian motion or, more generally, a semimartingale and H is a locally square-integrable process adapted to the filtration generated by X (Revuz & Yor 1999, Chapter IV). The paths of Brownian motion fail to satisfy the requirements needed to apply the standard techniques of calculus. In particular, Brownian motion is not differentiable at any point and has infinite variation over every time interval. As a result, the integral cannot be defined in the usual way (see Riemann–Stieltjes integral). The main insight is that the integral can be defined as long as the integrand H is adapted, which loosely speaking means that its value at time t can only depend on information available up until this time. The prices of stocks and other traded financial assets can be modeled by stochastic processes such as Brownian motion or, more often, geometric Brownian motion (see Black–Scholes). Then, the Itô stochastic integral represents the payoff of a continuous-time trading strategy consisting of holding an amount H_t of the stock at time t. In this situation, the condition that H is adapted corresponds to the necessary restriction that the trading strategy can only make use of the available information at any time. This prevents the possibility of unlimited gains through high-frequency trading: buying the stock just before each uptick in the market and selling before each downtick. Similarly, the condition that H is adapted implies that the stochastic integral will not diverge when calculated as a limit of Riemann sums (Revuz & Yor 1999, Chapter IV).
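Both the left-endpoint convention and the adaptedness restriction can be illustrated numerically. The sketch below (seed, mesh, and the choice of integrand H = B are my own illustrative choices, not part of the original text) approximates ∫₀ᵗ B dB by left-endpoint Riemann sums, compares it with the Itô closed form (B_t² − t)/2, and shows that a "look-ahead" trader who evaluates the integrand at the right endpoint gains exactly the realized quadratic variation, which converges to t rather than vanishing:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)        # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))  # Brownian path on the grid

# Adapted (Itô) sum: the integrand is evaluated *before* each increment.
left_sum = np.sum(B[:-1] * dB)

# Itô calculus predicts ∫₀ᵀ B dB = (B_T² − T)/2.
closed_form = (B[-1] ** 2 - T) / 2.0

# Non-adapted sum: peeking at each increment before choosing the position.
right_sum = np.sum(B[1:] * dB)

# The look-ahead advantage is exactly the realized quadratic variation
# Σ(ΔB)², which tends to T (not 0) as the mesh shrinks.
advantage = right_sum - left_sum
print(left_sum, closed_form, advantage)
```

On a fine partition the left-endpoint sum and the closed form agree closely; the remaining discrepancy is the gap between Σ(ΔB)² and its limit T.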

Important results of Itô calculus include the integration by parts formula and Itô's lemma, which is a change of variables formula. These differ from the formulas of standard calculus, due to quadratic variation terms.

Contents

1 Notation
2 Integration with respect to Brownian motion
3 Itô processes
4 Semimartingales as integrators
5 Properties
6 Integration by parts
7 Itô's lemma
8 Martingale integrators
  8.1 Local martingales
  8.2 Square integrable martingales
  8.3 p-Integrable martingales
9 Existence of the integral
10 Differentiation in Itô calculus
  10.1 Malliavin derivative
  10.2 Martingale representation
11 Itô calculus for physicists
12 See also
13 References

Notation
The process Y defined as before as

Y_t = ∫₀ᵗ H dX

is itself a stochastic process with time parameter t, which is also sometimes written as Y = H·X (Rogers & Williams 2000). Alternatively, the integral is often written in differential form dY = H dX, which is equivalent to Y − Y₀ = H·X. As Itô calculus is concerned with continuous-time stochastic processes, it is assumed that an underlying filtered probability space (Ω, F, (F_t)_{t≥0}, P) is given.

The sigma-algebra F_t represents the information available up until time t, and a process X is adapted if X_t is F_t-measurable. A Brownian motion B is understood to be an F_t-Brownian motion, which is just a standard Brownian motion with the properties that B_t is F_t-measurable and that B_{t+s} − B_t is independent of F_t for all s, t ≥ 0 (Revuz & Yor 1999).

Integration with respect to Brownian motion



The Itô integral can be defined in a manner similar to the Riemann–Stieltjes integral, that is, as a limit in probability of Riemann sums; such a limit does not necessarily exist pathwise. Suppose that B is a Wiener process (Brownian motion) and that H is a left-continuous, adapted and locally bounded process. If {π_n} is a sequence of partitions of [0, t] with mesh going to zero, then the Itô integral of H with respect to B up to time t is the random variable

∫₀ᵗ H dB = lim_{n→∞} Σ_{[t_{i−1}, t_i] ∈ π_n} H_{t_{i−1}} (B_{t_i} − B_{t_{i−1}}).

It can be shown that this limit exists in probability. For some applications, such as martingale representation theorems and local times, the integral is needed for processes that are not continuous. The predictable processes form the smallest class that is closed under taking limits of sequences and contains all adapted left-continuous processes. If H is any predictable process such that ∫₀ᵗ H_s² ds < ∞ for every t ≥ 0 then the integral of H with respect to B can be defined, and H is said to be B-integrable. Any such process can be approximated by a sequence Hⁿ of left-continuous, adapted and locally bounded processes, in the sense that

∫₀ᵗ (H_s − H_sⁿ)² ds → 0

in probability. Then, the Itô integral is

∫₀ᵗ H dB = lim_{n→∞} ∫₀ᵗ Hⁿ dB

where, again, the limit can be shown to exist in probability. The stochastic integral satisfies the Itô isometry

E[(∫₀ᵗ H dB)²] = E[∫₀ᵗ H_s² ds]

which holds when H is bounded or, more generally, when the integral on the right hand side is finite.
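The Itô isometry can be probed by Monte Carlo. In the sketch below (path count, step count, and the integrand H = B are my own illustrative choices) both sides are estimated on the same simulated paths:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, T = 20_000, 200, 1.0
dt = T / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.concatenate((np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)), axis=1)

# Take H = B itself, evaluated at left endpoints so that it is adapted.
stoch_int = np.sum(B[:, :-1] * dB, axis=1)            # ∫₀ᵀ B dB on each path
lhs = np.mean(stoch_int ** 2)                         # E[(∫ H dB)²]
rhs = np.mean(np.sum(B[:, :-1] ** 2, axis=1) * dt)    # E[∫ H² ds]
print(lhs, rhs)   # both ≈ T²/2 = 0.5
```

Here E[∫₀¹ B_s² ds] = ∫₀¹ s ds = 1/2, so both estimates cluster around 0.5, up to Monte Carlo and discretization error.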

Itô processes
An Itô process is defined to be an adapted stochastic process which can be expressed as the sum of an integral with respect to Brownian motion and an integral with respect to time,

X_t = X₀ + ∫₀ᵗ σ_s dB_s + ∫₀ᵗ μ_s ds.

Here, B is a Brownian motion and it is required that σ is a predictable B-integrable process, and μ is predictable and (Lebesgue) integrable. That is,

∫₀ᵗ (σ_s² + |μ_s|) ds < ∞

for each t. The stochastic integral can be extended to such Itô processes,

∫₀ᵗ H dX = ∫₀ᵗ H_s σ_s dB_s + ∫₀ᵗ H_s μ_s ds.

This is defined for all locally bounded and predictable integrands. More generally, it is required that Hσ be B-integrable and Hμ be Lebesgue integrable, so that ∫₀ᵗ (H²σ² + |Hμ|) ds is finite. Such predictable processes H are called X-integrable. An important result for the study of Itô processes is Itô's lemma. In its simplest form, for any twice continuously differentiable function f on the reals and Itô process X as described above, it states that f(X) is itself an Itô process satisfying

df(X_t) = f′(X_t) σ_t dB_t + f′(X_t) μ_t dt + ½ f″(X_t) σ_t² dt.

This is the stochastic calculus version of the change of variables formula and chain rule. It differs from the standard result due to the additional term involving the second derivative of f, which comes from the property that Brownian motion has non-zero quadratic variation.
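The extra second-derivative term is already visible in the discrete algebra behind Itô's lemma. For f(x) = x², summing the exact identity b² − a² = 2a(b − a) + (b − a)² along a Brownian path gives B_T² = 2∫B dB + [B]_T, with [B]_T ≈ T. A sketch (seed and step count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 100_000
dB = rng.normal(0.0, np.sqrt(T / n), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

ito_term = 2.0 * np.sum(B[:-1] * dB)   # 2 ∫ B dB via left-endpoint sums
qv_term = np.sum(dB ** 2)              # realized quadratic variation [B]_T ≈ T

# Exact telescoping identity behind Itô's lemma for f(x) = x²:
# B_T² = 2 Σ B_i ΔB_i + Σ (ΔB_i)².
print(B[-1] ** 2, ito_term + qv_term)
```

Dropping the quadratic variation term, i.e. using the ordinary chain rule d(B²) = 2B dB, would be off by T.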

Semimartingales as integrators


The Itô integral is defined with respect to a semimartingale X. These are processes which can be decomposed as X = M + A for a local martingale M and a finite variation process A. Important examples of such processes include Brownian motion, which is a martingale, and Lévy processes. For a left-continuous, locally bounded and adapted process H the integral H·X exists, and can be calculated as a limit of Riemann sums. Let {π_n} be a sequence of partitions of [0, t] with mesh going to zero; then

∫₀ᵗ H dX = lim_{n→∞} Σ_{[t_{i−1}, t_i] ∈ π_n} H_{t_{i−1}} (X_{t_i} − X_{t_{i−1}}).

This limit exists in probability. The stochastic integral of left-continuous processes is general enough for studying much of stochastic calculus. For example, it is sufficient for applications of Itô's lemma, changes of measure via Girsanov's theorem, and for the study of stochastic differential equations. However, it is inadequate for other important topics such as martingale representation theorems and local times. The integral extends to all predictable and locally bounded integrands, in a unique way, such that the dominated convergence theorem holds. That is, if Hⁿ → H and |Hⁿ| ≤ J for a locally bounded process J, then ∫₀ᵗ Hⁿ dX → ∫₀ᵗ H dX in probability. The uniqueness of the extension from left-continuous to predictable integrands is a result of the monotone class lemma. In general, the stochastic integral H·X can be defined even in cases where the predictable process H is not locally bounded. If K = 1 / (1 + |H|) then K and KH are bounded. Associativity of stochastic integration implies that H is X-integrable, with integral H·X = Y, if and only if Y₀ = 0 and K·Y = (KH)·X. The set of X-integrable processes is denoted by L(X).

Properties
The following properties can be found for example in (Revuz & Yor 1999) and (Rogers & Williams 2000):

The stochastic integral is a càdlàg process. Furthermore, it is a semimartingale. The discontinuities of the stochastic integral are given by the jumps of the integrator multiplied by the integrand. The jump of a càdlàg process at a time t is X_t − X_{t−}, and is often denoted by ΔX_t. With this notation, Δ(H·X) = H ΔX. A particular consequence of this is that integrals with respect to a continuous process are always themselves continuous. Associativity. Let J, K be predictable processes, with K X-integrable. Then, J is K·X-integrable if and only if JK is X-integrable, in which case

J·(K·X) = (JK)·X.

Dominated convergence. Suppose that Hⁿ → H and |Hⁿ| ≤ J, where J is an X-integrable process. Then Hⁿ·X → H·X. Convergence is in probability at each time t; in fact, it converges uniformly on compacts in probability. The stochastic integral commutes with the operation of taking quadratic covariations. If X and Y are semimartingales then any X-integrable process will also be [X, Y]-integrable, and [H·X, Y] = H·[X, Y]. A consequence of this is that the quadratic variation process of a stochastic integral is equal to an integral of the quadratic variation process,

[H·X] = H²·[X].

Integration by parts


As with ordinary calculus, integration by parts is an important result in stochastic calculus. The integration by parts formula for the Itô integral differs from the standard result due to the inclusion of a quadratic covariation term. This term comes from the fact that Itô calculus deals with processes having non-zero quadratic variation, which only occurs for infinite variation processes (such as Brownian motion). If X and Y are semimartingales then

X_t Y_t = X₀ Y₀ + ∫₀ᵗ X_{s−} dY_s + ∫₀ᵗ Y_{s−} dX_s + [X, Y]_t

where [X, Y] is the quadratic covariation process. The result is similar to the integration by parts theorem for the Riemann–Stieltjes integral but has an additional quadratic variation term.
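At the level of discrete sums, the integration by parts formula is an exact telescoping identity for any two paths sampled on the same grid. A sketch with two independent Brownian motions (seed and mesh are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 1.0, 50_000
dX = rng.normal(0.0, np.sqrt(T / n), n)
dY = rng.normal(0.0, np.sqrt(T / n), n)
X = np.concatenate(([0.0], np.cumsum(dX)))
Y = np.concatenate(([0.0], np.cumsum(dY)))

int_X_dY = np.sum(X[:-1] * dY)   # ∫₀ᵀ X dY (left-endpoint sums)
int_Y_dX = np.sum(Y[:-1] * dX)   # ∫₀ᵀ Y dX
covar = np.sum(dX * dY)          # realized [X, Y]; ≈ 0 here since X, Y are independent

# Discrete product rule (exact telescoping):
# X_T Y_T = ∫ X dY + ∫ Y dX + [X, Y].
print(X[-1] * Y[-1], int_X_dY + int_Y_dX + covar)
```

For independent Brownian motions the covariation term is small, but for X = Y it would equal the full quadratic variation ≈ T, which is exactly the term missing from the classical formula.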

Itô's lemma


Main article: Itô's lemma

Itô's lemma is the version of the chain rule or change of variables formula which applies to the Itô integral. It is one of the most powerful and frequently used theorems in stochastic calculus. For a continuous d-dimensional semimartingale X = (X¹, …, X^d) and twice continuously differentiable function f from R^d to R, it states that f(X) is a semimartingale and

f(X_t) = f(X₀) + Σ_i ∫₀ᵗ f_i(X_s) dX_sⁱ + ½ Σ_{i,j} ∫₀ᵗ f_{ij}(X_s) d[Xⁱ, Xʲ]_s.

This differs from the chain rule used in standard calculus due to the term involving the quadratic covariation [Xⁱ, Xʲ]. The formula can be generalized to non-continuous semimartingales by adding a pure jump term to ensure that the jumps of the left and right hand sides agree (see Itô's lemma).

Martingale integrators


Local martingales
An important property of the Itô integral is that it preserves the local martingale property. If M is a local martingale and H is a locally bounded predictable process then H·M is also a local martingale. For integrands which are not locally bounded, there are examples where H·M is not a local martingale. However, this can only occur when M is not continuous. If M is a continuous local martingale then a predictable process H is M-integrable if and only if ∫₀ᵗ H² d[M] is finite for each t, and H·M is always a local martingale. The most general statement for a discontinuous local martingale M is that if (H²·[M])^{1/2} is locally integrable then H·M exists and is a local martingale.

Square integrable martingales


For bounded integrands, the Itô stochastic integral preserves the space of square integrable martingales, which is the set of càdlàg martingales M such that E(M_t²) is finite for all t. For any such square integrable martingale M, the quadratic variation process [M] is integrable, and the Itô isometry states that

E[(H·M_t)²] = E[∫₀ᵗ H² d[M]].

This equality holds more generally for any martingale M such that H²·[M]_t is integrable. The Itô isometry is often used as an important step in the construction of the stochastic integral, by defining H·M to be the unique extension of this isometry from a certain class of simple integrands to all bounded and predictable processes.

p-Integrable martingales



For any p > 1, and bounded predictable integrand, the stochastic integral preserves the space of p-integrable martingales. These are càdlàg martingales such that E(|M_t|^p) is finite for all t. However, this is not always true in the case where p = 1. There are examples of integrals of bounded predictable processes with respect to martingales which are not themselves martingales. The maximum process of a càdlàg process M is written as M_t* = sup_{s≤t} |M_s|. For any p ≥ 1 and bounded predictable integrand, the stochastic integral preserves the space of càdlàg martingales M such that E((M_t*)^p) is finite for all t. If p > 1 then this is the same as the space of p-integrable martingales, by Doob's inequalities. The Burkholder–Davis–Gundy inequalities state that, for any given p ≥ 1, there exist positive constants c, C, depending on p but not on M or t, such that

c E([M]_t^{p/2}) ≤ E((M_t*)^p) ≤ C E([M]_t^{p/2})

for all càdlàg local martingales M. These are used to show that if (M_t*)^p is integrable and H is a bounded predictable process then

E((((H·M)_t)*)^p) < ∞

and, consequently, H·M is a p-integrable martingale. More generally, this statement is true whenever (H²·[M])^{p/2} is integrable.

Existence of the integral


Proofs that the Itô integral is well defined typically proceed by first looking at very simple integrands, such as piecewise constant, left-continuous and adapted processes, where the integral can be written explicitly. Such simple predictable processes are linear combinations of terms of the form H_t = A·1{t > T} for stopping times T and F_T-measurable random variables A, for which the integral is

H·X_t = A (X_t − X_{T∧t}).

This is extended to all simple predictable processes by the linearity of H·X in H. For a Brownian motion B, the property that it has independent increments with zero mean and variance Var(B_t) = t can be used to prove the Itô isometry for simple predictable integrands,

E[(∫₀ᵗ H dB)²] = E[∫₀ᵗ H_s² ds].

By a continuous linear extension, the integral extends uniquely to all predictable integrands satisfying E(∫₀ᵗ H² ds) < ∞, in such a way that the Itô isometry still holds. It can then be extended to all B-integrable processes by localization. This method allows the integral to be defined with respect to any Itô process.
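The explicit formula for a single simple integrand can be checked against the generic left-endpoint Riemann sum. In the sketch below (grid, seed, and the choice of T and A are illustrative; T is taken deterministic, which is in particular a stopping time, and lies on the grid):

```python
import numpy as np

rng = np.random.default_rng(5)
t_end, n = 1.0, 10_000
dB = rng.normal(0.0, np.sqrt(t_end / n), n)
B = np.concatenate(([0.0], np.cumsum(dB)))
times = np.arange(n + 1) / n                  # grid points t_i = i/n

T0, A = 0.25, 2.0   # deterministic stopping time and F_{T0}-measurable weight
# Simple predictable integrand H_t = A·1{t ≥ T0}; on the grid this differs
# from A·1{t > T0} at a single point, whose contribution vanishes in the limit.
H = A * (times[:-1] >= T0)

riemann = np.sum(H * dB)          # generic left-endpoint Riemann sum
idx = int(round(T0 * n))
explicit = A * (B[-1] - B[idx])   # closed form A (B_t − B_{T0})
print(riemann, explicit)
```

The two agree up to floating-point rounding, since both telescope to the same sum of increments.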

For a general semimartingale X, the decomposition X = M + A for a local martingale M and finite variation process A can be used. Then, the integral can be shown to exist separately with respect to M and A and combined using linearity, H·X = H·M + H·A, to get the integral with respect to X. The standard Lebesgue–Stieltjes integral allows integration to be defined with respect to finite variation processes, so the existence of the Itô integral for semimartingales will follow from any construction for local martingales. For a càdlàg square integrable martingale M, a generalized form of the Itô isometry can be used. First, the Doob–Meyer decomposition theorem is used to show that a decomposition M² = N + ⟨M⟩ exists, where N is a martingale and ⟨M⟩ is a right-continuous, increasing and predictable process starting at zero. This uniquely defines ⟨M⟩, which is referred to as the predictable quadratic variation of M. The Itô isometry for square integrable martingales is then

E[(H·M_t)²] = E[∫₀ᵗ H² d⟨M⟩],

which can be proved directly for simple predictable integrands. As with the case above for Brownian motion, a continuous linear extension can be used to extend uniquely to all predictable integrands satisfying E(H²·⟨M⟩_t) < ∞. This method can be extended to all local square integrable martingales by localization. Finally, the Doob–Meyer decomposition can be used to decompose any local martingale into the sum of a local square integrable martingale and a finite variation process, allowing the Itô integral to be constructed with respect to any semimartingale. Many other proofs exist which apply similar methods but which avoid the need to use the Doob–Meyer decomposition theorem, such as the use of the quadratic variation [M] in the Itô isometry, the use of the Doléans measure for submartingales, or the use of the Burkholder–Davis–Gundy inequalities instead of the Itô isometry. The latter applies directly to local martingales without having to first deal with the square integrable martingale case. Alternative proofs exist which only make use of the fact that X is càdlàg, adapted, and the set {H·X_t : |H| ≤ 1 is simple previsible} is bounded in probability for each time t, which is an alternative definition for X to be a semimartingale. A continuous linear extension can be used to construct the integral for all left-continuous and adapted integrands with right limits everywhere (càglàd or L-processes). This is general enough to be able to apply techniques such as Itô's lemma (Protter 2004). Also, a Khintchine inequality can be used to prove the dominated convergence theorem and extend the integral to general predictable integrands (Bichteler 2002).

Differentiation in Itô calculus


The Itô calculus is first and foremost defined as an integral calculus, as outlined above. However, there are also different notions of "derivative" with respect to Brownian motion:

Malliavin derivative


Malliavin calculus provides a theory of differentiation for random variables defined over Wiener space, including an integration by parts formula (Nualart 2006).

Martingale representation



The following result allows one to express martingales as Itô integrals: if M is a square-integrable martingale on a time interval [0, T] with respect to the filtration generated by a Brownian motion B, then there is a unique adapted square-integrable process α on [0, T] such that

M_t = M₀ + ∫₀ᵗ α_s dB_s

almost surely, and for all t ∈ [0, T] (Rogers & Williams 2000, Theorem 36.5). This representation theorem can be interpreted formally as saying that α is the "time derivative" of M with respect to Brownian motion B, since α is precisely the process that must be integrated up to time t to obtain M_t − M₀, as in deterministic calculus.

Itô calculus for physicists


In physics, usually stochastic differential equations, also called Langevin equations, are used, rather than general stochastic integrals. A physicist would formulate an Itô stochastic differential equation (SDE) as

ẋ_i = a_i(x) + b_{ij}(x) ξ_j(t),

where ξ_j(t) is Gaussian white noise and Einstein's summation convention is used. If y = y(x_i) is a function of the x_i, then Itô's lemma has to be used:

ẏ = (∂y/∂x_i) ẋ_i + ½ (∂²y/∂x_i ∂x_k) b_{ij} b_{kj}.

An Itô SDE as above also corresponds to a Stratonovich SDE, which reads

ẋ_i = a_i(x) − ½ b_{kj}(x) (∂b_{ij}/∂x_k) + b_{ij}(x) ξ_j(t).

SDEs frequently occur in physics in Stratonovich form, as limits of stochastic differential equations driven by colored noise if the correlation time of the noise term approaches zero. For a recent treatment of different interpretations of stochastic differential equations see for example (Lau & Lubensky 2007).
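The Itô-to-Stratonovich correspondence amounts to a drift correction: for a scalar Itô SDE dX = a(X) dt + b(X) dB, the equivalent Stratonovich drift is a − ½ b b′. A small sketch (the function names are my own, not a standard API):

```python
# Convert the drift of a scalar Itô SDE  dX = a(X) dt + b(X) dB
# into the drift of the equivalent Stratonovich SDE: a_strat = a − ½ b b′.
def stratonovich_drift(a, b, db_dx):
    """Return the Stratonovich drift for Itô coefficients a, b (b' = db_dx)."""
    return lambda x: a(x) - 0.5 * b(x) * db_dx(x)

# Example: geometric Brownian motion  dS = mu S dt + sigma S dB  (Itô).
mu, sigma = 0.05, 0.2
a_strat = stratonovich_drift(lambda s: mu * s,
                             lambda s: sigma * s,
                             lambda s: sigma)   # d(sigma·s)/ds = sigma

# The corrected drift is (mu − sigma²/2) S.
print(a_strat(1.0))   # 0.05 − 0.02 = 0.03
```

For GBM this reproduces the familiar μ − σ²/2 correction that also appears in the explicit solution of the Black–Scholes model.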

Itô's lemma
From Wikipedia, the free encyclopedia

In mathematics, Itô's lemma is used in Itô stochastic calculus to find the differential of a function of a particular type of stochastic process. It is named after its discoverer, Kiyoshi Itô. It is the stochastic calculus counterpart of the chain rule in ordinary calculus and is best memorized using the Taylor series expansion and retaining the second-order term related to the stochastic component change. The lemma is widely employed in mathematical finance and its best known application is in the derivation of the Black–Scholes equation used to value options. Itô's lemma is also currently referred to as the Itô–Doeblin theorem in recognition of the recently discovered work of Wolfgang Doeblin.[1]

Contents

1 Mathematical formulation of Itô's lemma
  1.1 Itô drift-diffusion processes
  1.2 Continuous semimartingales
  1.3 Poisson jump processes
  1.4 Non-continuous semimartingales
2 Informal derivation
3 Examples
  3.1 Geometric Brownian motion
  3.2 The Doléans exponential
  3.3 Black–Scholes formula
4 See also
5 Notes
6 References
7 External links

Mathematical formulation of Itô's lemma


In the following subsections we discuss versions of Itô's lemma for different types of stochastic processes.

Itô drift-diffusion processes


In its simplest form, Itô's lemma states the following: for an Itô drift-diffusion process

dX_t = μ_t dt + σ_t dB_t

and any twice differentiable function f(t, x) of two real variables t and x, one has

df(t, X_t) = (∂f/∂t + μ_t ∂f/∂x + ½ σ_t² ∂²f/∂x²) dt + σ_t (∂f/∂x) dB_t.

This immediately implies that f(t, X) is itself an Itô drift-diffusion process. In higher dimensions, Itô's lemma states

df(t, X_t) = (∂f/∂t) dt + (∇_X f)ᵀ dX_t + ½ (dX_t)ᵀ (H_X f) dX_t,


where X is a vector of Itô processes, ∂f/∂t is the partial derivative of f with respect to t, ∇_X f is the gradient of f with respect to X, and H_X f is the Hessian matrix of f with respect to X.

Continuous semimartingales


More generally, the above formula also holds for any continuous d-dimensional semimartingale X = (X¹, X², …, X^d) and twice continuously differentiable, real-valued function f on R^d. Some people prefer to present the formula in another form, with the cross variation shown explicitly as follows: f(X) is a semimartingale satisfying

df(X_t) = Σ_i f_i(X_t) dX_tⁱ + ½ Σ_{i,j} f_{ij}(X_t) d[Xⁱ, Xʲ]_t.

In this expression, the term f_i represents the partial derivative of f(x) with respect to x_i, and [Xⁱ, Xʲ] is the quadratic covariation process of Xⁱ and Xʲ.

Poisson jump processes


We may also define functions on discontinuous stochastic processes. Let h be the jump intensity. The Poisson process model for jumps is that the probability of one jump in the interval [t, t + Δt] is h Δt plus higher-order terms. h could be a constant, a deterministic function of time, or a stochastic process. The survival probability p_s(t) is the probability that no jump has occurred in the interval [0, t]. The change in the survival probability is

dp_s(t) = −p_s(t) h(t) dt.

So

p_s(t) = exp(−∫₀ᵗ h(u) du).

Let S(t) be a discontinuous stochastic process. Write S(t⁻) for the value of S as we approach t from the left. Write d_j S(t) for the non-infinitesimal change in S(t) as a result of a jump. Then

d_j S(t) = lim_{Δt→0} (S(t + Δt) − S(t⁻)).

Let z be the magnitude of the jump and let η(S(t⁻), z) be the distribution of z. The expected magnitude of the jump is

E[d_j S(t)] = h(S(t⁻)) dt ∫_z z η(S(t⁻), z) dz.

Define dJ_S(t), a compensated process and martingale, as

dJ_S(t) = d_j S(t) − E[d_j S(t)] = S(t) − S(t⁻) − h(S(t⁻)) (∫_z z η(S(t⁻), z) dz) dt.

Then

d_j S(t) = E[d_j S(t)] + dJ_S(t) = h(S(t⁻)) (∫_z z η(S(t⁻), z) dz) dt + dJ_S(t).

Consider a function g(S(t), t) of the jump process S(t). If S(t) jumps by Δs then g(t) jumps by Δg. Δg is drawn from a distribution η_g(·), which may depend on g(t⁻), dg and S(t⁻). The jump part of g is

g(t) − g(t⁻) = h(t) dt ∫_{Δg} Δg η_g(·) dΔg + dJ_g(t).

If S contains drift, diffusion and jump parts, then Itô's lemma for g(S(t), t) is

dg(t) = (∂g/∂t + μ ∂g/∂S + ½ σ² ∂²g/∂S²) dt + σ (∂g/∂S) dW(t) + h(t) (∫_{Δg} Δg η_g(·) dΔg) dt + dJ_g(t).

Itô's lemma for a process which is the sum of a drift-diffusion process and a jump process is just the sum of Itô's lemma for the individual parts.

Non-continuous semimartingales


Itô's lemma can also be applied to general d-dimensional semimartingales, which need not be continuous. In general, a semimartingale is a càdlàg process, and an additional term needs to be added to the formula to ensure that the jumps of the process are correctly given by Itô's lemma. For any càdlàg process Y_t, the left limit in t is denoted by Y_{t−}, which is a left-continuous process. The jumps are written as ΔY_t = Y_t − Y_{t−}. Then, Itô's lemma states that if X = (X¹, X², …, X^d) is a d-dimensional semimartingale and f is a twice continuously differentiable real-valued function on R^d then f(X) is a semimartingale, and

f(X_t) = f(X₀) + Σ_i ∫₀ᵗ f_i(X_{s−}) dX_sⁱ + ½ Σ_{i,j} ∫₀ᵗ f_{ij}(X_{s−}) d[Xⁱ, Xʲ]_s + Σ_{s≤t} (Δf(X_s) − Σ_i f_i(X_{s−}) ΔX_sⁱ − ½ Σ_{i,j} f_{ij}(X_{s−}) ΔX_sⁱ ΔX_sʲ).

This differs from the formula for continuous semimartingales by the additional term summing over the jumps of X, which ensures that the jump of the right-hand side at time t is Δf(X_t).


Informal derivation


A formal proof of the lemma requires taking the limit of a sequence of random variables, which is not done here. Instead, we give a sketch of how one can derive Itô's lemma by expanding a Taylor series and applying the rules of stochastic calculus. Assume the Itô process is of the form

dx = a dt + b dB.

Expanding f(x, t) in a Taylor series in x and t, we have

df = (∂f/∂x) dx + (∂f/∂t) dt + ½ (∂²f/∂x²) dx² + ⋯

and substituting a dt + b dB for dx gives

df = (∂f/∂x)(a dt + b dB) + (∂f/∂t) dt + ½ (∂²f/∂x²)(a² dt² + 2ab dt dB + b² dB²) + ⋯.

In the limit as dt tends to 0, the dt² and dt dB terms disappear but the dB² term tends to dt; the latter follows because E(dB²) = dt while the fluctuations of dB² around dt are of higher order in dt. Deleting the dt² and dt dB terms, substituting dt for dB², and collecting the dt and dB terms, we obtain

df = (∂f/∂t + a (∂f/∂x) + ½ b² (∂²f/∂x²)) dt + b (∂f/∂x) dB

as required. The formal proof is somewhat technical and is beyond the scope of this article.
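The key substitution dB² → dt reflects the fact that the realized quadratic variation Σ(ΔB)² concentrates around t as the mesh shrinks, while Σ(Δt)² and Σ Δt ΔB vanish. A numerical sketch (seed and mesh are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
T, n = 1.0, 100_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)

sum_dB2 = np.sum(dB ** 2)     # → T   (each term is of order dt)
sum_dt2 = n * dt ** 2         # → 0   (each term is of order dt²)
sum_dtdB = dt * np.sum(dB)    # → 0   (each term is of order dt^{3/2})
print(sum_dB2, sum_dt2, sum_dtdB)
```

Only the dB² sum survives in the limit, which is exactly why the second-order Taylor term contributes a first-order dt term in Itô's lemma.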

Examples

Geometric Brownian motion

A process S is said to follow a geometric Brownian motion with volatility σ and drift μ if it satisfies the stochastic differential equation dS = S(σ dB + μ dt), for a Brownian motion B. Applying Itô's lemma with f(S) = log(S) gives

d log(S) = σ dB + (μ − σ²/2) dt.


It follows that log(S_t) = log(S₀) + σB_t + (μ − σ²/2)t, and exponentiating gives the expression for S,

S_t = S₀ exp(σB_t + (μ − σ²/2)t).

The Doléans exponential


The Doléans exponential (or stochastic exponential) of a continuous semimartingale X can be defined as the solution to the SDE dY = Y dX with initial condition Y₀ = 1. It is sometimes denoted by Ɛ(X). Applying Itô's lemma with f(Y) = log(Y) gives

d log(Y) = (1/Y) dY − (1/2Y²) d[Y] = dX − ½ d[X].

Exponentiating gives the solution

Y_t = exp(X_t − X₀ − ½ [X]_t).
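For X = B a Brownian motion (so [B]_t = t) the stochastic exponential is exp(B_t − t/2), a martingale with mean 1. A Monte Carlo sketch (seed and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
t = 1.0
B_t = rng.normal(0.0, np.sqrt(t), 200_000)   # samples of B_t

# Stochastic exponential of Brownian motion: exp(B_t − t/2), since [B]_t = t.
# It is a martingale, so its mean stays at 1 for all t.
Y = np.exp(B_t - 0.5 * t)
print(Y.mean())   # ≈ 1
```

Without the −t/2 correction, the mean of exp(B_t) would be e^{t/2}, not 1, which is another manifestation of the quadratic variation term.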

Black–Scholes formula


Itô's lemma can be used to derive the Black–Scholes formula for an option. Suppose a stock price follows a geometric Brownian motion given by the stochastic differential equation dS = S(σ dB + μ dt). Then, if the value of an option at time t is f(t, S_t), Itô's lemma gives

df(t, S_t) = (∂f/∂t + ½ σ²S² ∂²f/∂S²) dt + (∂f/∂S) dS.

The term (∂f/∂S) dS represents the change in value in time dt of the trading strategy consisting of holding an amount ∂f/∂S of the stock. If this trading strategy is followed, and any cash held is assumed to grow at the risk-free rate r, then the total value V of this portfolio satisfies the SDE

dV = r(V − S ∂f/∂S) dt + (∂f/∂S) dS.

This strategy replicates the option if V = f(t, S). Combining these equations gives the celebrated Black–Scholes equation

∂f/∂t + ½ σ²S² ∂²f/∂S² + rS ∂f/∂S − rf = 0.
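For a European call this PDE has the well-known closed-form solution. The sketch below implements the standard Black–Scholes formula (parameter values are illustrative) and obtains the put price via put-call parity:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black–Scholes price of a European call with strike K, maturity T."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def black_scholes_put(S, K, r, sigma, T):
    """European put via put-call parity: P = C − S + K e^{−rT}."""
    return black_scholes_call(S, K, r, sigma, T) - S + K * exp(-r * T)

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
C = black_scholes_call(S, K, r, sigma, T)
P = black_scholes_put(S, K, r, sigma, T)
print(C, P)   # ≈ 10.45 and ≈ 5.57 for these parameters
```

Note that μ does not appear in the formula: the replication argument above eliminates the drift, leaving only the risk-free rate r and the volatility σ.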


Geometric Brownian motion



Two sample paths of geometric Brownian motion, with different parameters.

A geometric Brownian motion (GBM) (also known as exponential Brownian motion) is a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion,[1] also called a Wiener process. It is used in mathematical finance to model stock prices in the Black–Scholes model.

Contents

1 Technical definition
2 Properties of GBM
3 Use of GBM in finance
4 Extensions of GBM
5 See also
6 References
7 External links

Technical definition


A stochastic process S_t is said to follow a GBM if it satisfies the following stochastic differential equation (SDE):

dS_t = μ S_t dt + σ S_t dW_t,

where W_t is a Wiener process or Brownian motion and μ ('the percentage drift') and σ ('the percentage volatility') are constants.

Properties of GBM


For an arbitrary initial value S₀ the above SDE has the analytic solution (under Itô's interpretation):

S_t = S₀ exp((μ − σ²/2)t + σW_t),

which is (for any value of t) a log-normally distributed random variable with expected value and variance given by[2]

E(S_t) = S₀ e^{μt},
Var(S_t) = S₀² e^{2μt} (e^{σ²t} − 1);

that is, the probability density function of S_t is:

f(s; μ, σ, t) = (1 / (s σ √(2πt))) exp(−(ln s − ln S₀ − (μ − σ²/2)t)² / (2σ²t)).
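These moment formulas are easy to probe by sampling the explicit solution (parameter values, seed, and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
S0, mu, sigma, t = 1.0, 0.05, 0.2, 1.0

W_t = rng.normal(0.0, np.sqrt(t), 200_000)
# Explicit GBM solution: S_t = S0 exp((mu − sigma²/2) t + sigma W_t).
S_t = S0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * W_t)

print(S_t.mean())   # ≈ S0 e^{mu t} ≈ 1.0513
print(S_t.var())    # ≈ S0² e^{2 mu t}(e^{sigma² t} − 1) ≈ 0.0451
```

Note that the mean grows at rate μ, not μ − σ²/2: the σ²/2 correction in the exponent is exactly offset by the convexity of the exponential (Jensen's inequality).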

The correctness of this solution can be checked using Itô's lemma. When deriving further properties of GBM, use can be made of the SDE of which GBM is the solution, or the explicit solution given above can be used. For example, consider the stochastic process log(S_t). This is an interesting process, because in the Black–Scholes model it is related to the log return of the stock price. Using Itô's lemma with f(S) = log(S) gives

d log(S_t) = (μ − σ²/2) dt + σ dW_t.

It follows that

E(log(S_t)) = log(S₀) + (μ − σ²/2)t.

This result can also be derived by applying the logarithm to the explicit solution of GBM:

log(S_t) = log(S₀) + (μ − σ²/2)t + σW_t.

Taking the expectation yields the same result as above: E(log(S_t)) = log(S₀) + (μ − σ²/2)t.

Stochastic differential equation



A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is itself a stochastic process. SDEs are used to model diverse phenomena such as fluctuating stock prices or physical systems subject to thermal fluctuations. Typically, SDEs incorporate white noise, which can be thought of as the derivative of Brownian motion (or the Wiener process); however, it should be mentioned that other types of random fluctuations are possible, such as jump processes.

Contents

1 Background
  1.1 Terminology
  1.2 Stochastic Calculus
  1.3 Numerical Solutions
2 Use in Physics
  2.1 Note on "the Langevin equation"
3 Use in probability and mathematical finance
4 Existence and uniqueness of solutions
5 See also
6 References

Background

The earliest work on SDEs was done to describe Brownian motion in Einstein's famous paper, and at the same time by Smoluchowski. However, one of the earlier works related to Brownian motion is credited to Bachelier (1900) in his thesis 'Theory of Speculation'. This work was followed up by Langevin. Later, Itô and Stratonovich put SDEs on more solid mathematical footing.

Terminology
In physical science, SDEs are usually written as Langevin equations. These are sometimes confusingly called "the Langevin equation" even though there are many possible forms. These consist of an ordinary differential equation containing a deterministic part and an additional random white noise term. A second form is the Smoluchowski equation and, more generally, the Fokker-Planck equation. These are partial differential equations that describe the time evolution of probability distribution functions. The third form is the stochastic differential equation that is used most frequently in mathematics and quantitative finance (see below). This is similar to the Langevin form, but it is usually written in differential form. SDEs come in two varieties, corresponding to two versions of stochastic calculus.

Stochastic Calculus


Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and newcomers are often confused whether one is more appropriate than the other in a given situation. Guidelines exist (e.g. Øksendal, 2003) and, conveniently, one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down.

Numerical Solutions


Numerical solution of stochastic differential equations, and especially stochastic partial differential equations, is a relatively young field. Almost all algorithms that are used for the solution of ordinary differential equations will work very poorly for SDEs, having very poor numerical convergence. A textbook describing many different algorithms is Kloeden & Platen (1995). Methods include the Euler–Maruyama method, the Milstein method and the Runge–Kutta method (SDE).
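The Euler–Maruyama scheme is the simplest of these: each step advances X by a(X) Δt + b(X) ΔB. A minimal sketch (an illustration of the scheme, not a library API; function and parameter names are mine):

```python
import numpy as np

def euler_maruyama(a, b, x0, T, n, rng=None):
    """Euler–Maruyama scheme for the Itô SDE  dX = a(X) dt + b(X) dB."""
    rng = rng or np.random.default_rng()
    dt = T / n
    x = x0
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x = x + a(x) * dt + b(x) * dB
    return x

# Sanity check: with zero noise the scheme reduces to the ordinary Euler
# method for dX = −X dt, whose solution is e^{−t}.
x_end = euler_maruyama(lambda x: -x, lambda x: 0.0, 1.0, 1.0, 10_000)
print(x_end)   # ≈ e^{−1} ≈ 0.3679
```

For genuinely stochastic b the scheme has strong order of convergence 1/2 in general; the Milstein method improves this to order 1 by adding a term involving b b′.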

Use in Physics


In physics, SDEs are typically written in the Langevin form and referred to as "the Langevin equation." For example, a general coupled set of first-order SDEs is often written in the form

    \frac{dx_i}{dt} = f_i(x) + \sum_m g_i^m(x)\,\eta_m(t),

where x = \{x_i\} is the set of unknowns, the f_i and g_i^m are arbitrary functions, and the \eta_m are random functions of time, often referred to as "noise terms". This form is generally applicable because there are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. If the g_i^m are constants, the system is said to be subject to additive noise; otherwise it is said to be subject to multiplicative noise. The latter term is somewhat misleading, as it has come to mean the general case even though it appears to imply the limited case g(x) \propto x.

Additive noise is the simpler of the two cases; in that situation the correct solution can often be found using ordinary calculus, in particular the ordinary chain rule. In the case of multiplicative noise, however, the Langevin equation is not a well-defined entity on its own, and it must be specified whether it should be interpreted as an Itô SDE or a Stratonovich SDE.

In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker-Planck equation (FPE). The Fokker-Planck equation is a deterministic partial differential equation: it tells how the probability distribution function evolves in time, much as the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of a chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include path integration, which draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker-Planck equation can be transformed into the Schrödinger equation by rescaling a few variables), and writing down ordinary differential equations for the statistical moments of the probability distribution function.[citation needed]
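For reference, in the scalar Itô-interpreted case the Fokker-Planck equation associated with dX_t = f(X_t) dt + g(X_t) dB_t governs the probability density p(x, t) of X_t; the notation here is illustrative:

```latex
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[f(x)\,p(x,t)\bigr]
  + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[g(x)^2\,p(x,t)\bigr]
```

The first term transports probability along the deterministic drift, while the second diffuses it at a rate set by the squared noise coefficient.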


Note on "the Langevin equation"


The "the" in "the Langevin equation" is somewhat ungrammatical nomenclature: each individual physical model has its own Langevin equation. Perhaps "a Langevin equation" or "the associated Langevin equation" would conform better with common English usage.

Use in probability and mathematical finance


The notation used in probability theory (and in many applications of probability theory, for instance mathematical finance) is slightly different. This notation makes the exotic nature of the random function of time \eta in the physics formulation more explicit. It is also the notation used in publications on numerical methods for solving stochastic differential equations. In strict mathematical terms, \eta cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation. A typical equation is of the form

    dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dB_t,

where B_t denotes a Wiener process (standard Brownian motion). This equation should be interpreted as an informal way of expressing the corresponding integral equation

    X_{t+s} - X_t = \int_t^{t+s} \mu(X_u, u)\,du + \int_t^{t+s} \sigma(X_u, u)\,dB_u.
The equation above characterizes the behavior of the continuous-time stochastic process X_t as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length \delta the stochastic process X_t changes its value by an amount that is normally distributed with expectation \mu(X_t, t)\,\delta and variance \sigma(X_t, t)^2\,\delta, independently of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function \mu is referred to as the drift coefficient, while \sigma is called the diffusion coefficient. The stochastic process X_t is called a diffusion process, and is usually a Markov process.

The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process X_t that solves the integral-equation version of the SDE. The difference between the two lies in the underlying probability space (\Omega, F, Pr). A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space.

An important example is the equation for geometric Brownian motion

    dX_t = \mu X_t\,dt + \sigma X_t\,dB_t,

which is the equation for the dynamics of the price of a stock in the Black–Scholes options-pricing model of financial mathematics. There are also more general stochastic differential equations in which the coefficients \mu and \sigma depend not only on the present value of the process X_t, but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X, is not a Markov process, and it is called an Itô process rather than a diffusion process. When the coefficients depend only on present and past values of X, the defining equation is called a stochastic delay differential equation.
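Geometric Brownian motion has the well-known closed-form solution X_t = X_0 exp((\mu - \sigma^2/2) t + \sigma B_t), which makes it a convenient test case for the numerical methods mentioned earlier. The Python sketch below (parameter values are illustrative) runs Euler–Maruyama on a simulated Brownian path and compares the result against the exact solution driven by the same path.

```python
import math
import random

# Geometric Brownian motion dX = mu*X dt + sigma*X dB has the closed
# form X_t = X_0 * exp((mu - sigma**2 / 2) * t + sigma * B_t).
# Parameter values below are illustrative, not from the text.
mu, sigma, x0, t_end, n = 0.05, 0.2, 100.0, 1.0, 100_000
dt = t_end / n

rng = random.Random(7)
x_em, b = x0, 0.0
for _ in range(n):
    db = rng.gauss(0.0, math.sqrt(dt))     # shared Wiener increment
    x_em += mu * x_em * dt + sigma * x_em * db  # Euler-Maruyama step
    b += db                                 # accumulate B_t on the same path

x_exact = x0 * math.exp((mu - sigma ** 2 / 2) * t_end + sigma * b)
rel_err = abs(x_em - x_exact) / x_exact
```

Because both trajectories use the same Brownian path, the discrepancy measures only the discretization error of the scheme, which shrinks as the step size decreases.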

Existence and uniqueness of solutions


As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n-dimensional Euclidean space R^n and driven by an m-dimensional Brownian motion B; the proof may be found in Øksendal (2003, §5.2). Let T > 0, and let

    \mu : [0, T] \times \mathbf{R}^n \to \mathbf{R}^n, \quad \sigma : [0, T] \times \mathbf{R}^n \to \mathbf{R}^{n \times m}

be measurable functions for which there exist constants C and D such that

    |\mu(t, x)| + |\sigma(t, x)| \le C (1 + |x|),
    |\mu(t, x) - \mu(t, y)| + |\sigma(t, x) - \sigma(t, y)| \le D |x - y|

for all t \in [0, T] and all x, y \in \mathbf{R}^n, where

    |\sigma|^2 = \sum_{i=1}^{n} \sum_{j=1}^{m} |\sigma_{ij}|^2.

Let Z be a random variable that is independent of the \sigma-algebra generated by B_s, s \ge 0, and with finite second moment:

    \mathbf{E}[|Z|^2] < +\infty.

Then the stochastic differential equation/initial value problem

    dX_t = \mu(t, X_t)\,dt + \sigma(t, X_t)\,dB_t, \quad 0 \le t \le T; \qquad X_0 = Z,

has a Pr-almost surely unique t-continuous solution (t, \omega) \mapsto X_t(\omega) such that X is adapted to the filtration F_t^Z generated by Z and B_s, s \le t, and

    \mathbf{E}\left[ \int_0^T |X_t|^2 \, dt \right] < +\infty.
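As a quick illustration of how these hypotheses are checked (the constants \alpha and \beta here are hypothetical), the geometric Brownian motion coefficients \mu(t, x) = \alpha x and \sigma(t, x) = \beta x satisfy both bounds with C = D = |\alpha| + |\beta|:

```latex
|\alpha x| + |\beta x| \le (|\alpha| + |\beta|)\,(1 + |x|)          % linear growth
|\alpha x - \alpha y| + |\beta x - \beta y| = (|\alpha| + |\beta|)\,|x - y|  % Lipschitz
```

so the theorem guarantees a unique t-continuous solution on [0, T] for any square-integrable initial condition independent of the driving Brownian motion.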
