
Cholesky Decomposition

It is not too hard to simulate (i.e., to model) random variables. In Excel, for example, we can use =NORMSINV(RAND()) to create standard random normal variables. The RAND() function draws from a uniform distribution bounded by 0 and 1. NORMSINV() translates the random number into the z-value that corresponds to the probability given by a cumulative distribution. For example, =NORMSINV(5%) returns -1.645 because 5% of the area under a normal curve lies to the left of -1.645 standard deviations.

But no realistic asset or portfolio contains only one risk factor. To model several risk factors, we could simply generate multiple random variables. Put more technically, the realistic modeling scenario is a multivariate distribution function that models multiple random variables. The problem with this approach, if we just stop there, is that correlations are not included. What we really want is to simulate random variables in such a way that we capture, or reflect, the correlations between the variables. In short, we want random but correlated variables.

The typical way to incorporate the correlation structure is by way of a Cholesky decomposition (or factorization). For FRM candidates, Jorion briefly touches on the Cholesky factorization in the 4th Edition FRM Handbook (pages 99 to 100); but if you are not familiar with matrix math, this may not be a sufficient introduction.

In the EditGrid spreadsheet below, I performed a Cholesky decomposition for a simple three-asset case. This can be viewed separately or opened into a new sheet if you would like to edit it yourself. Please note: the decomposition below is not the endgame; it is a step along the way. It produces a matrix that we can use to generate returns that are random but correlated. The sheet below has four small sections; each step is numbered in green. The lower triangle (L) is the result of the Cholesky decomposition. It is the thing we can use to simulate random variables, and it is itself "informed" by our covariance matrix.
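For readers who would rather see the recipe in code than in a spreadsheet, here is a minimal Python/NumPy sketch of the same idea. The three-asset covariance matrix is invented purely for illustration, and numpy.linalg.cholesky stands in for the decomposition performed in the sheet below.

```python
import numpy as np

# Illustrative three-asset covariance matrix (made-up numbers)
cov = np.array([[0.040, 0.012, 0.010],
                [0.012, 0.090, 0.018],
                [0.010, 0.018, 0.160]])

rng = np.random.default_rng(seed=42)

# Decompose the covariance matrix into its lower triangle L, so cov = L L'
L = np.linalg.cholesky(cov)

# Draw independent standard normals: the NORMSINV(RAND()) step, vectorized
z = rng.standard_normal(size=(3, 100_000))

# Multiplying by L imposes the correlation structure on the random draws
correlated = L @ z

# Sanity check: the sample covariance should approximate the input matrix
print(np.round(np.cov(correlated), 3))
```

The single line `correlated = L @ z` is the whole point: independent draws go in, correlated draws come out.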

1. The covariance matrix. This contains the implied correlation structure; in fact, a covariance matrix can itself be decomposed into a correlation matrix and a volatility vector. The covariance matrix (R) will be decomposed into a lower-triangle matrix (L) and an upper-triangle matrix (U). Note they are mirrors of each other: both have identical diagonals, and their zero and nonzero elements are merely "flipped".

2.–3. Given that R = LU, we can solve for all of the matrix elements: a, b, c (the diagonal) and x, y, z. Note that R = LU holds by definition; that's what a Cholesky decomposition is: the solution that produces two triangular matrices whose product is the original (covariance) matrix. (Note: if you play with the variables, it is possible to produce an error. The matrix must be 'positive definite', so not all matrices can be decomposed this way.)
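To see that error concretely outside the spreadsheet, the sketch below (illustrative, using NumPy) hands an indefinite matrix to the decomposition:

```python
import numpy as np

# Symmetric but indefinite (eigenvalues 3 and -1), so no Cholesky factor exists
bad = np.array([[1.0, 2.0],
                [2.0, 1.0]])

try:
    np.linalg.cholesky(bad)
except np.linalg.LinAlgError as err:
    print("Decomposition failed:", err)  # NumPy rejects non-positive-definite input
```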

4. Given the solution for the matrix elements, I calculated the product of the triangular matrices to ensure the product does equal the original covariance matrix (i.e., does LU = R?). Note, in Excel a single array formula can be used with =MMULT(); in EditGrid, it is just a set of MMULT() formulas.
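In code, the step-4 check collapses to one line. A sketch, reusing the kind of illustrative covariance matrix shown above:

```python
import numpy as np

R = np.array([[0.040, 0.012, 0.010],   # illustrative covariance matrix
              [0.012, 0.090, 0.018],
              [0.010, 0.018, 0.160]])

L = np.linalg.cholesky(R)  # lower triangle
U = L.T                    # upper triangle: the mirror image of L

# Does LU reproduce R? (the code analogue of the MMULT() check)
print(np.allclose(L @ U, R))  # True
```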


Cholesky Matrix

If we think of matrices as multi-dimensional generalizations of numbers, we may draw useful analogies between numbers and matrices. Not least of these is an analogy between positive numbers and positive definite matrices. Just as we can take square roots of positive numbers, so can we take "square roots" of positive definite matrices.
Positive Definite Matrices

A symmetric matrix x is said to be:

- positive definite if bxb' > 0 for all row vectors b ≠ 0;
- positive semidefinite if bxb' ≥ 0 for all row vectors b;
- negative definite if bxb' < 0 for all row vectors b ≠ 0;
- negative semidefinite if bxb' ≤ 0 for all row vectors b;
- indefinite if none of the above hold.

These definitions may seem abstruse, but they lead to an intuitively appealing result. A symmetric matrix x is:

- positive definite if all its eigenvalues are real and positive;
- positive semidefinite if all its eigenvalues are real and nonnegative;
- negative definite if all its eigenvalues are real and negative;
- negative semidefinite if all its eigenvalues are real and nonpositive;
- indefinite if none of the above hold.

It is useful to think of positive definite matrices as analogous to positive numbers and positive semidefinite matrices as analogous to nonnegative numbers. The essential difference between semidefinite matrices and their definite analogues is that the former can be singular whereas the latter cannot. This follows because a matrix is singular if and only if it has a 0 eigenvalue.
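The eigenvalue characterization translates directly into a test. Here is a sketch in NumPy; the function name, tolerance, and sample matrices are all arbitrary choices for illustration.

```python
import numpy as np

def classify(x: np.ndarray, tol: float = 1e-12) -> str:
    """Classify a symmetric matrix by the signs of its (real) eigenvalues."""
    eig = np.linalg.eigvalsh(x)  # eigenvalues of a symmetric matrix are real
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semidefinite"
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify(np.array([[2.0, 1.0], [1.0, 2.0]])))  # positive definite
print(classify(np.array([[1.0, 1.0], [1.0, 1.0]])))  # positive semidefinite (singular)
print(classify(np.array([[0.0, 1.0], [1.0, 0.0]])))  # indefinite
```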
Matrix "Square Roots"

Nonnegative numbers have real square roots. Negative numbers do not. An analogous result holds for matrices. Any positive semidefinite matrix h can be factored in the form h = kk' for some real square matrix k, which we may think of as a matrix square root of h. The matrix k is not unique, so multiple factorizations of a given matrix h are possible. This is analogous to the fact that square roots of positive numbers are not unique either. If h is nonsingular (positive definite), k will be nonsingular. If h is singular, k will be singular.
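To make the non-uniqueness concrete, the sketch below computes two different "square roots" of the same (arbitrarily chosen) positive definite matrix: the Cholesky factor and the symmetric root from the eigendecomposition.

```python
import numpy as np

h = np.array([[5.0, 2.0],
              [2.0, 3.0]])  # an arbitrary positive definite matrix

# Square root #1: the lower-triangular Cholesky factor
k1 = np.linalg.cholesky(h)

# Square root #2: the symmetric root built from the eigendecomposition
w, v = np.linalg.eigh(h)
k2 = v @ np.diag(np.sqrt(w)) @ v.T

# Different matrices, but both satisfy h = k k'
print(np.allclose(k1 @ k1.T, h), np.allclose(k2 @ k2.T, h))  # True True
```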
Cholesky Factorization

A particularly easy factorization to perform is one known as the Cholesky factorization. Any positive semidefinite matrix h has a factorization of the form h = gg', where g is a lower triangular matrix. Solving for g is straightforward. Suppose we wish to factor the positive definite matrix

[1]

A Cholesky factorization takes the form

[2]

By inspection, we have the first diagonal element g1,1. Also by inspection, since we already have g1,1, we conclude the value of g2,1. Proceeding in this manner, we obtain a matrix g in 6 steps:

Our Cholesky matrix is

[3]

The above example illustrates a Cholesky algorithm, which generalizes for higher-dimensional matrices. Our algorithm entails two types of calculations:

1. Calculating diagonal elements gi,i (steps 1, 4, and 6) entails taking a square root.

2. Calculating off-diagonal elements gi,j, i > j (steps 2, 3, and 5), entails dividing some number by the last-calculated diagonal element.

For a positive definite matrix h, all diagonal elements will be nonzero. Solving for each entails taking the square root of a nonnegative number. We may take either the positive or negative root. Standard practice is to take only positive roots. Defined in this manner, the Cholesky matrix of a positive definite matrix is unique.

The same algorithm applies for singular positive semidefinite matrices h, but the result is not generally called a Cholesky matrix; this is just an issue of terminology. When the algorithm is applied to a singular h, at least one diagonal element equals 0. If only the last diagonal element equals 0, we can obtain g as we did in our example. If some other diagonal element equals 0, some off-diagonal elements will be indeterminate. We can set such indeterminate values equal to any value within an interval [-a, a], for some a ≥ 0. Consider the matrix
[4]

Performing the first four steps of our algorithm above, we obtain

[5]

In the fifth step, we multiply the second row of g by the third column of g' to obtain

[6]

We already know that g2,2 = 0, so we have

[7] [8]

which provides us with no means of determining g3,2. It is indeterminate, so we set it equal to a variable x and proceed with the algorithm. We obtain

[9]

For the element g3,3 to be real, we can set x equal to any value in the interval [-3, 3]. The interval of acceptable values for indeterminate components will vary, but it will always include 0. For this reason, it is standard practice to set all indeterminate values equal to 0. With this selection, we obtain
[10]

We can leave g in this form, or we can delete the second column, which contains only 0's. The resulting 3 × 2 matrix provides a valid factorization of h, since

[11]

If a matrix h is not positive semidefinite, our Cholesky algorithm will, at some point, attempt to take a square root of a negative number and fail. Accordingly, the Cholesky algorithm is a means of testing if a matrix is positive semidefinite.
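The sketch below implements the algorithm as described: square roots on the diagonal, divisions off the diagonal, indeterminate elements set to 0, and failure reported when a negative number appears under the root. The function name and tolerance are my own choices, not part of the original exposition.

```python
import numpy as np

def cholesky_psd(h: np.ndarray, tol: float = 1e-12):
    """Run the Cholesky algorithm on symmetric h.

    Returns the lower-triangular g with h = g g', or None when the
    algorithm would take the square root of a negative number
    (i.e., h is not positive semidefinite)."""
    n = h.shape[0]
    g = np.zeros_like(h, dtype=float)
    for j in range(n):
        # Diagonal element: take the (positive) square root
        d = h[j, j] - g[j, :j] @ g[j, :j]
        if d < -tol:
            return None  # negative number under the root: h is not PSD
        g[j, j] = np.sqrt(max(d, 0.0))
        # Off-diagonal elements: divide by the last-calculated diagonal
        for i in range(j + 1, n):
            s = h[i, j] - g[i, :j] @ g[j, :j]
            if g[j, j] > tol:
                g[i, j] = s / g[j, j]
            else:
                g[i, j] = 0.0  # indeterminate: standard practice is to choose 0
    return g

h = np.array([[4.0, 2.0],
              [2.0, 2.0]])
g = cholesky_psd(h)
print(g is not None and np.allclose(g @ g.T, h))  # True: h passes the test
```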
Computational Issues

In exact arithmetic, the Cholesky algorithm will run to completion with all diagonal elements > 0 if and only if the matrix h is positive definite. It will run to completion with all diagonal elements ≥ 0 and at least one diagonal element = 0 if and only if the matrix h is singular positive semidefinite.

Things are more complicated if arithmetic is performed with rounding, as is done on a computer. Off-diagonal elements are obtained by dividing by diagonal elements. If a diagonal element is close to 0, any roundoff error may be magnified in such a division. For example, if a diagonal element should be .00000001, but roundoff error causes it to be calculated as .00000002, division by this number will yield an off-diagonal element that is half of what it should be. An algorithm is said to be unstable if roundoff error can be magnified in this way or if it can cause the algorithm to fail. The Cholesky algorithm is unstable for singular positive semidefinite matrices h. It is also unstable for positive definite matrices h that have one or more eigenvalues close to 0.
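A quick numerical illustration of this sensitivity, using an almost-singular positive definite matrix; the numbers are chosen only to exaggerate the effect.

```python
import numpy as np

eps = 1e-8
h = np.array([[1.0, 1.0],
              [1.0, 1.0 + eps]])  # positive definite, but nearly singular

g = np.linalg.cholesky(h)
print(g[1, 1])  # ~1.0e-4: a diagonal element close to 0

# Nudge one entry by 5e-9 -- tiny relative to the entries themselves
h2 = h.copy()
h2[1, 1] -= 0.5 * eps
g2 = np.linalg.cholesky(h2)
print(g2[1, 1])  # ~0.7e-4: about a 30% change from a near-invisible perturbation
```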
