
Maximum Likelihood Estimation

Maximum Likelihood estimation produces the same estimates as OLS in large samples, so in general the results can be interpreted in the same way as those produced by OLS. For example, suppose there is a random variable with an unknown mean and variance which follows the normal distribution, where the probability density function of the normal distribution is:

f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}
Where \mu is the unknown mean of the random variable x and \sigma is its standard deviation. This produces the usual normal distribution with a peak at \mu, which can be used to determine the probability density for different values of x, given its mean value. Given a particular value for the mean, by calculating the probability density at specific values such as x = 4 and x = 6, it is then possible to form the likelihood function, which is simply the joint probability density of the two observations. By varying the value of \mu, we obtain a different likelihood value for each candidate mean. These can then be plotted, and we choose the value of \mu that maximises this function. This can also be generalised to more than two observations, although for simplicity we tend to use the log-likelihood, i.e. the log of the likelihood function.
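The idea above can be sketched in a few lines of code. This is an illustrative example (the observations x = 4 and x = 6 come from the text; the grid of candidate means and the assumption \sigma = 1 are mine), showing that the likelihood of the two observations is maximised at the sample mean:

```python
import math

def normal_pdf(x, mu, sigma):
    """Probability density of N(mu, sigma^2) at x."""
    return (1.0 / (sigma * math.sqrt(2 * math.pi))) * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def likelihood(mu, observations, sigma=1.0):
    """Joint density of independent observations: the product of the individual densities."""
    L = 1.0
    for x in observations:
        L *= normal_pdf(x, mu, sigma)
    return L

obs = [4.0, 6.0]
# Evaluate the likelihood over a grid of candidate means.
candidates = [3.0, 4.0, 5.0, 6.0, 7.0]
values = {mu: likelihood(mu, obs) for mu in candidates}
best = max(values, key=values.get)
print(best)  # 5.0 -- the sample mean maximises the likelihood
```

In practice the log-likelihood is maximised instead, since the product of many small densities underflows numerically while the sum of their logs does not.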

Simple Regression Model

Given the following simple model, assuming the error term is normally distributed with zero mean and standard deviation \sigma, we can produce:

y_t = \alpha + \beta x_t + u_t

u_t = y_t - \alpha - \beta x_t

f(u_t) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\frac{u_t}{\sigma}\right)^2}

The probability density can then be determined by substituting in the ut expression.


The log-likelihood function will then be given by:

\log L = -n \log\left(\sigma\sqrt{2\pi}\right) - \frac{1}{2\sigma^2}\left[(y_1 - \alpha - \beta x_1)^2 + \dots + (y_n - \alpha - \beta x_n)^2\right]

Then the values of \alpha and \beta that maximise this function are the same as those obtained by OLS. The estimate of \sigma^2 is slightly different, as the ML estimator divides the sum of squared residuals by n rather than by the degrees of freedom.
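The equivalence can be checked numerically. A minimal sketch, assuming simulated data (the true parameters, sample size, and use of scipy's general-purpose optimiser are my choices, not from the text): minimising the negative of the log-likelihood above recovers the same \alpha and \beta as the OLS formulas.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data: true alpha = 1, beta = 2 (illustrative values)
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

def neg_log_likelihood(params):
    alpha, beta, log_sigma = params
    sigma = np.exp(log_sigma)  # parameterise in logs to keep sigma positive
    resid = y - alpha - beta * x
    return n * np.log(sigma * np.sqrt(2 * np.pi)) + np.sum(resid ** 2) / (2 * sigma ** 2)

mle = minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0]).x

# OLS estimates for comparison
X = np.column_stack([np.ones(n), x])
alpha_ols, beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

print(mle[0], alpha_ols)  # essentially identical
print(mle[1], beta_ols)   # essentially identical
```

The two sets of estimates agree to numerical precision; only the implied variance estimate differs slightly, as noted above.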

You do not need to be able to derive the above; it is simply background information for the estimation of ARCH models.

Autoregressive Conditional Heteroskedasticity (ARCH)


The ARCH effect is concerned with a systematic pattern in the heteroskedasticity, often termed serial correlation of the heteroskedasticity. It often becomes apparent when there is bunching in the variance or volatility of a particular variable, producing a pattern determined by some factor. Given that the volatility of financial assets is used to represent their risk, it can be argued that the ARCH effect measures the risk of an asset. Given the following model:

Y_t = \beta_0 + \beta_1 X_t + u_t

u_t \sim N(0, \alpha_0 + \alpha_1 u_{t-1}^2)

This suggests the error term is normally distributed with zero mean and a conditional variance depending on the squared error term lagged one time period. The conditional variance is the variance given the values of the error term lagged once, twice, etc.:

\sigma_t^2 = \operatorname{var}(u_t \mid u_{t-1}, u_{t-2}, \dots) = E(u_t^2 \mid u_{t-1}, u_{t-2}, \dots)


Where \sigma_t^2 is the conditional variance of the error term. The ARCH effect is then modelled by:

\sigma_t^2 = \alpha_0 + \alpha_1 u_{t-1}^2

This is an ARCH(1) model as it contains only a single lag on the squared error term; however, it is possible to extend this to any number of lags. If there are q lags, it is termed an ARCH(q) model.
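An ARCH(1) process can be simulated directly from the recursion above. This is an illustrative sketch (the parameter values \alpha_0 = 0.2, \alpha_1 = 0.5 and the sample size are my assumptions); the simulated squared errors are positively autocorrelated, which is the volatility bunching the ARCH effect describes:

```python
import numpy as np

def simulate_arch1(n, alpha0=0.2, alpha1=0.5, seed=0):
    """Simulate u_t = sigma_t * z_t with sigma_t^2 = alpha0 + alpha1 * u_{t-1}^2,
    where z_t is standard normal."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = alpha0 / (1 - alpha1)  # start at the unconditional variance
    u[0] = np.sqrt(sigma2[0]) * rng.normal()
    for t in range(1, n):
        sigma2[t] = alpha0 + alpha1 * u[t - 1] ** 2
        u[t] = np.sqrt(sigma2[t]) * rng.normal()
    return u, sigma2

u, sigma2 = simulate_arch1(1000)
# Volatility clustering: squared errors are positively autocorrelated,
# even though the u_t themselves are serially uncorrelated.
corr = np.corrcoef(u[:-1] ** 2, u[1:] ** 2)[0, 1]
print(corr)
```

Note that \alpha_1 must lie below 1 for the unconditional variance \alpha_0 / (1 - \alpha_1) to exist.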

Testing for ARCH Effects

The test for an ARCH effect was devised originally by Engle (1982) and is similar to
the Lagrange Multiplier (LM) test for autocorrelation.

1) Run the regression of the model using Ordinary Least Squares (OLS) and
collect the residuals. Square the residuals.

2) Run the following secondary regression:

\hat{u}_t^2 = \gamma_0 + \gamma_1 \hat{u}_{t-1}^2 + \gamma_2 \hat{u}_{t-2}^2 + \dots + \gamma_p \hat{u}_{t-p}^2 + v_t

Where \hat{u} is the residual from the initial regression and p lags are included in this secondary regression. The appropriate number of lags can be determined either by the span of the data (e.g. 4 for quarterly data) or by an information criterion. Collect the R^2 statistic from this regression.

3) Compute the statistic T \cdot R^2, where T is the number of observations. Under the null hypothesis that there is no ARCH effect present, it follows a chi-squared distribution with p degrees of freedom.
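The steps above can be sketched as follows. This is a minimal illustration (the helper name, the simulated data, and the parameter values are my assumptions, not from the text): the secondary regression of squared residuals on their own lags is run by least squares, and T \cdot R^2 is returned.

```python
import numpy as np

def arch_lm_test(residuals, p=1):
    """Engle's LM test: regress squared residuals on p of their own lags
    and return T * R^2, which is chi-squared with p df under the null."""
    e2 = np.asarray(residuals) ** 2
    y = e2[p:]
    # Secondary regression: constant plus p lags of the squared residuals
    X = np.column_stack([np.ones(len(y))] + [e2[p - j: -j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coef
    r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    return len(y) * r2

# Apply the test to simulated ARCH(1) errors (alpha0 = 0.2, alpha1 = 0.5)
rng = np.random.default_rng(1)
n = 500
u = np.zeros(n)
for t in range(1, n):
    sigma2 = 0.2 + 0.5 * u[t - 1] ** 2
    u[t] = np.sqrt(sigma2) * rng.normal()

stat = arch_lm_test(u, p=1)
print(stat)  # well above the 5% chi-squared(1) critical value of 3.84
```

With genuinely homoskedastic residuals the statistic would typically fall below the critical value, and the null of no ARCH effect would not be rejected.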
