" Estimation of Value-at-Risk by extreme value and
conventional methods: a comparative evaluation of their
predictive performance"

by

Stelios Bekiros & Dimitris Georgoutsos*

Department of Accounting and Finance


Athens University of Economics and Business
76 Patission str.,
104 34 Athens, GREECE

March 2003
This version: October 2003

Abstract

This paper conducts a comparative evaluation of the predictive performance of various Value-at-Risk (VaR) models. Special emphasis is placed on two methodologies related to Extreme Value Theory (EVT): the Peaks over Threshold (POT) and the Block Maxima (BM). Both estimation techniques are based on limit results for the excess distribution over high thresholds and for block maxima respectively. They are applied to USD-denominated daily returns of the Dow Jones Industrial Average (DJIA) and the Cyprus Stock Exchange (CSE) indices with the intention to compare the performance of the various estimation techniques on markets with different capitalization and trading practices. The sample extends over the period 11/21/1997-4/19/2002, while the sub-period 4/12/2001-4/19/2002 has been reserved for backtesting purposes. The results we report reinforce previous ones according to which some traditional methods might yield similar results at conventional confidence levels, but at very high ones the EVT methodology produces the most accurate forecasts of extreme losses.

JEL classification: C51; G21.

Keywords: Value-at-Risk; Extreme Value Theory; Backtesting; Risk Management.

* Corresponding author. E-mail address: d.georgoutsos@aueb.gr


1. Introduction

Over the last fifteen years the financial world has witnessed the bankruptcy, or near

bankruptcy, of several institutions that incurred huge losses due to their exposures to

unforeseen market moves. In the wake of these financial disasters, it has become clear to

both risk managers and policy makers that the development of better measures of market

risk is of paramount importance to the financial industry. At the same time the amendment of

the Basle Accord in 1997 allowed banks to use internal market risk management models in

order to fulfill the requirements on capital adequacy. These developments have given an

impetus to the Value-at-Risk (VaR) methodology where VaR is defined as the minimum

amount of losses on a trading portfolio over a fixed length of time with a certain probability.

As implied by this definition, a good VaR estimate is highly dependent on a robust model

for the lower tail of the profits and losses (P&L) distribution and it represents a lower quantile

of that distribution that is only exceeded on a small proportion of occasions. Although the

VaR measure is widely used since it serves the need to have an accurate risk measure that is

easily understood by non-technical parties, there exists a controversy surrounding the

suitability of the various techniques that have been suggested for estimating it.

The most commonly used approaches in estimating the P&L distribution are based on

either a non-parametric historical simulation (HS) method or a fully parametric one that

combines an econometric model for volatility dynamics with the assumption of conditional

normality for the returns series (Variance-Covariance, VC, method). This last method

encompasses applications such as J.P. Morgan's RiskMetrics (RM) and most models of the

GARCH family. The HS approach has been criticized for its inappropriateness in providing

extreme quantiles since extrapolation beyond past observations is impossible. Addressing this

problem by extending the sample gives inaccurate quantile estimates since low and high

volatility periods are mixed together. On the other hand, the main drawback of the

parametric approach is its failure to account for the most well known feature of the

distribution of returns: the heavy tails. We can compensate for the much too fast decay of

the tails in the Gaussian distribution by introducing GARCH type models of the volatility but

this leads to satisfactory quantile estimates only once a disaster has already hit the system. This

means that the GARCH type models are suitable in signaling the continuation of a high-risk

regime but what we are actually after is a model that predicts the occurrence of extreme

events.

Danielsson et al. (1998) suggest that a good VaR model should correctly represent

the likelihood of extreme events by providing smooth tail estimates which extend beyond the

historical sample. Therefore, recent research on VaR is focusing on modeling event risks and

many authors have suggested the use of statistical techniques developed for analyzing

extreme realizations of random variables. Extreme Value Theory (EVT) relies on

extreme observations to derive the distribution of the tails of a random variable. By doing so,

risk is forecasted more efficiently than by modeling the entire distribution of the random

variable itself. Danielsson and de Vries (1997) compare the predictive performance of various

VaR methods for simulated portfolios of seven US stocks. The results show that EVT is

particularly accurate as tails become more extreme whereas the VC and the HS methods

under- and over-predict losses respectively. Similar results have been obtained by Longin

(2000) who applied the EVT to compute the VaR of single and bivariate portfolio positions. At

low probability levels the VC, HS, and EVT methods estimate similar VaRs but at more

conservative levels the accuracy of the EVT methods is superior. Longin (2000) concludes

that with the EVT the model risk is considerably reduced since it does not assume a particular

model for returns but lets the data speak for themselves. Moreover, he notices that VaRs

based on the conditional GARCH processes reflect the degree of volatility at the chosen time

of VaR estimation and are subject to the event risk due to unexpected changes in market

conditions. Danielsson and Morimoto (2000) have confirmed, on Japanese financial data, the

accuracy and stability of the EVT risk forecast over the GARCH- type techniques. In contrast

to the previous evidence, Lee and Saltoglu (2003) employ various loss functions and show

that the predictive performance of the EVT models is less than satisfactory for five Asian

stock market indices. Traditional methods that combine GARCH models with Student's t or

even normal distributions have a more consistent performance although none of the methods

used in that paper produced a uniformly superior risk forecast independently of the countries,

the periods and the alternative loss functions that were employed. Kiesel et al. (2001)

estimated one day holding period VaRs for benchmark emerging markets bond returns by

applying a method suggested by McNeil and Frey (2000) that accounts for heteroscedasticity

in the data. They show that for confidence levels commonly used in market risk applications

the EVT methods yield VaR estimates similar to those derived from the empirical distribution.

However, the superiority of the EVT methodology at very high confidence levels is questioned

since it produced a clustering of cases where losses exceeded the estimated VaRs when

volatility was clustering as well.

In the present paper, we perform an evaluation of the predictive performance of the

most popular VaR models with an emphasis on the EVT methodology. The models are back-

tested for their out-of-sample predictive ability by using Christoffersen's (1998) likelihood

ratio tests for coverage probability. The data set used throughout this paper consists of daily

returns on two indices: the Dow Jones Industrial Average (DJIA) and the Cyprus Stock

Exchange (CSE) index. The sample period is 11/21/1997-4/19/2002 and it has been split

into an in-sample period, 11/21/1997-4/11/2001, and an out-of-sample period, 4/12/2001-

4/19/2002. The return series of the second index has been converted into US dollar terms,

denoted as $CSE, since we intend to compare the perceived degree of risk from the viewpoint

of an international investor. More generally, we would like to compare the results of a mature

capital market to those derived from an emerging one that is characterized by higher volatility

and liquidity crashes as a result of the flight of capital to more stable markets in the event of

a crisis.

The paper is structured as follows. In section 2 we present the main theoretical

results of EVT under two alternative groups of models, the BM and the POT, and it is shown

how VaR estimates can be derived. In section 3 the VaR estimates are reported, the forecast

evaluation criterion is presented and the competing models are classified on the basis of their

predictive performance. Section 4 concludes while in an appendix a short presentation of the

conventional VaR estimation techniques is provided.

2. Value-at-Risk models and the extreme value theory

The EVT-based methods for tail estimation are attractive because they rely on sound

statistical theory that offers a parametric form for the tail of a distribution. We consider two

alternative methods for generating extreme returns: the older Block Maxima (BM) and the more modern Peaks over Threshold (POT). According to the POT method we fix a high

threshold $u$ and look at all exceedences $y$ over $u$.^1 If $F(y)$ represents the distribution function of returns $Y$, then the cumulative distribution of the $u$-exceedences, denoted by $F_u(y)$, is defined by

$$F_u(y) = P[\,Y - u \le y \mid Y > u\,] = \frac{F(y+u) - F(u)}{1 - F(u)}, \qquad y \ge 0, \quad (1)$$

where $0 \le y < x_0 - u$ and $x_0$ is the (finite or infinite) right endpoint of $F$. Balkema and de Haan (1974) and Pickands (1975) studied the asymptotic behavior of threshold exceedences and proved, for a large class of underlying distributions $F$, that the limiting distribution of $F_u(y)$, as the threshold is raised, is the Generalized Pareto Distribution (GPD), which is given by

$$G_{\xi,\beta}(y) = \begin{cases} 1 - \left(1 + \xi y/\beta\right)^{-1/\xi}, & \xi \ne 0 \\ 1 - \exp(-y/\beta), & \xi = 0 \end{cases} \quad (2)$$

where $\xi$ is the tail index, $\beta > 0$ the scale parameter, and the support is $y \ge 0$ when $\xi \ge 0$ and $0 \le y \le -\beta/\xi$ when $\xi < 0$. Essentially all the common continuous distributions of statistics belong in this class of distributions. For example, the case $\xi > 0$ corresponds to heavy-tailed distributions such as the Pareto, the Student's t, etc. The case $\xi = 0$ corresponds to distributions like the normal or the lognormal whose tails decay exponentially. The short-tailed distributions with a finite endpoint such as the uniform or beta correspond to the case $\xi < 0$.

We now discuss how the results presented above can be used to estimate VaRs.

Let $\{y_i\}_{i=1}^{N_u}$ be the sample of exceedences over the threshold $u$, with its size being $N_u$. If we assume that those $N_u$ excesses are i.i.d. with an exact GPD distribution, then the maximum likelihood estimates of the GPD parameters $\xi$ and $\beta$ are consistent and asymptotically normal as $N_u \to \infty$, provided that $\xi > -1/2$ (Smith, 1987).^{2,3}


We define $\bar F_u(y) = 1 - F_u(y)$ and $\bar F(y) = 1 - F(y)$; then, by employing the identity shown in (1) above, we have:

$$\bar F(u + y) = \bar F(u)\,\bar F_u(y). \quad (3)$$

In (3) we substitute $\bar F(u + y) = 1 - p$, where $p$ is the confidence level, $\bar F(u) = N_u/n$, the proportion of the data in the tail, and $\bar F_u(\mathrm{VaR}(p) - u) = 1 - G_{\xi,\beta}(\mathrm{VaR}(p) - u)$, where $G_{\xi,\beta}$ represents the GPD with the parameters $\xi$ and $\beta$ substituted for their maximum likelihood estimates. From (3) we can then estimate $p$-quantiles as

$$\mathrm{VaR}(p) = u + \frac{\beta}{\xi}\left[\left(\frac{n(1-p)}{N_u}\right)^{-\xi} - 1\right]. \quad (4)$$
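To make equation (4) concrete, a minimal Python sketch of the POT calculation is given below; the simulated return series, the threshold value and the use of scipy.stats.genpareto for the maximum likelihood fit are illustrative assumptions and not the estimation code used in the paper.

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(returns, u, p):
    """VaR at confidence level p from the POT / GPD model of eq. (4).

    returns : array of daily returns; losses are treated as positive numbers.
    u       : threshold for the losses (e.g. 0.0143 in the DJIA case).
    p       : confidence level, e.g. 0.99.
    """
    losses = -np.asarray(returns)            # work with positive losses
    excesses = losses[losses > u] - u        # exceedences over the threshold
    n, n_u = len(losses), len(excesses)

    # Maximum likelihood fit of the GPD to the excesses (location fixed at 0).
    xi, _, beta = genpareto.fit(excesses, floc=0)

    # Eq. (4): VaR(p) = u + (beta/xi) * [ (n(1-p)/N_u)^(-xi) - 1 ]
    return u + (beta / xi) * ((n * (1 - p) / n_u) ** (-xi) - 1)

# Example usage with simulated heavy-tailed returns (assumption: Student-t data).
rng = np.random.default_rng(0)
sim_returns = 0.01 * rng.standard_t(df=4, size=850)
print(pot_var(sim_returns, u=0.0143, p=0.99))
```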

Under the Block Maxima (BM) method the data are divided into $m$ blocks with $n$ observations in each block corresponding to $n$ trading intervals. Extremes are then defined as the maximum (and minimum) of the $n$ random variables $Y_1, Y_2, \ldots, Y_n$, and we let $Z_n = \max(Y_1, Y_2, \ldots, Y_n)$ denote the maximum over the $n$ trading intervals. Fisher and Tippett (1928) have shown that for returns $Y_t$ that are independent and drawn from the same distribution $F$, if there exist real constants $w_n$ and $q_n > 0$ such that

$$\lim_{n \to \infty} P\{(Z_n - w_n)/q_n \le y\} = \lim_{n \to \infty} F^n(q_n y + w_n) = H(y), \quad (5)$$

for some non-degenerate limit distribution $H$, then $H$ must belong to the family of the Generalized Extreme Value distributions, GEV, i.e.,

$$H_\xi(y) = \begin{cases} \exp\!\left(-(1 + \xi y)^{-1/\xi}\right) & \text{if } \xi \ne 0 \\ \exp\!\left(-e^{-y}\right) & \text{if } \xi = 0 \end{cases}, \qquad y \in \mathbb{R}, \quad (6)$$

where $1 + \xi y > 0$ and $\xi \in \mathbb{R}$. According to the value of the tail index $\xi$, three types of extreme value distributions are distinguished: the Fréchet distribution ($\xi > 0$), the Weibull distribution ($\xi < 0$), and the Gumbel distribution ($\xi = 0$). It is said that if the block maxima for $F$ converge in distribution to $H_\xi$, then $F$ belongs to the maximum domain of attraction of $H_\xi$, i.e. $F \in \mathrm{MDA}(H_\xi)$.^4 Essentially all the common, continuous distributions of statistics are in $\mathrm{MDA}(H_\xi)$ for some value of $\xi$. Thus the normal distribution corresponds to the Gumbel case, while the heavy-tailed distributions typically encountered in finance are in the Fréchet domain of attraction. This class includes the Pareto, the Student's t and the general class of stable distributions with characteristic exponent in (0,2).

We do not know the underlying distribution of our returns series but believe it to be heavy-tailed, so that the Fréchet limit will be the relevant case. In accordance with the theorem presented above we fit the GEV distribution $H_{\xi,\mu,\sigma} = H_\xi[(Z_n - \mu)/\sigma]$ to the standardized data of block minima, $(Z_n - \mu)/\sigma$. The location parameter $\mu$ and the positive scale parameter $\sigma$ take care of the unknown sequences of normalizing constants $w_n$ and $q_n$. Let us now define by $R_{n,k}$ a level that we expect to be exceeded in one $n$-block for every $k$ $n$-blocks, on average. If we believe that maxima in blocks of length $n$ follow the generalized extreme value distribution, then $R_{n,k}$ is a quantile of this distribution, that is a VaR estimate, and from (6) we can derive

$$\hat R_{n,k} = H^{-1}_{\hat\xi,\hat\mu,\hat\sigma}(1 - 1/k) = \hat\mu - \frac{\hat\sigma}{\hat\xi}\left(1 - \left(-\log(1 - 1/k)\right)^{-\hat\xi}\right), \quad (7)$$

where $\xi$, $\mu$ and $\sigma$ have been substituted for their maximum likelihood estimates. If returns $Y_i$ are independent then the following holds:

$$(1 - 1/k) = P(Y_1 \le R_{n,k}, \ldots, Y_n \le R_{n,k}) = F^n(R_{n,k}), \quad (8)$$

so the $(1-1/k)$ quantile, $R_{n,k}$, of the distribution of $Z_n$ corresponds to the $(1-1/k)^{1/n}$ quantile of the marginal distribution of $Y_i$ (Longin, 2000). Suppose for example that we consider our model for annual (261 days) maxima. Then, the return that we expect to be exceeded once every 20 years, the 20-year return level, corresponds to the $(0.95)^{1/261} = 0.9998$ quantile.
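As an illustration of the return-level formula in equation (7), the sketch below fits a GEV distribution to block maxima of losses and evaluates the quantile; note that scipy.stats.genextreme parameterizes the shape with the opposite sign to the tail index used in the text, and the simulated data and block length are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import genextreme

def bm_return_level(returns, block_size, k):
    """Return level R_{n,k} of eq. (7): the loss expected to be exceeded
    in one block out of every k blocks (block maxima of losses)."""
    losses = -np.asarray(returns)
    m = len(losses) // block_size
    maxima = losses[:m * block_size].reshape(m, block_size).max(axis=1)

    # scipy's genextreme uses shape c = -xi relative to the text's convention.
    c, mu, sigma = genextreme.fit(maxima)
    xi = -c

    # Eq. (7): R_{n,k} = mu + (sigma/xi) * [(-log(1 - 1/k))^{-xi} - 1]
    return mu + (sigma / xi) * ((-np.log(1 - 1 / k)) ** (-xi) - 1)

# Example: quarterly (60-day) blocks, level exceeded once every 20 quarters.
rng = np.random.default_rng(1)
sim_returns = 0.01 * rng.standard_t(df=4, size=840)
print(bm_return_level(sim_returns, block_size=60, k=20))
```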

The results we presented above have been derived for the case of stationary, identically and independently (i.i.d.) distributed random variables. It has been shown, however, that the maxima of a process with a dependence structure that is not too strong have the same limiting distribution as if the process were independent. Therefore, the same location, $\mu$, and scale, $\sigma$, parameters can be chosen and the same limiting distribution, $H_\xi(y)$, given by equation (6), applies. However, since the conditions that the above mentioned processes must satisfy are rather unrealistic for financial time series, we extend the asymptotic properties of maxima derived for an i.i.d. variable to the non-i.i.d. case (Leadbetter et al., 1983, Embrechts et al., 1997). Let $(Y_n)$ be a stationary process with marginal distribution $F$, where $Z_n = \max(Y_1, \ldots, Y_n)$, and $(\tilde Y_n)$ an associated independent process with the same marginal distribution $F$, and let $\tilde Z_n = \max(\tilde Y_1, \ldots, \tilde Y_n)$. The extremal index $\theta$, for large $n$, is defined as a real number $0 \le \theta \le 1$ such that

$$P\{Z_n \le R_{n,k}\} \approx P\{\tilde Z_n \le R_{n,k}\}^{\theta} = F^{n\theta}(R_{n,k}). \quad (9)$$

Under this definition the maximum of $n$ observations from the non-i.i.d. series behaves like the maximum of $n\theta$ observations from the associated i.i.d. variable.^5 It can also be shown that the maxima $Z_n$ of a non-i.i.d. series converge in distribution to $H^{\theta}_\xi(y)$ and, from equation (8), that the VaR estimate is given by $R_{n,k} = Y_{(1-1/k)^{1/(n\theta)}}$ (Longin, 2000, McNeil, 1998). A natural asymptotic estimator of $\theta$ is

$$\hat\theta = \frac{1}{n}\,\frac{\ln\left(1 - K_u/m\right)}{\ln\left(1 - N_u/(m\,n)\right)}, \quad (10)$$

where $N_u$ is the number of exceedences of the threshold $u$ and $K_u$ is the number of blocks in which the threshold is exceeded (Embrechts et al., 1997). The asymptotic derivation of the previous equation suggests that we should attempt to keep both $m$ and $n$ large (McNeil, 1998).
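A small sketch of the blocks estimator in equation (10) is given below; the simulated series, the threshold choice and the quarterly block length are assumptions for illustration only.

```python
import numpy as np

def extremal_index(returns, u, block_size):
    """Blocks estimator of the extremal index theta, eq. (10)."""
    losses = -np.asarray(returns)
    m = len(losses) // block_size                      # number of blocks
    losses = losses[:m * block_size]

    n_u = np.sum(losses > u)                                       # exceedences of u
    k_u = np.sum(losses.reshape(m, block_size).max(axis=1) > u)    # blocks with an exceedence

    # Eq. (10): theta = (1/n) * ln(1 - K_u/m) / ln(1 - N_u/(m n))
    n = block_size
    return (1 / n) * np.log(1 - k_u / m) / np.log(1 - n_u / (m * n))

# Example: quarterly blocks (60 days) and a threshold exceeded by about 30 observations.
rng = np.random.default_rng(2)
sim_returns = 0.01 * rng.standard_t(df=4, size=840)
u = np.quantile(-sim_returns, 1 - 30 / 840)
print(extremal_index(sim_returns, u, block_size=60))
```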

3. VaR estimation and backtesting analysis

We implement the various VaR estimation techniques on daily returns of the Dow Jones Industrial Average (DJIA) and the, converted into US dollars, Cyprus Stock Exchange ($CSE) indices. The series cover the period 11/21/1997-4/11/2001. The period 4/12/2001-4/19/2002 has been reserved for backtesting the predictive performance of the alternative models.^6

In order to estimate the threshold, $u$, for the POT method we follow Neftci (2000), according to whom $u = 1.176\,\hat\sigma_n$, where $\hat\sigma_n$ is the standard deviation of $(Y_t)_{t=1}^{n}$ and $1.176 = F_t^{-1}(0.10) = 1.44\,\sqrt{(\nu-2)/\nu}$ when a Student-t ($\nu = 6$) distribution, $F_t$, is being assumed. This implies that the excesses over the threshold belong to the 10% tails and in our case they have been estimated to be 0.0143 and 0.0274 (in absolute values) for the DJIA and

the $CSE indices respectively. The choice of the optimal threshold is a delicate issue since it is

confronted with a bias-variance tradeoff. If we choose too low a threshold we might get

biased estimates because the limit theorems do not apply any more while high thresholds

generate estimates with high standard errors due to the limited number of observations.

An alternative procedure to determine the appropriate threshold would be to use the

plot of the sample mean excess function (MEF) that is defined by

$$s_n(u) = \frac{\sum_{i=1}^{n} (Y_i - u)^{+}}{\sum_{i=1}^{n} \mathbf{1}_{\{Y_i > u\}}}. \quad (11)$$

This function expresses the sum of the excesses over the threshold, u , divided by the

number of data points that exceed it. Based on the picture of the MEF, if it exhibits an

upward trend we can infer a heavy tailed behavior for the data whereas a short tailed

distribution would show a downward trend and exponentially distributed data would give an

approximately horizontal line. If the empirical plot appears to follow a reasonably straight line

with a positive gradient, then this is an indication that the data follow a Generalized Pareto

distribution with a positive shape parameter in the tail area above $u$ (Embrechts et al., 1997). The plots of the MEF, above the thresholds estimated by the Neftci procedure, suggest that the exponential function would suit the DJIA dataset (figure 1) while the GPD might provide a reasonable fit to the $CSE (figure 2).^7 Hereafter, we apply the GPD to both cases

and check its appropriateness by its overall fit in the tail area.
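The sample mean excess function of equation (11) is straightforward to compute; the sketch below evaluates it on a grid of candidate thresholds for a simulated heavy-tailed series (the data and the grid are assumptions for illustration).

```python
import numpy as np

def mean_excess(losses, thresholds):
    """Sample mean excess function s_n(u) of eq. (11) for a grid of thresholds."""
    losses = np.asarray(losses)
    return np.array([(losses[losses > u] - u).mean() for u in thresholds])

# An upward-sloping MEF above the threshold suggests a GPD with a positive
# shape parameter (heavy tail); a roughly flat MEF suggests exponential tails.
rng = np.random.default_rng(3)
losses = -0.01 * rng.standard_t(df=4, size=850)     # losses as positive numbers
grid = np.quantile(losses, np.linspace(0.70, 0.98, 15))
for u, s in zip(grid, mean_excess(losses, grid)):
    print(f"u = {u:.4f}   mean excess = {s:.4f}")
```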

The absolute values of the (negative) daily returns of the DJIA and the $CSE that

exceed the chosen thresholds have been used to estimate the GPD (eq. 2), where the $p$th quantile VaR is calculated from equation (4). The crucial parameter is the tail index, $\xi$: in general terms, the higher its value, the heavier the tail and the higher the quantile estimates we derive. Consequently, we have derived VaR estimates corresponding to nine different confidence levels (95%, 96%, 97%, 98%, 99%, 99.5%, 99.9%, 99.95%, 99.99%). The results are presented in Table 1 and the main picture that emerges is that the estimated VaRs were substantially higher for the $CSE index due to the higher estimated value of $\xi$ (0.0879 compared to 0.06 for the DJIA). Figures 3 and 4 depict the fit of the estimated

GPD function for the DJIA and the $CSE cases respectively. Although the evidence from the

sample MEF suggested that we might not have been successful in fitting the GPD on both

datasets, the fit of this distribution on the exceedences seems reasonable to the naked eye.

This fit is further investigated by using the crude residuals

$$W_i = \frac{1}{\xi}\,\log\!\left(1 + \frac{\xi\,(Y_i - u)^{+}}{\beta}\right), \quad (12)$$

as defined by McNeil and Saladin (1998). These should be i.i.d. unit exponentially distributed

and this hypothesis can be checked by using, among other techniques, the QQ plots of the

quantiles of the exponential distribution against those of the empirical one. Figures 5 and 6

present the diagrams for the two cases and it appears that the excesses over the threshold

are adequately modeled by the GPD functions since the points lie approximately along a

straight line.^8
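The residual diagnostic can be reproduced along the following lines; the sketch assumes the form of the crude residuals given in equation (12) and uses excesses simulated from a GPD, so it is an illustration of the check rather than the paper's own computation.

```python
import numpy as np
from scipy.stats import expon, genpareto

def gpd_residual_qq(excesses, xi, beta):
    """Crude residuals W_i of eq. (12) and matching unit-exponential quantiles;
    under a correct GPD fit the pairs lie close to the 45-degree line."""
    w = np.sort((1 / xi) * np.log1p(xi * np.asarray(excesses) / beta))
    probs = (np.arange(1, len(w) + 1) - 0.5) / len(w)   # plotting positions
    return expon.ppf(probs), w

# Example with exceedences simulated from a GPD, so the fit should look adequate.
rng = np.random.default_rng(4)
excesses = genpareto.rvs(c=0.1, scale=0.008, size=85, random_state=rng)
xi, _, beta = genpareto.fit(excesses, floc=0)
theo_q, emp_q = gpd_residual_qq(excesses, xi, beta)
print(np.corrcoef(theo_q, emp_q)[0, 1])   # close to 1 for an adequate fit
```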

For the Blocks method we analyzed monthly and quarterly minima and the fit of the Fréchet distribution to these block minima was found to be adequate. For each block maximum the corresponding crude residual is defined to be

$$W_i = \left[1 + (\hat\xi/\hat\sigma)(Z_i - \hat\mu)\right]^{-1/\hat\xi}. \quad (13)$$

According to Cox and Snell (1968) these should be i.i.d. unit exponentially distributed and this

hypothesis can be checked using graphical diagnostics such as the QQ plots of the

exponential distribution against the residuals of the fitted model (figures 7, 8).

In order to interpret correctly the evidence on the return level $R_{n,k}$, that is exceeded in one $n$-block out of every $k$, we should know the estimate of the extremal index, $\theta$. This
is the case because if the series tends to form clusters then we may know the frequency of

stress periods, i.e. the periods when we experience an exceedence of the VaR level Rn ,k , but

we do not know how many of them occur in each particular n -block. We have estimated,

from eq. (10), the extremal index using monthly and quarterly blocks and the estimated

values are reported in table 2. They have been estimated for various thresholds that are

exceeded by between 15 and 150 observations (McNeil, 1998). For example, we observe that

with quarterly blocks, in the DJIA index case, the estimates range between 0.60 and 0.86

and thus we consider the quarterly $\theta$ to be their average value of 0.71. Based on this average estimate we can derive a value for the average cluster size ($1/\theta$) of about 1.41.

The estimated parameters of the GEV distribution, the calculated quantiles at various

confidence levels as well as their respective confidence intervals (95%) are reported in table

3. The estimated tail index parameters, $\xi$, for the DJIA index are 0.14 and 0.22 for monthly and

quarterly minima respectively while their confidence intervals encompass the estimate, 0.059,

of the tail index of the GPD model. Hence, the two models give results that are consistent

with one another. On the basis of the evidence provided by the estimated confidence

intervals we cannot reject the hypothesis ($\xi < 0$) of a thin-tailed marginal distribution, at the

95% level, for both frequencies. Furthermore, we can safely reject the hypothesis of an

infinite variance ($\xi > 1/2$) on monthly data. The calculated VaRs from eq. (7) admit the

following interpretation. Based on the average estimate of $\theta$, in table 2, we can deduce for

example that the VaR=-4.05% estimate for the DJIA at the 99.5% level, with monthly blocks,

will be exceeded once every 13 months. Similarly, the VaR=-7.79% estimate for DJIA at the

99.95% level under quarterly blocks implies that it will be exceeded once every 47 quarters.

The estimates for the $CSE are less satisfactory since the tail index, $\xi$, is statistically

compatible with thin-tailed distributed extreme returns on quarterly data, while with monthly

data an infinite variance can not be excluded. With respect to the VaR estimates we note that

they are considerably higher when compared to their counterparts of the DJIA index. This is

what we expected considering the substantially higher volatility of the Cyprus stock exchange

over the period we examined.

The VaR estimates, on 4/12/2001, for all the methods implemented, including the

EVT ones, are presented in table 4 whereas their out-of-sample performance is evaluated in

tables 5 and 6. This evaluation is based on one-step-ahead forecasts that have been

produced from a series of rolling samples with a size equal to that of the initial sample

(11/21/1997-4/11/2001). Figures 9 and 10 offer a visual presentation of the estimated

VaRs (99%) with three representative models: the GARCH(1,1), the RM(0.94) and the Blocks

Maxima with quarterly blocks. The main features that are worth mentioning are that the

extreme value estimates are generally higher and they are considerably less volatile than the

other two. The rolling samples do not generate substantial change of the data set of extreme

observations and as a result the EVT VaR estimates are almost time independent. On the

other hand, in the GARCH type models variances are forecasted by an exponential model

with declining weights on past observations and therefore are crucially dependent on the last

observation that is added in the sample. The policy suggestion of the above evidence is that

the EVT models are more suitable for long-run forecasts of the maximum potential losses

rather than being a day-to-day tool to measure the market risk.

Various methods and tests have been suggested for evaluating VaR model accuracy.

In this paper we implement Christoffersen's (1998) likelihood ratio tests for coverage probability. The first one tests whether the probability of an unconditional coverage failure, $a^{*}$, is equal to the level $a$ selected for the VaR calculation. Thus, the relevant null hypothesis is $H_0\!: a^{*} = a$ against the alternative $H_1\!: a^{*} \ne a$. As the number of exceptions, $x$, follows a binomial distribution, the likelihood ratio test statistic is:

$$LR_{uc} = -2\,\ln\!\left[(1-a)^{\,n-x}\,a^{\,x}\right] + 2\,\ln\!\left[(1-a^{*})^{\,n-x}\,(a^{*})^{\,x}\right] \;\xrightarrow{\;d\;}\; \chi^{2}(1), \quad (14)$$

where the maximum likelihood estimator of $a^{*}$ is $x/n$. The second test checks whether

deviations are time dependent; the null hypothesis is that the probability of an exception occurring is independent of what happened the day before. $LR_{ind}$ tests for the serial independence of the exceptions and it is defined as:

$$LR_{ind} = -2\,\ln\!\left[(1-a^{*})^{(T_{00}+T_{10})}\,(a^{*})^{(T_{01}+T_{11})}\right] + 2\,\ln\!\left[(1-a_{0})^{T_{00}}\,a_{0}^{\,T_{01}}\,(1-a_{1})^{T_{10}}\,a_{1}^{\,T_{11}}\right] \;\xrightarrow{\;d\;}\; \chi^{2}(1). \quad (15)$$

$T_{ij}$ measures the number of days in which state $j$ occurred while the state was $i$ the day before, and $a_{j}$ denotes the probability of observing an exception conditional on state $j$ the previous day. If we assign the indicators $(i, j)$ the value 0 if VaR is not exceeded and 1 otherwise, then the maximum likelihood estimates of $a_{0}$ and $a_{1}$ are given by $T_{01}/(T_{00} + T_{01})$ and $T_{11}/(T_{10} + T_{11})$ respectively. Thus, if the occurrence of an exception is independent of the previous day's conditions, then $a = a_{0} = a_{1} = (T_{01} + T_{11})/T$ and $LR_{ind}$ should not be statistically significant. By combining the two tests, a third test of conditional coverage, $LR_{cc}$, can be constructed as

$$LR_{cc} = LR_{uc} + LR_{ind} \;\xrightarrow{\;d\;}\; \chi^{2}(2). \quad (16)$$
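A compact implementation of the three test statistics in equations (14)-(16) might look as follows; the 0/1 hit sequence in the example is purely illustrative and the handling of degenerate transition counts is kept minimal.

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_tests(exceptions, a):
    """LR_uc, LR_ind and LR_cc of eqs. (14)-(16) for a 0/1 exception series
    and a nominal failure probability a (e.g. 0.01 for a 99% VaR)."""
    hits = np.asarray(exceptions, dtype=int)
    n, x = len(hits), hits.sum()
    a_star = x / n

    # Unconditional coverage, eq. (14).
    lr_uc = -2 * (np.log((1 - a) ** (n - x) * a ** x)
                  - np.log((1 - a_star) ** (n - x) * a_star ** x))

    # Transition counts T_ij between consecutive days.
    t = np.zeros((2, 2))
    for i, j in zip(hits[:-1], hits[1:]):
        t[i, j] += 1
    a0 = t[0, 1] / (t[0, 0] + t[0, 1])
    a1 = t[1, 1] / (t[1, 0] + t[1, 1]) if (t[1, 0] + t[1, 1]) > 0 else 0.0

    # Independence, eq. (15), and conditional coverage, eq. (16).
    def loglik(p0, p1):
        return (t[0, 0] * np.log(1 - p0) + t[0, 1] * np.log(p0)
                + t[1, 0] * np.log(1 - p1)
                + (t[1, 1] * np.log(p1) if p1 > 0 else 0.0))
    lr_ind = -2 * (loglik(a_star, a_star) - loglik(a0, a1))
    lr_cc = lr_uc + lr_ind
    return lr_uc, lr_ind, lr_cc, chi2.sf(lr_cc, df=2)

# Example: 255 backtesting days with 4 exceptions against a 99% VaR.
hits = np.zeros(255, dtype=int)
hits[[30, 31, 120, 200]] = 1
print(christoffersen_tests(hits, a=0.01))
```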

In table 5 we present the test statistics for both the unconditional and the conditional

out-of-sample performance of various models. The main evidence from this back-testing

exercise is that the models perform equally well at low confidence levels (i.e. up to 98%).

However, from the 99% level and beyond the superiority of the extreme values techniques

clearly emerges since they are the only methods, along with the Historical Simulation

techniques, where not a single case exists with statistically significant forecasting failures.

Looking at the two indices separately, the variance-covariance method performs equally well

with the DJIA index, when GARCH models estimate the volatility parameter (GARCH(GED),

GARCH(N), GARCH(t) and RM(0.94)). This picture however is not maintained when the $CSE

index is being examined. In this case, the forecasting ability of the EVT methods is

impeccable. In table 6 we present the number of exceedences in each case and compare

them with an interval of numbers that would be consistent with the probability level under

which the VaR estimates have been produced. Those intervals have been derived for the

LRuc case only since we failed to reject the independence hypothesis for the exceedences in

every single case. Again, we reconfirm for both indices the previous results where at high

confidence levels the EVT methods are the best performers along with the Historical

simulation method. The GARCH models for the DJIA index have also recorded a similar

success.

4. Concluding Remarks

In the present study we attempted a comparative evaluation of the predictive ability of

VaR estimates obtained from various estimation techniques. The main emphasis has been

given to the Extreme Value methodology because it is based on sound statistical theory and it

is directly related to the measurement of extreme events, as this is evident from its

widespread acceptance in many other areas dealing with catastrophic events (e.g. insurance).

The results we reported provided evidence according to which at low confidence

levels most of the examined methods exhibited satisfactory performance. However, at

confidence levels in excess of 99.5% the superiority of the EVT clearly emerges. The GARCH

models have performed equally well and, somewhat to our surprise, so has the Historical

Simulation technique. It should be stressed here that the sample period covers both bull

and bear market conditions where the backtesting period is clearly characterized by the

latter one. Generally, the conditional VaR estimates (e.g. GARCH, RM) vary a lot more than

the unconditional ones (EVT) since their estimates increase during high volatility periods and

decrease during low volatility ones. The occurrence of an extreme return immediately

influences conditional estimates, whereas the evidence from the EVT methods is almost

unaffected. Furthermore, the EVT models relate to the probability of experiencing a large loss

over a long investment horizon that embraces different volatility environments. On the other

hand, the conventional estimators provide information about the risk of large losses over a

short-term horizon characterized by a specific level of volatility.

To summarize, for routine confidence levels such as 90%, 95% and perhaps even

99%, conventional methods may be sufficient. At higher confidence levels however the

normal distribution underestimates potential losses. While the historical simulation method

provides an improvement, it still suffers from lack of data in the tails that makes it difficult to

estimate VaR reliably. As Jorion (2000) claims, the EVT applies smooth curves through the

extreme tails of the distribution (99.90%, 99.95%, etc.) and it provides not only a unique

VaR estimate for the selected confidence level but its related confidence interval as well.

References

Balkema, A. and L. de Haan, 1974, Residual life time at great age, Annals of Probability, 2,

792-804.

Christoffersen, P.F., 1998, Evaluating Interval Forecasts, International Economic Review,

39, 841-864.

Cox, D. and E. Snell, 1968, A general definition of residuals (with discussion), Journal of the

Royal Statistical Society, Series B, 30, 248-275.

Danielsson, J., and C. de Vries, 1997, Value-at-Risk and extreme returns, Disc. Paper no

273, LSE, Financial Markets Group.

Danielsson, J., P. Hartmann & C. de Vries, 1998, The Cost of Conservatism, Risk, 11(1),

101-103

Danielsson, J., and Y.Morimoto, 2000, Forecasting Extreme Financial Risk: A Critical Analysis

of Practical Methods for the Japanese Market, Disc. Paper no 2000-E-8, Institute for

Monetary and Economic Studies, Bank of Japan.

Embrechts, P., Klüppelberg, C. and T. Mikosch, 1997, Modelling extremal events for

Insurance and Finance, Springer, Berlin.

Fisher, R. and L. Tippett, 1928, Limiting forms of the frequency distribution of the largest or

smallest member of a sample, Proceedings of the Cambridge Philosophical Society, 24,

180-190.

Gnedenko, B., 1943, Sur la distribution limite du terme maximum d'une série aléatoire,

Annals of Mathematics, 44, 423-453.

Hosking, J., 1985, Maximum-likelihood estimation of the parameters of the generalized

extreme-value distribution, Applied Statistics, 34, 301-310.

Jorion, P., 2000, Value at Risk, 2nd ed., McGraw Hill, New York.

Kiesel, R., W. Perraudin and A. Taylor, 2000, An Extreme analysis of VaRs for emerging

market benchmark bonds, working paper, Birkbeck College.

Lauridsen, S., 2001, Estimation of Value at Risk by Extreme Value Methods, Extremes, pp.

107-144.

Leadbetter, M., G. Lindgren and H. Rootzen, 1983, Extremes and related properties of

random sequences and processes, Springer, Berlin.

Lee T-H. and B. Saltoglu, 2001, Evaluating Predictive Performance of Value-at-Risk Models in

Emerging Markets: A Reality Check, mimeo, Marmara University.

Longin, F., 2000, From Value at Risk to Stress Testing: The Extreme Value Approach,

Journal of Banking and Finance, 24, 1097-1130.

McNeil, A., 1998, Calculating Quantile Risk Measures for Financial Return Series using

Extreme Value Theory, Working Paper, ETH Zürich, Switzerland.

McNeil, A. and T. Saladin, 2000, Developing Scenarios for Future Extreme Losses Using the

POT Model, in P. Embrechts (ed.), Extremes and Integrated Risk Management, RISK

publications.

McNeil, A., and R. Frey, 2000, Estimation of Tail-Related Risk Measures for Heteroscedastic

Financial Time Series: An Extreme Value Approach, Working Paper, ETH Zürich,

Switzerland.

Neftci, S., 2000, Value at Risk Calculations, Extreme Events, and Tail Estimation, Journal

of Derivatives, Spring, 23-38.

Pickands, J., 1975, Statistical inference using extreme order statistics, The Annals of

Statistics, 3, 119-131.

Smith, R.L., 1987, Estimating Tails of Probability Distributions, The Annals of Statistics, 15,

1174-1207.

Xiang, X., 1996, A Kernel estimator of conditional Quantiles Journal of Multivariate Analysis,

59, 206-216.

Footnotes

The authors are grateful to seminar participants at the Athens University of Economics and
Business, the University of Cyprus and an anonymous referee for helpful comments on an
earlier draft. The second author acknowledges support from the Cyprus Research Promotion
Foundation (25/2002). The usual disclaimer applies.
1
The discussion in the text applies to the upper tail that is relevant in the case of short positions. However, if we transform the returns series $Y_t$ into $-Y_t$ then the results for the minimum can be directly deduced from those of the maximum and the whole discussion applies to the lower tails as well (Longin, 2000).
2
Smith, 1987, has also obtained asymptotic normality results for the estimates of $\xi$ and $\beta$ under the weaker assumption that $F_u(y)$ is only approximately GPD.
3
The tail index can alternatively be estimated by means of semi-parametric estimation
techniques such as the Hill estimator. This approach has certain disadvantages when it is
compared to the fully parametric approach based on the GPD of the excess losses (McNeil,
1998).
4
$F \in \mathrm{MDA}(H_\xi) \iff 1 - F(y) = y^{-1/\xi} L(y)$ for a slowly varying function $L$ (Gnedenko, 1943). This result effectively implies that if the tail of the distribution function $F(y)$ decays like a power function, then the distribution is in the domain of attraction of the Fréchet, which is a heavy-tailed distribution.
5
$\theta$ can be interpreted as the reciprocal of the mean cluster size and $n\theta$ as counting the number of pseudo-independent clusters in $n$ observations.
6
The series have been tested for stationarity and the hypothesis for the presence of a unit
root cannot be accepted.
7
The increase in the variability of the MEFs at the higher threshold levels is characteristic of
the technique and it is due to the sparseness of the data in that range.
8
The usual caveats about QQ plots should be mentioned. Even data generated from an
exponential distribution might sometimes show departures from a typical exponential
behavior. In general, the more data we have the clearer the conclusion from the QQ plots.

Appendix A: Conventional methods of VaR estimation

Let $(Y_t)_{t=1}^{n}$ represent identically and independently distributed, i.i.d., daily returns of a financial asset price. We can then define the Value-at-Risk, $\mathrm{VaR}_t(a)$, as the $a$th quantile conditional on the information set $I_{t-1}$, i.e.

$$\Pr\left(Y_t \le \mathrm{VaR}_t(a) \mid I_{t-1}\right) = a. \quad (A1)$$

Suppose now that $(Y_t)_{t=1}^{n}$ follows the stochastic process

$$Y_t = \mu_t + e_t = \mu_t + \sigma_t\,\varepsilon_t, \quad (A2)$$

where $\mu_t = E(Y_t \mid I_{t-1})$, $\sigma_t^2 = E(e_t^2 \mid I_{t-1})$ and $\varepsilon_t = e_t/\sigma_t$ has the conditional distribution function $F_{\varepsilon}(\cdot)$. Under the parametric approach we consider specific distributions for $F_{\varepsilon}(\cdot)$ such as the Gaussian N(0,1), the Student-t and the Generalized Error Distribution (GED). Therefore, $\mathrm{VaR}_t(a)$ is estimated by inverting the distribution function:

$$\mathrm{VaR}_t(a) = \mu_t + F_{\varepsilon}^{-1}(a)\,\sigma_t. \quad (A3)$$

The conditional variance can be estimated by various volatility models such as the moving

average, MA, variance or alternatively by one of the family of GARCH models. In particular,

the standard GARCH(1,1) model is given by:

$$\sigma_t^2 = \alpha_0 + \alpha_1\,(Y_{t-1} - \mu_t)^2 + \beta_1\,\sigma_{t-1}^2, \quad (A4)$$

where $\mu_t = (1/T)\sum_{i=1}^{T} Y_{t-i}$. As a special case we will be concerned with the Exponentially Weighted Moving Average (EWMA) specification, adopted by the RiskMetrics (RM) model of J.P. Morgan, under which

$$\sigma_t^2 = \lambda\,\sigma_{t-1}^2 + (1 - \lambda)(Y_{t-1} - \mu_t)^2. \quad (A5)$$

RiskMetrics has chosen $\lambda = 0.94$ and $\lambda = 0.97$ as the optimal decay factors for daily and monthly data respectively.
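For illustration, the EWMA recursion (A5) combined with the conditional-normal quantile of (A3) can be coded in a few lines; setting the conditional mean to zero and initialising the variance from an early subsample are simplifying assumptions of this sketch.

```python
import numpy as np
from scipy.stats import norm

def ewma_var(returns, lam=0.94, a=0.01):
    """One-step-ahead VaR from the EWMA / RiskMetrics recursion (A5) with a
    conditional normal distribution, as in eq. (A3) with mu_t set to zero."""
    r = np.asarray(returns)
    var = r[:30].var()                         # initialise with an early sample variance
    for y in r:
        var = lam * var + (1 - lam) * y ** 2   # eq. (A5)
    return norm.ppf(a) * np.sqrt(var)          # eq. (A3): mu_t + F^{-1}(a) * sigma_t

rng = np.random.default_rng(5)
sim_returns = 0.01 * rng.standard_normal(850)
print(ewma_var(sim_returns, lam=0.94, a=0.01))   # a negative number: the 1% quantile
```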

The second methodology we explored was based on nonparametric approaches to

model the distribution of returns. Historical simulation makes use of the empirical quantiles of

returns to estimate VaRs for a given confidence level. The critical assumption behind this

approach is that the historical distribution of returns will remain the same over the next

periods. Also, a feature of this method is that extrapolation beyond past observations is

impossible and therefore extreme quantiles cannot be estimated. If a long sample is chosen

instead, then the method is unable to distinguish between high and low volatility periods and

as a result it might generate inaccurate estimates.
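A historical-simulation VaR is simply an empirical quantile of a window of past returns; the short sketch below illustrates this, with the window length and the simulated data being assumptions of the example.

```python
import numpy as np

def hs_var(returns, a=0.01, window=850):
    """Historical-simulation VaR: the empirical a-quantile of the most
    recent `window` returns (no distributional assumption)."""
    recent = np.asarray(returns)[-window:]
    return np.quantile(recent, a)

rng = np.random.default_rng(6)
sim_returns = 0.01 * rng.standard_t(df=4, size=850)
print(hs_var(sim_returns, a=0.01))
```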

With the Monte Carlo, MC, simulation method one simulates the future value of an

asset based on an underlying stochastic process. In this paper the Brownian motion process,

in its discrete time version, has been adopted for the asset price $S$, that is

$$\Delta S_t / S_{t-1} = Y_t = \mu_t\,\Delta t + \sigma_t\,\varepsilon_t\,\sqrt{\Delta t}, \quad (A6)$$

where $\varepsilon_t$ is a zero-mean, unit-variance error term, and $\mu_t$ and $\sigma_t$ are drift and volatility

parameters. In the empirical investigation we have allowed for a possible time-varying

volatility structure where the volatility coefficient is being estimated by GARCH-normal,

GARCH-t, GARCH-GED, RM(0.94) and MA(60).
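A minimal Monte Carlo sketch based on the discretised process (A6) follows; the zero drift and the fixed volatility input (which in the paper would come from one of the GARCH, RM or MA estimates) are assumptions of the example.

```python
import numpy as np

def mc_var(mu, sigma, a=0.01, dt=1.0, n_sims=100_000, seed=0):
    """Monte Carlo VaR from the discrete-time Brownian motion of eq. (A6):
    simulate one-day returns and read off the a-quantile."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_sims)
    simulated_returns = mu * dt + sigma * eps * np.sqrt(dt)   # eq. (A6)
    return np.quantile(simulated_returns, a)

# Example: zero drift and a 1% daily volatility (e.g. a GARCH or EWMA forecast).
print(mc_var(mu=0.0, sigma=0.01, a=0.01))
```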

TABLE 1: VaR estimates, in absolute values, for the DJIA and $CSE indices (11/21/1997-4/11/2001). Method of estimation: Peaks over Threshold (eq. 4). Parameter values: DJIA: u=1.43%, ξ=0.0592, β=0.0078; $CSE: u=2.74%, ξ=0.0879, β=0.0144.

p         DJIA      $CSE
95%       1.99%     3.22%
96%       2.18%     3.55%
97%       2.42%     3.99%
98%       2.77%     4.64%
99%       3.37%     5.80%
99.5%     4.00%     7.01%
99.9%     5.58%     10.16%
99.95%    6.31%     11.66%
99.99%    8.12%     15.50%

TABLE 2: Estimates of the extremal index, θ, using the Block Minima method for the DJIA and $CSE indices (11/21/1997-4/11/2001). Method of estimation: eq. (10).

Case 1: DJIA index

(m, n)               Nu     15     20     25     30     40     50     100    150    Average
(42, 20)  Month  Ku         11     15     16     19     23     24     34     40
                 θ          0.78   0.91   0.79   0.85   0.83   0.65   0.62   0.79   0.78
(14, 60)  Quarter Ku        7      10     10     11     12     13     -      -
                 θ          0.60   0.86   0.75   0.66   0.73   0.66   -      -      0.71

Case 2: $CSE index

(m, n)               Nu     15     20     25     30     40     50     100    150    Average
(42, 20)  Month  Ku         11     12     15     18     20     22     28     40
                 θ          0.90   0.69   0.73   0.76   0.66   0.61   0.45   0.53   0.66
(14, 60)  Quarter Ku        7      7      8      8      9      9      11     13
                 θ          0.73   0.50   0.49   0.42   0.32   0.27   0.21   0.18   0.39

Notation: m, n stand for the number of blocks and the number of days in each block respectively. Nu and Ku stand for the number of exceedences of the threshold u and the number of blocks in which this threshold has been exceeded.

TABLE 3: VaR estimates, in absolute values, and their 95% confidence intervals for the DJIA and $CSE indices (11/21/1997-4/11/2001). Method of estimation: Block Minima (eq. 6 & 9).

CASE 1: DJIA index

Variable             Month                               Quarter
n                    20                                  60
m                    42                                  14
ξ (tail index)       0.138 (-0.131, 0.406)               0.219 (-0.21, 0.657)
σ (scale)            0.0079 (0.0057, 0.01)               0.0081 (0.0042, 0.012)
μ (location)         0.0177 (0.015, 0.02)                0.0262 (0.0213, 0.0311)
Rn,k (VaR 95%)       k=1.81,  VaR=1.86% (1.84, 1.88)     k=1.12,  VaR=2.13% (2.11, 2.15)
Rn,k (VaR 96%)       k=2.12,  VaR=2.05% (2.03, 2.07)     k=1.20,  VaR=2.29% (2.27, 2.31)
Rn,k (VaR 97%)       k=2.64,  VaR=2.30% (2.28, 2.32)     k=1.37,  VaR=2.52% (2.50, 2.54)
Rn,k (VaR 98%)       k=3.69,  VaR=2.66% (2.64, 2.68)     k=1.73,  VaR=2.86% (2.84, 2.88)
Rn,k (VaR 99%)       k=6.89,  VaR=3.32% (3.30, 3.34)     k=2.87,  VaR=3.51% (3.49, 3.53)
Rn,k (VaR 99.5%)     k=13.29, VaR=4.05% (4.03, 4.07)     k=5.20,  VaR=4.27% (4.25, 4.29)
Rn,k (VaR 99.9%)     k=64.57, VaR=6.04% (6.02, 6.06)     k=24.0,  VaR=6.54% (6.52, 6.56)
Rn,k (VaR 99.95%)    k=128.6, VaR=7.00% (6.98, 7.02)     k=47.4,  VaR=7.79% (7.77, 7.81)
Rn,k (VaR 99.99%)    k=641,   VaR=9.76% (9.74, 9.78)     k=235,   VaR=11.55% (11.53, 11.57)

CASE 2: $CSE index

Variable             Month                               Quarter
n                    20                                  60
m                    42                                  14
ξ (tail index)       0.233 (-0.137, 0.602)               -0.134 (-0.669, 0.401)
σ (scale)            0.0163 (0.0112, 0.0214)             0.0252 (0.0134, 0.0369)
μ (location)         0.0232 (0.0171, 0.0292)             0.0438 (0.0281, 0.0594)
Rn,k (VaR 95%)       k=2.03,  VaR=3.13% (3.10, 3.16)     k=1.43,  VaR=3.89% (3.84, 3.94)
Rn,k (VaR 96%)       k=2.40,  VaR=3.56% (3.53, 3.59)     k=1.62,  VaR=4.47% (4.42, 4.52)
Rn,k (VaR 97%)       k=3.02,  VaR=4.14% (4.11, 4.17)     k=1.96,  VaR=5.19% (5.14, 5.24)
Rn,k (VaR 98%)       k=4.27,  VaR=5.03% (5.00, 5.06)     k=2.65,  VaR=6.16% (6.11, 6.21)
Rn,k (VaR 99%)       k=8.05,  VaR=6.74% (6.71, 6.77)     k=4.77,  VaR=7.68% (7.63, 7.73)
Rn,k (VaR 99.5%)     k=15.62, VaR=8.75% (8.72, 8.78)     k=9.03,  VaR=9.06% (9.01, 9.11)
Rn,k (VaR 99.9%)     k=76.22, VaR=14.87% (14.84, 14.90)  k=43.21, VaR=11.81% (11.76, 11.86)
Rn,k (VaR 99.95%)    k=152,   VaR=18.30% (18.27, 18.33)  k=85.94, VaR=12.82% (12.77, 12.87)
Rn,k (VaR 99.99%)    k=758,   VaR=28.70% (28.67, 28.73)  k=427,   VaR=14.83% (14.78, 14.88)

Notation: n and m stand for the number of days in each block and the number of blocks in our sample respectively. The estimated VaR will be exceeded once every k blocks. Confidence intervals have been estimated using the asymptotic χ² distribution of the LR test (Lauridsen, 2001).

TABLE 4: VaR (%) estimates, in absolute values, for the DJIA and $CSE indices on 4/12/2001. Various methods of estimation.

Index DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE
Percentage 95% 96% 97% 98% 99% 99.5% 99.9% 99.95% 99.99%
RM(0.94) 3.02 3.25 3.21 3.46 3.45 3.72 3.77 4.06 4.27 4.59 4.72 5.09 5.67 6.1 6.04 6.5 6.82 7.35
MA(20) 3.11 3.26 3.31 3.47 3.56 3.73 3.89 4.07 4.41 4.61 4.88 5.1 5.86 6.12 6.23 6.52 7.05 7.37
MA(60) 2.43 3.17 2.59 3.38 2.78 3.63 3.04 3.96 3.45 4.49 3.82 4.97 4.58 5.96 4.87 6.35 5.51 7.18

GARCH(GED) 3.0 3.15 3.19 3.35 3.43 3.6 3.75 3.93 4.25 4.45 4.7 4.92 5.64 5.91 6.0 6.3 6.79 7.11

GARCH(N) 3.03 3.18 3.22 3.39 3.47 3.64 3.78 3.97 4.29 4.5 4.75 4.99 5.7 5.99 6.06 6.37 6.85 7.2

GARCH(t) 2.96 3.0 3.15 3.2 3.39 3.43 3.7 3.75 4.19 4.25 4.64 4.7 5.57 5.64 5.93 6.01 6.7 6.78

HS 2.06 3.25 2.21 3.53 2.37 3.99 2.64 4.52 3.19 5.89 3.74 6.55 5.94 10.47 6.26 10.75 6.51 10.97

MC-RM(0.94) 3.026 3.26 3.19 3.47 3.3 3.58 3.64 3.85 4.31 4.5 4.69 4.85 5.5 5.13 6.21 5.17 6.77 5.21

MC-MA(60) 2.53 3.3 2.7 3.54 2.83 3.72 3.17 4.05 3.58 4.29 3.93 4.92 4.9 5.85 5.05 6.0 5.17 6.16

MC-GARCH(GED) 3.05 2.99 3.3 3.2 3.44 3.5 3.6 3.83 4.32 4.53 4.8 5.0 5.17 6.47 5.37 6.76 5.52 6.99

MC-GARCH(N) 3.18 3.34 3.3 3.44 3.54 3.71 3.82 4.03 4.27 4.74 4.62 5.06 5.42 6.45 5.67 6.79 5.88 7.06

MC-GARCH(t) 3.0 3.01 3.27 3.2 3.65 3.4 3.93 3.65 4.4 4.37 4.57 4.82 4.88 5.5 5.35 5.64 5.72 5.75

EVT-POT 1.99 3.22 2.18 3.55 2.42 3.99 2.77 4.64 3.37 5.8 4.0 7.01 5.58 10.16 6.31 11.66 8.12 15.5

EVT-BM(60) 2.13 3.89 2.29 4.47 2.52 5.19 2.86 6.16 3.51 7.68 4.27 9.06 6.54 11.81 7.79 12.82 11.5 14.83

Notation: RM=Riskmetrics, MA(60)=60 days moving average, GED=Generalized Error Distribution, MC=Monte-Carlo,
HS=Historical Simulation, POT=Peaks-over-Threshold, BM(60)=60 days Blocks

TABLE 5: Likelihood ratio test statistics for the unconditional (LRuc) and conditional (LRcc) out-of-sample performance of various models, as well as of the serial independence of the exceptions (LRind). Backtesting period: 4/12/2001-4/19/2002.

% 0.95 0.96 0.97 0.98 0.99


Model
Test LRuc LRind LRcc LRuc LRind LRcc LRuc LRind LRcc LRuc LRind LRcc LRuc LRind LRcc

$CSE 0.8096 0.0178 0.8274 0.7388 0.0029 0.7417 1.3357 0.0045 1.3402 2.4849 0.0058 2.4907 5.3163 0.0067 5.3231
RM(0.94)

DJIA 0.0051 0.0029 0.0081 0.0041 0.0052 0.0093 0.6800 0.0052 0.6852 2.4849 0.0000 2.4822 1.8573 0.0000 1.8573

$CSE 1.3562 0.0168 1.3730 3.9587 0.0168 3.9755 5.7209 0.0010 5.7220 8.7808 0.0029 8.7838 9.9666 0.0058 9.9723
MA(20)

DJIA 0.1252 0.0020 0.1272 0.0638 0.0045 0.0683 0.2327 0.0000 0.2327 2.4849 0.0027 2.4822 5.3163 0.0000 5.3163

$CSE 0.3965 0.0187 0.4152 0.3137 0.0210 0.3347 0.2327 0.0058 0.2385 1.4370 0.0000 1.4370 7.5121 0.0063 7.5184
MA(60)

DJIA 0.0051 0.0029 0.0081 0.3137 0.0038 0.3175 0.2327 0.0058 0.2385 0.1535 0.0000 0.1535 0.7100 0.0000 0.7100

$CSE 0.8096 0.0267 0.8363 2.0646 0.0187 2.0833 2.1818 0.0210 2.2028 3.7636 0.0052 3.7688 3.4154 0.0071 3.4224
GARCH(GED)

DJIA 0.0473 0.0038 0.0511 0.1529 0.0000 0.1529 0.0163 0.0021 0.0142 1.4370 0.0000 1.4370 0.7100 0.0000 0.7100

$CSE 1.3562 0.0257 1.3819 1.3260 0.0196 1.3456 3.2029 0.0203 3.2232 2.4849 0.0058 2.4907 3.4154 0.0071 3.4224
GARCH(N)

DJIA 0.0473 0.0038 0.0511 0.1529 0.0000 0.1529 0.0163 0.0000 0.0163 1.4370 0.0000 1.4370 0.7100 0.0000 0.7100

$CSE 0.3965 0.0187 0.4152 0.3137 0.0124 0.3261 0.0163 0.0063 0.0225 1.4370 0.0063 1.4433 1.8573 0.0000 1.8573
GARCH(t)

DJIA 0.0473 0.0038 0.0511 0.1529 0.0000 0.1529 0.0163 0.0000 0.0163 1.4370 0.0000 1.4370 0.7100 0.0000 0.7100

$CSE 0.2646 0.0045 0.2691 0.1529 0.0058 0.1587 1.0756 0.0074 1.0830 0.2613 0.0076 0.2689 0.1294 0.0079 0.1373
HS

DJIA 2.1351 0.0000 2.1351 2.1041 0.0000 2.1041 1.0756 0.0000 1.0756 1.0338 0.0000 1.0338 0.1294 0.0000 0.1294

$CSE 0.8096 0.0178 0.8274 0.7388 0.0029 0.7417 2.1818 0.0038 2.1855 2.4849 0.0058 2.4907 5.3163 0.0067 5.3231
MC-RM(0.94)

DJIA 0.0051 0.0029 0.0081 0.0638 0.0045 0.0683 0.6800 0.0052 0.6852 1.4370 0.0000 1.4370 1.8573 0.0000 1.8573

$CSE 0.1252 0.0196 0.1447 0.0041 0.0052 0.0093 0.6800 0.0052 0.6852 2.4849 0.0058 2.4907 7.5121 0.0063 7.5184
MC-MA(60)

DJIA 0.1252 0.0020 0.1272 0.0638 0.0045 0.0683 0.2327 0.0058 0.2385 0.1535 0.0000 0.1535 0.7100 0.0000 0.7100

$CSE 0.8096 0.0178 0.8274 0.7388 0.0203 0.7591 1.3357 0.0216 1.3573 3.7636 0.0052 3.7688 7.5121 0.0063 7.5184
MC-GARCH(GED)

DJIA 0.0473 0.0038 0.0511 0.5326 0.0000 0.5326 0.0163 0.0000 0.0163 1.4370 0.0000 1.4370 0.7100 0.0000 0.7100

$CSE 0.8096 0.0178 0.8274 0.7388 0.0203 0.7591 3.2029 0.0203 3.2232 5.2508 0.0216 5.2724 3.4154 0.0071 3.4224
MC-GARCH(N)

DJIA 0.2646 0.0045 0.2691 0.1529 0.0000 0.1529 0.0163 0.0000 0.0163 1.4370 0.0000 1.4370 0.7100 0.0000 0.7100

$CSE 0.8096 0.0267 0.8363 0.7388 0.0116 0.7504 0.0163 0.0063 0.0225 1.4370 0.0063 1.4433 1.8573 0.0074 1.8647
MC-GARCH(t)

DJIA 0.2646 0.0045 0.2691 0.1529 0.0000 0.1529 0.2327 0.0000 0.2327 1.4370 0.0000 1.4370 0.7100 0.0000 0.7100

$CSE 0.2646 0.0045 0.2691 0.5326 0.0063 0.5388 1.0756 0.0074 1.0830 0.2613 0.0076 0.2689 0.1294 0.0079 0.1373
EVT-POT

DJIA 1.2882 -0.0027 1.2855 1.1710 0.0000 1.1710 1.0756 0.0000 1.0756 1.0338 0.0003 1.0338 0.1294 0.0000 0.1294

$CSE 6.3844 0.0074 6.3918 5.0670 0.0076 5.0746 2.1663 0.0076 2.1739 2.4939 0.0079 2.5018 1.2373 0.0000 1.2373
EVT-BM(60)

DJIA 0.2646 0.0045 0.2691 0.1529 0.0058 0.1587 1.0756 0.0000 1.0756 0.2613 0.0000 0.2613 0.1294 0.0000 0.1294
TABLE 5: (continued)

% 0.995 0.999 0.9995 0.9999


Model
Test LRuc LRind LRcc LRuc LRind LRcc LRuc LRind LRcc LRuc LRind LRcc

$CSE 12.5217 0.0067 12.5284 9.3303 0.0000 9.3303 2.3773 0.0000 2.3772 0.0510 0.0000 0.0510
RM(0.94)

DJIA 3.7262 0.0000 3.7262 1.2452 0.0000 1.2451 2.3773 0.0000 2.3772 5.3929 0.0000 5.3929

$CSE 16.1138 0.0063 16.1200 9.3303 0.0000 9.3303 13.2370 0.0000 13.2370 5.3929 0.0000 5.3929
MA(20)

DJIA 6.2699 0.0000 6.2699 4.7605 0.0000 4.7605 2.3773 0.0000 2.3772 5.3929 0.0000 5.3929

$CSE 3.7262 0.0076 3.7338 9.3303 0.0078 9.3381 2.3773 0.0000 2.3772 5.3929 0.0000 5.3929
MA(60)

DJIA 1.6958 0.0000 1.6958 4.7605 0.0000 4.7605 7.2799 0.0000 7.2799 13.5152 0.0000 13.5152

$CSE 3.7262 0.0000 3.7262 9.3303 0.0000 9.3303 7.2799 0.0000 7.2799 0.0510 0.0000 0.0510
GARCH(GED)

DJIA 0.3529 0.0000 0.3529 1.2452 0.0000 1.2451 2.3773 0.0000 2.3772 0.0510 0.0000 0.0510

$CSE 1.6958 0.0000 1.6958 9.3303 0.0000 9.3303 7.2799 0.0000 7.2799 0.0510 0.0000 0.0510
GARCH(N)

DJIA 0.3529 0.0000 0.3529 1.2452 0.0000 1.2451 2.3773 0.0000 2.3772 0.0510 0.0000 0.0510

$CSE 3.7262 0.0000 3.7262 9.3303 0.0000 9.3303 7.2799 0.0000 7.2799 0.0510 0.0000 0.0510
GARCH(t)

DJIA 0.3529 0.0000 0.3529 1.2452 0.0000 1.2451 2.3773 0.0000 2.3772 0.0510 0.0000 0.0510

$CSE 0.0644 0.0000 0.0644 0.5103 0.0000 0.5103 0.2551 0.0000 0.2551 0.0510 0.0000 0.0510
HS

DJIA 0.3529 0.0000 0.3529 0.5103 0.0000 0.5103 0.2551 0.0000 0.2551 0.0510 0.0000 0.0510

$CSE 9.2243 0.0071 9.2314 14.5876 0.0000 14.5876 19.8816 0.0000 19.8816 32.5562 0.0000 32.5562
MC-RM(0.94)

DJIA 3.7262 0.0000 3.7262 9.3303 0.0000 9.3303 2.3773 0.0000 2.3772 5.3929 0.0000 5.3929

$CSE 9.2243 0.0071 9.2314 9.3303 0.0078 9.3381 7.2799 0.0079 7.2878 5.3929 0.0000 5.3929
MC-MA(60)

DJIA 1.6958 0.0000 1.6958 4.7605 0.0000 4.7605 7.2799 0.0000 7.2799 13.5152 0.0000 13.5152

MC- $CSE 3.7262 0.0000 3.7262 9.3303 0.0000 9.3303 13.2370 0.0000 13.2370 13.5152 0.0000 13.5152
GARCH(GED)
DJIA 0.3529 0.0000 0.3529 1.2452 0.0000 1.2451 2.3773 0.0000 2.3772 5.3929 0.0000 5.3929

MC- $CSE 9.2243 0.0071 9.2314 4.7605 0.0000 4.7605 7.2799 0.0000 7.2799 5.3929 0.0000 5.3929
GARCH(N)
DJIA 1.6958 0.0000 1.6958 4.7605 0.0000 4.7605 7.2799 0.0000 7.2799 5.3929 0.0000 5.3929

$CSE 3.7262 0.0000 3.7262 9.3303 0.0000 9.3303 13.2370 0.0000 13.2370 22.6920 0.0000 22.6920
MC-GARCH(t)

DJIA 0.3529 0.0000 0.3529 4.7605 0.0000 4.7605 2.3773 0.0000 2.3772 5.3929 0.0000 5.3929

$CSE 0.0644 0.0000 0.0644 0.5103 0.0000 0.5103 0.2551 0.0000 0.2551 0.0510 0.0000 0.0510
EVT-POT

DJIA 0.3529 0.0000 0.3529 0.5103 0.0000 0.5103 0.2551 0.0000 0.2551 0.0510 0.0000 0.0510

$CSE 0.0644 0.0000 0.0644 0.5103 0.0000 0.5103 0.2551 0.0000 0.2551 0.0510 0.0000 0.0510
EVT-BM(60)

DJIA 0.3529 0.0000 0.3529 0.5103 0.0000 0.5103 0.2551 0.0000 0.2551 0.0510 0.0000 0.0510
Notes: Bold typed numbers indicate significance at the 95% level. LRuc (LRind) and LRcc are χ²(1) and χ²(2) distributed respectively.

TABLE 6: Number of exceedences, F, and 95% LRuc non-rejection confidence regions for the DJIA and the $CSE indices (eq. 14). Backtesting sample period: 4/12/2001-4/19/2002 (255 observations).

Index DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE DJIA $CSE
Percentage 95% 96% 97% 98% 99% 99.5% 99.9% 99.95% 99.99%
Failures (LRuc) 6<F<21 4<F<17 2<F<14 1<F<11 F<7 F<5 F<2 F<2 F<1
RM(0.94) 13 16 10 13 10 11 9 9 5 7 4 7 1 3 1 1 1 0
MA(20) 14 17 11 17 9 15 9 13 7 9 5 8 2 3 1 3 1 1
MA(60) 13 15 12 12 9 9 6 8 4 8 3 4 2 3 2 1 2 1

GARCH(GED) 12 16 9 15 8 12 8 10 4 6 2 4 1 3 1 2 0 0

GARCH(N) 12 17 9 14 8 13 8 9 4 6 2 3 1 3 1 2 0 0

GARCH(t) 12 15 9 12 8 8 8 8 4 5 2 4 1 3 1 2 0 0

HS 8 11 6 9 5 5 3 4 2 2 2 1 0 0 0 0 0 0

MC-RM(0.94) 13 16 11 13 10 12 8 9 5 7 4 6 3 4 1 4 1 4

MC-MA(60) 14 14 11 10 9 10 6 9 4 8 3 6 2 3 2 2 2 1
MC- 12 16 8 13 8 11 8 10 4 8 2 4 1 3 1 3 1 2
GARCH(GED)
MC- 11 16 9 13 8 13 8 11 4 6 3 6 2 2 2 2 1 1
GARCH(N)
MC- 11 16 9 13 9 8 8 8 4 5 2 4 2 3 1 3 1 3
GARCH(t)
EVT-POT 9 11 7 8 5 5 3 4 2 2 2 1 0 0 0 0 0 0

EVT-BM(60) 11 5 9 4 5 4 4 2 2 1 2 1 0 0 0 0 0 0

Notes: Numbers in italics indicate statistically significant overestimation or underestimation of risk. F is the number of failures that could be observed without rejecting the null hypothesis that the models are correctly calibrated at the 95% level of confidence.

Figure 1: Sample mean excess function for the DJIA index (11/21/1997-4/11/2001); mean excess plotted against the threshold.

Figure 2: Sample mean excess function for the $CSE index (11/21/1997-4/11/2001); mean excess plotted against the threshold.

Figure 3: Fit of the estimated Generalized Pareto function for the DJIA; Fu(y-u) plotted against y (log scale).

Figure 4: Fit of the estimated Generalized Pareto function for the $CSE; Fu(y-u) plotted against y (log scale).

Figure 5: QQ plot of residuals from the fitted GPD against the exponential distribution (DJIA).

Figure 6: QQ plot of residuals from the fitted GPD against the exponential distribution ($CSE).

Figure 7: QQ plot of residuals from the fitted GEV against the exponential distribution (DJIA).

Figure 8: QQ plot of residuals from the fitted GEV against the exponential distribution ($CSE).

Figure 9: DJIA returns, negated, and VaR (99%) estimates (4/12/2001-4/19/2002). Series plotted: Returns, GARCH(1,1), RM(0.94), EVT-BM(60).

Figure 10: $CSE returns, negated, and VaR (99%) estimates (4/12/2001-4/19/2002). Series plotted: Returns, GARCH(1,1), RM(0.94), EVT-BM(60).

