The GARCH Model and its Applications to VaR


1 The GARCH Model and its Applications to VaR
I beg your pardon for my poor English; for that reason, if you want to ask questions, I would prefer that you write them down. The purpose of this paper is the application of GARCH models to forecast volatility and, in consequence, to obtain a better approach for the estimation of VaR. This work includes a presentation of the results obtained with the use of GARCH models. Ricardo A. Tagliafichi

2 The presence of volatility in asset returns
Selection of a portfolio with models such as CAPM or APT. The estimation of the Value at Risk of a portfolio. The estimation of derivative premiums. We know the importance of volatility in asset returns; I define volatility as the star of the random variables. The use of the Capital Asset Pricing Model and the Arbitrage Pricing Theory, which rely on the matrix of variances and covariances to obtain the best portfolio selection, depends on the volatility forecast. The confidence interval for the estimation of VaR also includes a volatility forecast in order to obtain a good approximation. And when we purchase a derivative, what do we purchase? We purchase implied volatility! With a better measure of future volatility we can better estimate derivative premiums.

3 The classic hypothesis
The capital markets are perfect and have continuously compounded returns defined by: Rt = Ln(Pt) - Ln(Pt-1). These returns are identically distributed and, applying the Central Limit Theorem, the returns are n.i.d. The classical hypothesis had already been attacked before the Black Friday of 1986, with papers presented around 1976 by Mandelbrot applying fractal geometry to asset returns: in reality the market does not fulfil the behavioural requirements of a perfect market. The returns Rt, Rt-1, Rt-2, ..., Rt-n do not have any relationship among them; for this reason there is the presence of a Random Walk.
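The returns defined above are straightforward to compute; a minimal sketch in Python, not part of the original presentation, with numpy assumed and a hypothetical price series:

```python
import numpy as np

# Hypothetical daily closing prices of an asset
prices = np.array([100.0, 101.2, 100.7, 102.3, 101.9, 103.5])

# Continuously compounded returns: Rt = Ln(Pt) - Ln(Pt-1)
returns = np.diff(np.log(prices))
print(returns)
```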

4 The great questions that arise from perfect markets and the random walk
ρs = 0. The propositions of a perfect market say: 1) today's return does not have any relation with the returns of past days, and for that reason the returns are considered independent variables; 2) all the returns are identically distributed, and in consequence we say that they are i.i.d.; 3) by the central limit theorem, and given the previous considerations, the returns are normally and identically distributed, and for that reason we annualize the volatility in this way: sn = st (n/t)^0.5

5 The periodic structure of volatility: Merval Index
(Chart: difference between the volatility annualized by the square-root-of-time rule and the volatility actually measured over horizons of 2, 4, 8, 16, 32, 64, 128 and 256 days.) If we consider the traditional way to annualize a volatility and compare it with the real volatility estimated for different periods, using overlapping observations (for example, in the study of 5-day periods we take the returns built from P5...P1, P6...P2, etc.), we observe the differences shown in the slide. In consequence, the fitted exponent x is the speed at which the volatility increases with the horizon.

6 The memory of a process: The Hurst exponent
The Hurst coefficient or exponent is a number related to the probability that an event is autocorrelated. That is to say, there are events that condition the appearance of another similar event at some later moment. This coefficient was born of the observations carried out by the engineer Hurst when analyzing the flows of the Nile River for the location of the Aswan dam: he found that some overflows were followed by other overflows, and for that reason he built this coefficient starting from the following equation.

7 The meaning of H
The coefficient may take values between 0 and 1. If H = 0.50, the process is formed by independent random variables with finite variance.
0.50 < H < 1 implies that the series is persistent; a series is persistent when it is characterized by a long memory of its process. A persistent series is called a black noise: if the series has increased in the past period, the chances are that it will continue to increase in the next period, and in terms of the law of T to the ½ the variable covers more distance than a random walk process. Persistent time series are the most common type found in nature.
0 < H < 0.50 means that the series is antipersistent: it reverses itself more often than a random series would. An antipersistent series is called a pink noise: if the series had been up in the previous period, there is a chance that it will be down in the next periods, and vice versa; the variable then covers less distance than a random walk process.

8 The coefficient R/Sn
The construction of this coefficient does not require any Gaussian process, nor does it require any parametric assumption. The series is separated into small sub-periods, beginning for example with 10 observations, inside the total series, until arriving at sub-periods that contain at most half of the data analyzed. We call n the number of observations analyzed in each sub-period, Rn = max(Yt..Yn) - min(Yt..Yn), and R/Sn = average of Rn / average of Sn, where Sn is the volatility of the sub-period.
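A sketch of the R/S computation described on this slide, in Python with numpy; the function names and the simulated series are illustrative, and the block splitting and choice of sub-period lengths are one reasonable reading of the description above:

```python
import numpy as np

def rs_statistic(x, n):
    """Average rescaled range R/S over all sub-periods of length n."""
    n_blocks = len(x) // n
    rs_values = []
    for i in range(n_blocks):
        block = x[i * n:(i + 1) * n]
        y = np.cumsum(block - block.mean())   # cumulative deviations from the block mean
        r = y.max() - y.min()                 # range Rn
        s = block.std()                       # volatility Sn of the sub-period
        if s > 0:
            rs_values.append(r / s)
    return np.mean(rs_values)

def hurst_exponent(x, min_n=10):
    """Estimate H as the slope of Ln(R/S)n against Ln(n)."""
    ns = list(range(min_n, len(x) // 2 + 1, 10))   # sub-periods up to half of the data
    log_rs = np.log([rs_statistic(x, n) for n in ns])
    h, const = np.polyfit(np.log(ns), log_rs, 1)
    return h, const

# Illustrative use on simulated i.i.d. returns (H should be close to 0.50)
rng = np.random.default_rng(0)
returns = rng.normal(0, 1, 2000)
print(hurst_exponent(returns))
```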

9 Some results of the coefficient H
Dow Jones Index: Coeff. H = 0.628, S.E. = 0.011, R squared = 0.974, Const. = -0.617. (Chart: regression of Ln(R/S)n on Ln(n).)
Applying the Hurst coefficient to the Dow Jones Index, the result is that there is persistence in the returns and the coefficient H is different from 0.50, because it takes a value of 0.628 with a standard error of 0.011; an interval with 95% confidence around this value does not include 0.50.

10 Some results of the coefficient H
Merval Index: Coeff. H = 0.589, S.E. = 0.006, R squared = 0.987, Const. = -0.184.
Applying the Hurst coefficient to the Merval Index (the index of the Buenos Aires Stock Exchange in Argentina) gives a similar analysis to that of the Dow Jones. The result is that there is persistence in the returns and the coefficient H is different from 0.50, because it takes a value of 0.589 with a standard error of 0.006; an interval with 96% confidence around this value does not include 0.50.

11 The conclusions of the use of H
The series present coefficients H above 0.50, which indicates the presence of persistence in the series. Using the properties of the R/Sn coefficient we can observe the presence of cycles, confirmed by the use of the FFT and its significance tests. It is tempting to use the Hurst exponent to estimate the variance in annual terms, as follows:
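The formula the slide refers to is not reproduced in the transcript; one common way to use H for this purpose, stated here as an assumption rather than as the author's exact formula, is the scaling rule sn = s1 · n^H instead of s1 · n^0.5. A minimal sketch:

```python
daily_vol = 0.02   # hypothetical daily volatility
H = 0.589          # Hurst exponent estimated for the series (Merval value from slide 10)
n = 252            # trading days in a year

annual_vol_sqrt_t = daily_vol * n ** 0.5   # classical law of T to the 1/2
annual_vol_hurst = daily_vol * n ** H      # scaling with the Hurst exponent
print(annual_vol_sqrt_t, annual_vol_hurst)
```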

12 The market performance
Assets: Merval, Siderca, Bono Pro 2, Global 2027. Periods: 90-94, 95-00, 99-00.
Obs.: 1222, 1500, 1371, 504
Mean: 0.109, 0.015, 0.199, 0.057, 0.0511, -0.022
Volatility: 3.566, 2.322, 4.314, 3.107, 1.295, 1.1694
Skewness: 0.739, -0.383, 0.823, -0.32, -0.146, -0.559
Kurtosis: 7.053, 8.020, 7.204, 7.216, 33.931, 21.971
Maximum: 24.40, 12.08, 26.02, 17.98, 14.46, 9.39
Minimum: -13.52, -14.76, -18.23, -21.3, -11.78, -9.946
I analyze the performance of Argentine stocks, but in general all stocks take values in the same sense as the Argentine ones. In the table the skewness and kurtosis (marked in red on the slide) are very far from those of a normal distribution.

13 The market performance
... are the returns n.i.d.? The K-S test, P(Dn < en,0.95) = 0.95, used to test whether the series is n.i.d., shows the following results:
Asset           Observations   Dn
Merval Index    2722           0.0844
Siderca                        0.0658
Bono Pro2       1371           0.2179
Bono Global     504            0.2266
Another analysis to carry out is to test whether the returns have a normal distribution, that is to say, whether the returns are identically distributed. For this purpose the series have undergone a goodness-of-fit test based on the Kolmogorov-Smirnov distribution. The table was built from this test, where Dn is the maximum difference between the values of the theoretical distribution and the values of the empirical distribution, and it is compared with en,0.95 = 1.36 / n^0.5 (1.36 being the standard asymptotic Kolmogorov-Smirnov critical coefficient at the 95% level). I cannot say that all polar bears are white, but all the polar bears that I have seen are white.
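A sketch of this goodness-of-fit check using scipy's Kolmogorov-Smirnov test against a normal distribution fitted to the sample; the return series below is simulated and only illustrates the mechanics:

```python
import numpy as np
from scipy import stats

# Hypothetical fat-tailed daily returns
returns = np.random.default_rng(1).standard_t(df=4, size=1371)

# Dn: maximum difference between the empirical distribution and the fitted normal
dn, p_value = stats.kstest(returns, 'norm', args=(returns.mean(), returns.std()))

# Asymptotic 95% critical value e(n, 0.95) = 1.36 / n^0.5
critical = 1.36 / len(returns) ** 0.5
print(dn, critical, dn > critical)
```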

14 The independence of returns
The autocorrelation function is the relationship between the stock's returns at different lags. The Ljung-Box or Q-statistic at lag 10, calculated for each one of the series, supports the idea that there is correlation among the data, and this correlation contradicts the classic hypothesis that markets are efficient. This statistic, developed by Ljung and Box, is useful to determine whether there is autocorrelation among the values of the returns.
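The Q-statistic formula itself is not reproduced in the transcript; the sketch below computes the standard Ljung-Box statistic, Q = n(n+2) Σ ρk² / (n-k), directly with numpy and scipy, on an illustrative series:

```python
import numpy as np
from scipy import stats

def ljung_box_q(x, max_lag=10):
    """Ljung-Box Q at max_lag, compared with a chi-square with max_lag degrees of freedom."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom     # sample autocorrelation at lag k
        q += rho_k ** 2 / (n - k)
    q *= n * (n + 2)
    p_value = stats.chi2.sf(q, df=max_lag)
    return q, p_value

# Illustrative use on hypothetical returns
returns = np.random.default_rng(2).normal(size=1500)
print(ljung_box_q(returns, 10))
```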

15 The test of hypothesis Ho: ρ1 = .... = ρ10 = 0, H1: some ρk ≠ 0
Using the Q-statistic developed by Ljung and Box we can test the hypotheses shown in the slide. With a 5% alpha error for the first 10 lags, the critical value is the chi-square value with 10 degrees of freedom, 18.307.

16 Hurst coefficient and Ljung Box Q-Statistic
Series              Q-statistic for k = 10   Hurst coefficient
Dow Jones           33.205                   0.628
Merval              52.999                   0.589
Siderca             51.157                   0.787
Pro 2 in dollars    46.384                   0.782
I reject the null hypothesis and accept the alternative because all the Q values are greater than the critical value of 18.307. These results are confirmed by the Hurst coefficients. In consequence the series are not white noise: there is correlation between the returns in the first 10 lags.

17 The convertibility effect
Different crises were endured until the change of government and the obtaining of the financial shield ("blindaje") from the IMF. This chart shows the returns of the Merval Index from 1990 to the end of 2000. I divide it into two parts. The first part reflects the effect of the Cavallo plan, the convertibility of 1 peso to 1 US dollar during the Menem administration, with positive shocks up to November 1994. The second part goes from December 1994 to December 2000: the effect of the different crises (the rice effect, the vodka effect, the caipirinha effect) and the announcements of the several candidates for the 1999 presidential elections gave this part important negative shocks.

18 In the same sense we can observe the volatilities calculated as the square root of the daily variances. At a quick glance we see that long periods of great volatility are followed by periods of tranquility. We remember the Bible: seven years of fat cows are followed by seven years of thin cows.

19 Applying fractal and statistical analysis we can say....
1) The series of returns are not n.i.d. 2) Some ρs ≠ 0. 3) σt ≠ σ1 · t^0.5. 4) The values of kurtosis and skewness in the series denote the presence of heteroskedasticity.
The series of returns are not normally and identically distributed. Some estimators of the autocorrelation are not null, which is evidence that the returns are not independent. To annualize the volatility of an asset we cannot use the law of T to the ½. The values of skewness and kurtosis show that the distribution has a great concentration near the mean together with values that exceed three deviations from the mean, producing the effect of leptokurtosis with elevated tails.

20 Traditional econometrics assumed:
The variance of the errors is a constant. The owner of a bond or a stock should be interested in the prediction of the volatility during the period in which he will hold the asset. The problem when we use traditional econometric models is the treatment of the errors: the errors and the volatility are calculated as an unconditional process, and to obtain a reliable value of the traditional volatility we must use a great quantity of data. The owner of a portfolio, however, is interested in the recent past and wants to know what will happen in the next few days.

21 The ARCH model .... We can estimate the best model to predict a variable, such as a regression model or an ARIMA model, and in each model we obtain a residual series et. Following traditional econometrics, we must estimate the best regression or ARIMA model, one in which the autocorrelation function and the partial autocorrelation function of the residuals are a white noise. Afterwards we must analyze the autocorrelation function and the partial autocorrelation function of the squared residuals of that best model.

22 Autoregressive Conditional Heteroskedastic
Engle 1982, ARCH (q). In the model presented by Engle in 1982 he introduced the concept of heteroskedasticity and the way to deal with its presence. Here the concept is that the squared error of today is a function of the squared errors of the past; this assumption is based on the fact that the autocorrelation function of the squared errors is a black noise. To calculate the estimators of the regression or of the ARIMA model, we build a log-likelihood function that includes the estimators of the variance equation.
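The ARCH(q) equation on the slide is not included in the transcript; the standard Engle (1982) specification, written in the notation used elsewhere in this presentation, is:

s²t = w + a1·e²t-1 + a2·e²t-2 + ... + aq·e²t-q

so that today's conditional variance is a function of the q previous squared errors.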

23 Generalized Autoregressive Conditional Heteroskedastic
Bollerslev 1986. Tim Bollerslev in 1986 added to the ARCH model the modelling of ht (the conditional variance of the ARCH model) and presented the Generalized Autoregressive Conditional Heteroskedastic model, GARCH (q,p). In the same way, the squared error of today is expressed as a function of the past squared errors and of the past estimates of this variance (it is similar to an ARMA model).
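The GARCH equation itself is not in the transcript; the standard Bollerslev (1986) GARCH(q,p) specification, in the same notation, is:

s²t = w + a1·e²t-1 + ... + aq·e²t-q + b1·s²t-1 + ... + bp·s²t-p

and in the GARCH(1,1) case used later in this presentation: s²t = w + a·e²t-1 + b·s²t-1.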

24 A simple prediction of volatility with the ARCH model
Where: s²t = variance at day t; (Rt-1 - R̄) = deviation from the mean at day t-1.

25 If we regress the series on a constant….
Rt = c + et, where c = constant, the mean of the series, and et = deviation at time t. ...if the series of squared deviations e²t is a black noise, then there is a presence of ARCH.

26 The ACF and the PACF of the e²t series
The Ljung-Box or Q-statistic at lag 10 for the squared residuals:
Series: Merval, Siderca, Global 2017. Periods: 01/90-11/94, 12/94-12/00, from 11/98.
Q-statistics: 359.48, 479.52, 477.93, 392.65, 151.35
These are the results of applying the process described above to Argentine assets. In all cases, as with the American market, the variances are correlated, as we can observe from the results in the slide.

27 How to model the volatility
With the presence of a black noise, and analyzing the ACF and PACF using the same considerations as for an ARMA process, we can identify a model to predict the volatility.

28 Using all the data of the decade I model the volatility with a GARCH (1,1)
As we can observe, the white line is the GARCH (1,1) forecast, which follows the real volatility.
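A minimal sketch, not the author's own estimation code, of fitting a GARCH(1,1) by maximizing the Gaussian log-likelihood mentioned on slides 22 and 23; only numpy and scipy are used, and the starting values and simulated returns are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) for demeaned returns r."""
    w, a, b = params
    h = np.empty(len(r))
    h[0] = r.var()                           # start the recursion at the sample variance
    for t in range(1, len(r)):
        h[t] = w + a * r[t - 1] ** 2 + b * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(h) + r ** 2 / h)

def fit_garch11(returns):
    """Estimate (w, alpha, beta) by maximum likelihood with scipy's L-BFGS-B."""
    r = np.asarray(returns) - np.mean(returns)
    x0 = np.array([0.1 * r.var(), 0.1, 0.8])             # illustrative starting values
    bounds = [(1e-8, None), (0.0, 0.999), (0.0, 0.999)]  # keep the variance positive
    res = minimize(garch11_neg_loglik, x0, args=(r,), method="L-BFGS-B", bounds=bounds)
    return res.x

# Illustrative use on simulated returns; real data would replace this series
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 1.0, 2500)
w_hat, a_hat, b_hat = fit_garch11(returns)
print(w_hat, a_hat, b_hat)
```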

29 The GARCH (1,1). This model was used with great success previous to the "tequila effect" or Mexican crisis.

30 Some results of GARCH (1,1) applied to Merval Index
Parameter   90-00            90-94           95-00           98-00
w           0.125 (0.0019)   0.088 (0.030)   0.203 (0.035)   0.503 (0.138)
a           0.141 (0.0090)   0.137 (0.018)   0.152 (0.012)   0.122 (0.020)
b           0.847            0.862 (0.016)   0.814 (0.015)   0.760 (0.040)
P(Q8)       0.516            0.774           0.779           0.757
(standard errors in parentheses)
The same series gives different results in different periods. This expresses a fundamental concept: the whole history of the financial series does not matter, we are only interested in recent facts. This concept is easier to explain to an operator of the stock market than to an econometrician or an actuary, since "the operators have a GARCH in their heads".

31 The persistence of a GARCH (1,1)
The autoregressive root that governs the persistence of the volatility shocks is the sum a + b. The condition for the model not to have explosive behaviour is that alpha plus beta is less than one; this is the same condition as for an ARMA (1,1), where phi plus theta must be less than one. The sum a + b also allows us to predict the volatility for future periods.

32 The persistence and the evolution of a shock on et in (t + τ) days
When the sum of alpha plus beta decreases, the volatility predicted with the GARCH model tends to the unconditional volatility in a few days. In other words, a shock persists for more days when alpha plus beta is near one (red line). When alpha plus beta is 0.8 (pink line), in 15 days the shock has converged to the unconditional volatility.

33 With a GARCH model, it is assumed that the variance of returns can be a predictable process
If we can forecast the volatility for one day ahead, then using a recursive form we can forecast the volatility for the following periods, t+2, t+3, ..., t+n. In consequence, we can sum the forecasts of the different periods to predict the volatility over the whole horizon of t periods.
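A sketch of the recursive forecast just described: each k-step-ahead variance reverts towards the unconditional variance at rate (a + b), and the variance over the horizon is the sum of the daily forecasts. The parameters are taken from the 95-00 column of slide 30 purely as an illustration, and the initial one-day forecast is hypothetical:

```python
import numpy as np

# GARCH(1,1) estimates from the 95-00 column of slide 30, used only as an illustration
w, a, b = 0.203, 0.152, 0.814
h_next = 6.0                              # hypothetical variance forecast for day t+1
h_bar = w / (1 - a - b)                   # unconditional (long-run) variance

# Recursive k-step-ahead forecasts: E[h(t+k)] = h_bar + (a + b)**(k - 1) * (h(t+1) - h_bar)
horizon = 10
forecasts = [h_bar + (a + b) ** (k - 1) * (h_next - h_bar) for k in range(1, horizon + 1)]

# The variance over the whole horizon is the sum of the daily forecasts
total_vol = np.sqrt(sum(forecasts))
print(forecasts, total_vol)
```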

34 The news impact curve and the asymmetric models
After 1995, the impact of bad news on asset prices introduced the concept of the asymmetric models, due to the effect of the large negative shocks. The aim of these models is to predict the effect of catastrophes or the impact of bad news.

35 The EGARCH (1,1), Nelson (1991). This model differs from the GARCH (1,1) in this aspect: it allows bad news (et < 0, with g < 0) to have a bigger impact than good news on the volatility prediction.
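The EGARCH equation is not reproduced in the transcript; the standard Nelson (1991) EGARCH(1,1), which may differ in details from the exact variant on the slide, is:

ln s²t = w + b·ln s²t-1 + a·( |zt-1| - E|zt-1| ) + g·zt-1, with zt-1 = et-1/st-1

With g < 0, a negative shock raises the predicted variance more than a positive shock of the same size, which is the asymmetry described above.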

36 Glosten, Jagannathan and Runkle
The TARCH (1,1), Glosten, Jagannathan and Runkle, and Zakoian (1990). g is a positive estimator that adds weight when there are negative shocks.
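The TARCH formula is likewise not in the transcript; the standard Glosten-Jagannathan-Runkle / Zakoian specification is:

s²t = w + a·e²t-1 + g·e²t-1·dt-1 + b·s²t-1, where dt-1 = 1 if et-1 < 0 and 0 otherwise

so the extra term g·e²t-1 applies only when the previous shock was negative, giving bad news the additional weight mentioned above.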

37 The following graphic will show the news impact curve
To see the behaviour of the different models in the graphic: the "y" axis represents the value of the volatility predicted for time "t" and the "x" axis represents the value of the error at time "t-1", in the near past. The blue line is the result of a GARCH (1,1); as we can observe, its forecast is the same for negative and positive errors. The green line is the result of the TARCH (1,1) and the red line is the result of the EGARCH (1,1).

38 The presence of asymmetry.
To detect the presence of asymmetry we use the cross-correlation function between the squared residuals of the model and the standardized residuals calculated as et/st.
Number of cross-correlations not null in the first 10 lags, Merval Index:
Periods: 01/90-11/94, 12/94-12/00, from 11/99. Values: 5, 4.

39 What is Value at Risk? VaR measures the worst loss expected over a future horizon with a previously established confidence level. VaR forecasts the amount of predictable losses for the next period with a certain probability.

40 Computing VaR. VaR sums the worst loss of each asset over a horizon within a previously established confidence interval. ".. Now we can know the risk of our portfolio, by asset and by the individual manager .." (the vice president of pension funds of Chrysler).

41 The steps to calculate VaR
(Diagram: market position, volatility measure, days to be forecasted and level of confidence feed the VaR, which produces the report of potential losses.) The process to compute VaR is the following: 1) Estimate the value of the portfolio with its several assets. 2) Forecast the volatility for each asset. 3) Establish the maximum number of days over which to estimate the VaR. 4) Determine the level of confidence or the probability of the worst loss. 5) Make the report of the potential loss for each asset.
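A sketch of a parametric VaR report built from the steps above; the positions, volatility forecasts and confidence level are hypothetical, and in practice the volatilities would come from the GARCH/TARCH/EWMA model chosen for each asset:

```python
from scipy import stats

confidence = 0.99
z = stats.norm.ppf(confidence)            # normal quantile for the chosen confidence level

# Hypothetical market positions and one-day volatility forecasts (decimal returns)
positions = {"Merval": 1_000_000, "Global 2027": 500_000}
daily_vol = {"Merval": 0.025, "Global 2027": 0.012}

# One-day parametric VaR per asset; for a T-day horizon, use the sum of the daily
# variance forecasts (slide 33) instead of scaling the one-day variance by T
var_by_asset = {name: positions[name] * daily_vol[name] * z for name in positions}

for name, loss in var_by_asset.items():
    print(f"{name}: potential one-day loss {loss:,.0f}")
print("Portfolio VaR (simple sum of individual losses):", f"{sum(var_by_asset.values()):,.0f}")
```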

42 The success of VaR is a result of the method used to estimate the risk
The accuracy of the report depends on the type of model used to compute the volatility on which the forecast is based.

43 The EWMA to estimate the volatility
EWMA is used by RiskMetrics(1); this method establishes that the volatility is conditioned by the past realizations. Before 1995 J.P. Morgan presented RiskMetrics: only one value is needed to estimate the volatility of all the assets. The value lambda = 0.94 is taken to estimate the volatility of each asset using the EWMA model. This model gives more weight to the recent volatility and decays exponentially over the past realizations. RiskMetrics predicts the volatility only for one day ("one day forecast") and uses the last 74 realizations. (1) RiskMetrics is a trademark of J.P. Morgan.
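A sketch of the EWMA recursion with lambda = 0.94 as described above; the return series is hypothetical:

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """EWMA recursion: h(t) = lam * h(t-1) + (1 - lam) * r(t-1)**2."""
    returns = np.asarray(returns)
    h = np.empty(len(returns))
    h[0] = returns.var()                  # start the recursion at the sample variance
    for t in range(1, len(returns)):
        h[t] = lam * h[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return h

# Illustrative use on hypothetical daily returns (in percent)
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 1.5, 500)
h = ewma_variance(returns)
one_day_forecast = np.sqrt(0.94 * h[-1] + 0.06 * returns[-1] ** 2)
print("One-day volatility forecast:", one_day_forecast)
```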

44 The EWMA and GARCH. Using lambda = 0.94 in the EWMA model, as established by the J. P. Morgan manuals for all assets of the portfolio, is the same as using a GARCH (1,1) as follows:
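The GARCH(1,1) the slide refers to is not written out in the transcript; the usual way to state the equivalence is that the EWMA corresponds to a GARCH(1,1) with w = 0, a = 1 - lambda and b = lambda:

s²t = 0.06·e²t-1 + 0.94·s²t-1

that is, an integrated GARCH in which a + b = 1, so shocks do not decay towards an unconditional variance.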

45 What happened after 1995? Today, the best model to compute the volatility of an Argentine global bond is a TARCH (1,1).

46 These are the results of using the TARCH (1,1) and the EWMA for the Argentine bonds. The red line is the confidence interval using RiskMetrics, and the black line is the one using the TARCH (1,1). When there is a crisis the EWMA under-predicts the volatility, and when the crisis is over the volatility is predicted in excess. With the TARCH model the prediction follows the behaviour of the market.

47 Conclusions. Using the ACF and PACF on the one hand, and fractal geometry on the other, we arrive at the following expressions: ρs ≠ 0 and sn ≠ st (n/t)^0.5. This allows the use of GARCH models to forecast the volatility.

48 Conclusions. With the right GARCH model we can forecast the volatility for different purposes, in this case for the VaR. The returns show different patterns before and after the Mexican crisis.

49 Conclusions. If the volatility is correctly estimated, the result will be a trustworthy report. Each series has its own personality, and each series has its own model to predict volatility. In other words: when bad news is reported the reserved resources are useful; when good news is present, those resources are not needed.

50 The Future. The use of derivatives to reduce the VaR of a portfolio. GARCH models will be used to calculate the premiums of derivatives. Questions?

