Presentation on theme: "Vector Autoregressions and Impulse Response Functions"— Presentation transcript:
1 Vector Autoregressions and Impulse Response Functions: The VAR
2 Introduction
Assess the selection of the optimal lag length in a VAR.
Evaluate the use of impulse response functions with a VAR.
Assess the importance of variations on the standard VAR.
Critically appraise the use of VARs with financial models.
Assess the uses of VECMs.
3 Lag Length in the VAR
When estimating VARs or conducting 'Granger causality' tests, the results can be sensitive to the lag length of the VAR.
Sometimes the lag length is matched to the frequency of the data, such that quarterly data has 4 lags, monthly data has 12 lags, etc.
A more rigorous way to determine the optimal lag length is to use the Akaike or Schwarz-Bayesian information criteria.
However, these estimations tend to be sensitive to the presence of autocorrelation. In this case, if there is any evidence of autocorrelation after using the information criteria, further lags are added, above the number indicated by the criteria, until the autocorrelation is removed.
4 Information Criteria
The main information criteria are the Schwarz-Bayesian criterion and the Akaike criterion.
They operate on the basis that there are two competing factors when adding more lags to a model: more lags will reduce the RSS, but also mean a loss of degrees of freedom (the penalty for adding more lags).
The aim is to minimise the information criterion; adding an extra lag only benefits the model if the reduction in the RSS outweighs the loss of degrees of freedom.
In general, the Schwarz-Bayesian criterion (SBIC) has a harsher penalty term than the Akaike criterion (AIC), which leads it to indicate that a more parsimonious model is best.
6 Multivariate Information Criteria
The multivariate version of the Akaike information criterion is similar to the univariate case:

MAIC = ln|Σ̂| + 2k′/T

where Σ̂ is the variance-covariance matrix of the residuals, T is the number of observations and k′ is the total number of regressors in all equations.
7 Multivariate SBIC
The multivariate version of the SBIC is:

MSBIC = ln|Σ̂| + (k′/T) ln T

with Σ̂, T and k′ defined as for the multivariate AIC.
8 The Best Criterion
In general there is no agreement on which criterion is best (Diebold, for instance, recommends the SBIC).
The Schwarz-Bayesian criterion is strongly consistent but not efficient.
The Akaike criterion is not consistent, generally producing too large a model, but is more efficient than the Schwarz-Bayesian criterion.
9 VAR Models
If we assume a 2-variable model with a single lag, we can write this VAR model as:

y1,t = β10 + β11 y1,t-1 + α11 y2,t-1 + u1,t
y2,t = β20 + β21 y2,t-1 + α21 y1,t-1 + u2,t

Each variable depends on one lag of itself and one lag of the other variable, plus an error term.
10 Criticisms of Causality Tests
Granger causality tests are much used in VAR modelling; however, they do not explain some aspects of the VAR:
They do not give the sign of the effect: we do not know whether it is positive or negative.
They do not show how long the effect lasts.
They do not provide evidence of whether the effect is direct or indirect.
11 Impulse Response Functions
These trace out the effect on the dependent variables in the VAR of shocks to all the variables in the VAR.
Therefore, in a system of 2 variables there are 4 impulse response functions, and with 3 variables there are 9.
The shock occurs through the error term and affects the dependent variable over time.
In effect, the VAR is expressed as a vector moving average (VMA) model; as in the univariate case, the shocks to the error terms can then be traced with regard to their impact on the dependent variables.
If the time path of the impulse response function falls to 0 over time, the system of equations is stable; however, it can explode if the system is unstable.
14 VARs and SUR
In general, a VAR has the same lag length in all the individual equations.
It is possible, however, to have different lag lengths for different equations, although this requires another estimation method.
When lag lengths differ, the seemingly unrelated regression (SUR) approach can be used to estimate the equations; this is often termed a 'near-VAR'.
15 Alternative VARs
It is possible to include contemporaneous terms in a VAR; however, in this case the VAR is not identified.
It is also possible to include exogenous variables in the VAR, although they do not have separate equations in which they act as a dependent variable. They simply act as extra explanatory variables in all the equations of the VAR.
It is worth noting that confidence intervals can also be produced for the impulse response functions to determine whether they are significant; this is routinely done by most computer programmes.
16 VECMs
Vector Error Correction Models (VECMs) are the basic VAR with an error correction term incorporated into the model.
The reason for the error correction term is the same as in the standard error correction model: it measures any movement away from the long-run equilibrium.
VECMs are often used as part of a multivariate test for cointegration, such as the Johansen ML test.
17 VECMs
However, there are a number of differing approaches to modelling VECMs, for instance how many lags there should be on the error correction term; usually just one is used, regardless of the order of the VAR.
The error correction term also becomes more difficult to interpret, as it is not obvious which variable it affects following a shock.
18 VECM
The most basic VECM is the following first-order VECM:

Δyt = Γ1 Δyt-1 + Π yt-1 + ut

where yt is the vector of variables, Γ1 contains the short-run coefficients and Π = αβ′ contains the long-run (error correction) information, with β the cointegrating vectors and α the speed-of-adjustment coefficients.
19 Criticisms of the VAR
Many argue that the VAR approach is lacking in theory.
There is much debate on how the lag lengths should be determined.
It is possible to end up with a model including numerous explanatory variables with different signs, which has implications for degrees of freedom.
Many of the parameters will be insignificant, which affects the efficiency of the regression.
There is always a potential for multicollinearity with many lags of the same variable.
20 Stationarity and VARs
Should a VAR include only stationary variables to be valid?
Sims argues that even if the variables are not stationary, they should not be first-differenced.
However, others argue that a better approach is a multivariate test for cointegration, followed by a model using first-differenced variables and an error correction term.
21 VAR Example
The basic theory behind the following model is simply that we believe there is a relationship between short-term (TBILL) and long-term (R10) interest rates, and that the VAR approach is the best model in this case.
Following the Akaike and Schwarz-Bayesian criteria, we find that a VAR of order 1 is most appropriate. This produces the following results (edited):
22 VAR Results
OLS estimation of a single equation in the unrestricted VAR

Dependent variable is TBILL
127 observations used for estimation from 1960Q2 to 1991Q4
Regressor    Coefficient    Standard Error    T-Ratio [Prob]
TBILL(-1)    [.000]
R10(-1)      [.823]
K            [.120]
R-Squared    R-Bar-Squared
Akaike Info. Criterion    Schwarz Bayesian Criterion
Serial Correlation CHSQ(4) = [.000]

Dependent variable is R10
TBILL(-1)    [.006]
R10(-1)      [.000]
K            [.052]
R-Squared    R-Bar-Squared
Akaike Info. Criterion    Schwarz Bayesian Criterion
Serial Correlation CHSQ(4) = [.071]
23 Granger Causality Test
Dependent variable is R10
List of variables deleted from the regression: TBILL(-1)
127 observations used for estimation from 1960Q2 to 1991Q4
Regressor    Coefficient    Standard Error    T-Ratio [Prob]
R10(-1)      [.000]
K            [.146]
Joint test of zero restrictions on the coefficients of deleted variables:
F Statistic F(1, 124) = [.006]

Dependent variable is TBILL
List of variables deleted from the regression: R10(-1)
TBILL(-1)    [.000]
K            [.088]
Joint test of zero restrictions on the coefficients of deleted variables:
F Statistic F(1, 124) = [.823]
25 Summary of Results
When TBILL is the dependent variable, only TBILL(-1) is significant, but the model has serious autocorrelation.
When R10 is the dependent variable, both variables are significant.
TBILL 'Granger causes' R10, but not vice versa.
The impulse response functions show that most of the effect from a unit shock comes through the lagged dependent variable, but the shock falls away to zero fairly quickly.
26 Granger Causality Tests Continued
According to Granger, causality can be further sub-divided into long-run and short-run causality.
This requires the use of error correction models or VECMs, depending on the approach taken to determining causality.
Long-run causality is determined by the error correction term: if it is significant, this indicates evidence of long-run causality from the explanatory variable to the dependent variable.
Short-run causality is determined as before, with a test of the joint significance of the lagged explanatory variables, using an F-test or Wald test.
27 Long-run Causality
Before the ECM can be formed, there first has to be evidence of cointegration. Given that cointegration implies a significant error correction term, cointegration can be viewed as an indirect test of long-run causality.
It is possible to have evidence of long-run causality but not short-run causality, and vice versa.
In multivariate causality tests, testing for long-run causality between two variables is more problematic, as it is impossible to tell which explanatory variable is driving the causality through the error correction term.
28 Causality Example
The following basic ECM was produced, following evidence of cointegration between s and y. It takes the general form (estimated coefficient values omitted):

Δyt = β0 + β1 Δst-1 + π(êt-1) + ut

where êt-1, the lagged residual from the cointegrating regression of y on s, is the error correction term.
29 Causality Example
In the previous example, there is long-run causality from s to y, as the error correction term is significant (t-ratio of 7).
There is no evidence of short-run causality, as the lagged differenced explanatory variable is insignificant (t-ratio of 1; an F-test would also indicate insignificance).
30 Conclusion
VARs have a number of important uses, particularly causality tests and forecasting.
To assess the effects of any shock to the system, we need to use impulse response functions and variance decompositions.
VECMs are an alternative, as they allow first-differenced variables and an error correction term.
The VAR has a number of weaknesses, most importantly its lack of theoretical foundations.