Simultaneous Forecasting of Non-stationary Conditional Mean & Variance. Speaker: Andrey Torzhkov. September 25th, 2006.


1 Simultaneous Forecasting of Non-stationary Conditional Mean & Variance. Speaker: Andrey Torzhkov. September 25th, 2006

2 Agenda: Introduction; Methodology; Model (Markovian part, Bayesian part, Autoregressive part); Estimation; Conclusions

3 Introduction. Why forecast conditional moments? Applications include basestock inventory policies, volatility Gamma-trading, empirical regression rules, portfolio management, etc.

4 Introduction. How to recover conditional moments? Proxies: moving average and moving variance. Analytical models: ARIMA, GARCH, etc. Proxies often exhibit non-stationarity, while most analytical models require stationarity.

5 Introduction. Why is non-stationarity important? Market demands follow consumer trends and react to management actions; asset prices follow economic trends and react to market innovations. Non-stationarity arises from the interplay between controllable and stochastic behavior.

6 Typical features: underlying controllable factor(s); regime switches -> shifting mean; anticipatory behavior -> volatility spikes; heterogeneous beliefs -> long-term dynamics; co-integrated movement of mean and volatility.


8 Methodology. BMA: Bayesian-Markov-Autoregression model. Stochastic trend part (long-to-medium term): market learning -> Bayesian; market regimes -> Markov; market beliefs -> non-homogeneity. Autoregressive part (short term): volatility clusters -> GARCH; mean momentum -> ARMA. Overlay: unit A-model scaled by BM-trends.

9 Unit Scaling. The original moments pair is decomposed into a trend processes pair and a unit processes pair (the slide's defining equations were not captured in the transcript).
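The slide's equations were lost in transcription, but the multiplicative overlay it describes can be sketched as follows. All symbol names here are assumptions standing in for the deck's lost notation.

```python
import numpy as np

# Hypothetical illustration of the unit-scaling overlay: the original
# conditional moments pair (mean, volatility) is factored into a slowly
# varying trend pair and a trend-free "unit" pair.
rng = np.random.default_rng(0)

trend_mean = np.linspace(10.0, 20.0, 100)   # trend of the mean
trend_vol = np.linspace(1.0, 2.0, 100)      # trend of the volatility

unit_mean = 1.0 + 0.1 * rng.standard_normal(100)           # stationary unit mean
unit_vol = np.abs(1.0 + 0.1 * rng.standard_normal(100))    # stationary unit vol

# Overlay: original moments = trend pair scaled by the unit pair.
mean_t = trend_mean * unit_mean
vol_t = trend_vol * unit_vol

# The unit processes are recovered by dividing the trends back out.
assert np.allclose(mean_t / trend_mean, unit_mean)
```

The A-model described later is fitted to the unit pair, while the BM machinery drives the trend pair.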

10 Markovian part. The model assumption is that there exists a fundamental indicator which is Normally distributed and Markov-modulated. The actual observations are independently drawn from the Gaussian conditional forecast distribution of the one-day-ahead value of this indicator.

11 Semi-Markov chain. 3 states (e.g. {Ease, Stay, Tight}); each state is a couple (Mean, Volatility), which determines the underlying factor values. The state-jump p.d.f. is represented via the probability of a jump occurring the next day.

12 Semi-Markov chain. State transitions are represented by the probability of switching into one of the two alternative states in the embedded Markov chain (EMC). The whole chain dynamics is driven by a pair of implied probabilities corresponding to the current state c only. The trend pair is linked to the factor by the slide's formula (not captured in the transcript).
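The two slides above can be sketched as a simulation. This is a simplification under stated assumptions: geometric holding times stand in for the deck's general jump p.d.f., and the state couples and probabilities are illustrative, not the deck's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# States as (Mean, Volatility) couples -- {Ease, Stay, Tight} in the slide.
states = {"Ease": (-1.0, 0.5), "Stay": (0.0, 0.3), "Tight": (1.0, 0.6)}
names = list(states)

p_jump = 0.05     # implied probability of a jump the next day (assumed)
q_switch = 0.5    # implied probability of the first alternative state in the EMC

current = "Stay"
path = []
for _ in range(1000):
    if rng.random() < p_jump:
        # On a jump, switch to one of the two alternative states.
        alts = [s for s in names if s != current]
        current = alts[0] if rng.random() < q_switch else alts[1]
    mu, sigma = states[current]
    # Observation drawn from the Markov-modulated Normal factor.
    path.append(rng.normal(mu, sigma))
```

Only the pair (p_jump, q_switch) attached to the current state drives the dynamics, matching the slide's claim that a single pair of implied probabilities suffices.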

13 Markov equations. Trend dynamics is driven by the non-homogeneity of the implied probabilities (market beliefs). Hence, the model can be fit to match long-run trends by solving the Markov equations given the state means and volatilities.

14 Bayesian part. The model assumption is that state parameters are unknown and change with each jump. These values are learned continuously through time by observing the underlying factor values (via a Bayesian updating rule).

15 Bayesian rules: learning of the unknown state mean-and-volatility pair, and learning of the unknown current state (the updating formulas were not captured in the transcript).
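The deck's exact updating rules were lost in transcription; a standard conjugate choice for learning an unknown Normal (mean, volatility) pair is the Normal-inverse-gamma sequential update, sketched here with assumed prior hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(2)
# Observations drawn from a hidden state with mean 5 and volatility 2.
data = rng.normal(5.0, 2.0, size=500)

# Normal-inverse-gamma prior hyperparameters (illustrative assumptions).
mu, kappa, alpha, beta = 0.0, 1.0, 1.0, 1.0
for x in data:
    # Sequential conjugate update for one observation.
    beta += 0.5 * kappa * (x - mu) ** 2 / (kappa + 1.0)
    mu = (kappa * mu + x) / (kappa + 1.0)
    kappa += 1.0
    alpha += 0.5

post_mean = mu                    # posterior estimate of the state mean
post_var = beta / (alpha - 1.0)   # posterior mean of the state variance
```

With enough data the posterior concentrates near the true (5, 4) pair, which is the fast convergence to a degenerate distribution noted on the next slide.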

16 Bayesian learning. The state learning rule exhibits fast convergence to a degenerate distribution. Updating is performed on a rolling horizon (the last n observations). Hence, the Bayesian learning here is simply a principled way of implementing the moving-average technique.

17 Overlay of Markov and Bayes. The Markov equations are rewritten in a combined form (the formula was not captured in the transcript).

18 Autoregressive part. The A-part applies to the unit processes pair. It is a combination of regular ARMA- and GARCH-type stationary unit models. Particular example: the AR(1)-GARCH(1,1)-M model.
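The named AR(1)-GARCH(1,1)-M example can be simulated in a few lines; the parameter values below are illustrative, not the deck's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
phi, lam = 0.3, 0.1            # AR(1) and GARCH-in-mean coefficients (assumed)
omega, a, b = 0.05, 0.1, 0.85  # GARCH(1,1) parameters; a + b < 1 for stationarity

y = np.zeros(T)
eps = np.zeros(T)
h = np.full(T, omega / (1.0 - a - b))  # conditional variance, at long-run level
for t in range(1, T):
    h[t] = omega + a * eps[t - 1] ** 2 + b * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    # "-M" term: the conditional variance enters the mean equation.
    y[t] = phi * y[t - 1] + lam * h[t] + eps[t]
```

Clustered volatility comes from the GARCH recursion for h, mean momentum from the AR term, and the in-mean term ties the two together, matching the volatility-clusters / mean-momentum split on the Methodology slide.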

19 Estimation: historical trends. Rolling T-test detects mean shifts: compute the maximum and minimum significance windows at each point in time; identify "lines" in test-window observations; detect the dates of "line" terminations. Trend estimation: robust regression-based splines.
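A minimal version of the rolling-test idea can be sketched with a two-sample t-test over adjacent windows; the window length, threshold, and shift size are assumptions, and the deck's significance-window and "line" machinery is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic series with one mean shift at t = 200.
series = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])

w = 50                     # test window length on each side (assumed)
shift_dates = []
for t in range(w, len(series) - w):
    left, right = series[t - w:t], series[t:t + w]
    # Welch two-sample t-test between the two adjacent windows.
    _, pval = stats.ttest_ind(left, right, equal_var=False)
    if pval < 1e-6:
        shift_dates.append(t)
```

The flagged dates cluster around the true break, giving the regime-switch dates that the Markov-chain "map" estimation step takes as input.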


23 Estimation: Markov chain "map". Input: pair of historical trends; regime-switch dates. State-parameter estimation methods: GMM (higher-moments matching); ML (implied-probabilities matching); other (tail-index matching of the aggregated model).


25 Estimation: implied probabilities. Monte-Carlo simulation + Linear Programming approach.
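The deck's LP formulation was not transcribed; as a toy stand-in, one can recover implied state probabilities that reproduce an observed (e.g. Monte-Carlo averaged) trend level under a simplex constraint. Everything here is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import linprog

state_means = np.array([-1.0, 0.0, 1.0])  # per-state means (assumed)
observed_trend = 0.4                      # target level, e.g. a Monte-Carlo average

# Variables: p1, p2, p3 and a slack u with |state_means @ p - trend| <= u.
# Minimize u subject to sum(p) = 1, p >= 0.
c = np.array([0.0, 0.0, 0.0, 1.0])
A_ub = np.array([np.append(state_means, -1.0),
                 np.append(-state_means, -1.0)])
b_ub = np.array([observed_trend, -observed_trend])
A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
probs = res.x[:3]   # implied probabilities matching the observed trend
```

In the deck's setting the matching targets would come from simulated chain paths rather than a single scalar, but the simplex-constrained LP structure is the same.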


27 Estimation: A-model parameters. Iterative regression-based algorithm converging to the Kalman-filtered solution: GLS estimates of one parameter set and OLS estimates of the other (the symbols were not captured in the transcript); the two sets of estimates are bound by the "same-noise" relation.
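An iterated feasible-GLS loop in the spirit of this slide can be sketched as follows: alternate a weighted least-squares step for the mean parameters with an OLS step for a log-variance model, reweighting until the estimates settle. The model, names, and parameter values are illustrative assumptions, not the deck's specification.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.standard_normal(n)
sigma = np.exp(0.5 * x)                       # heteroscedastic noise scale
y = 1.0 + 2.0 * x + sigma * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
w = np.ones(n)                                # initial weights (plain OLS)
beta = np.zeros(2)
for _ in range(10):
    # GLS side: weighted least squares for the mean-equation parameters.
    beta_new = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
    done = np.max(np.abs(beta_new - beta)) < 1e-8
    beta = beta_new
    if done:
        break
    resid = y - X @ beta
    # OLS side: regress log squared residuals on X as a variance model.
    g = np.linalg.lstsq(X, np.log(resid ** 2 + 1e-12), rcond=None)[0]
    w = np.exp(-0.5 * (X @ g))                # 1 / sigma_hat weights
```

The residuals of the mean equation feed the variance regression and the fitted variances reweight the mean equation, which is one concrete reading of the "same-noise" coupling between the two estimate sets.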


30 Forecasting: scenario-based. Scenario components: key future dates -> time-knots array; jump simulation -> real jump and switch probabilities; trend continuation -> implied jump and switch probabilities; post-jump state parameters -> (Mean, Volatility) state pairs; post-jump trend renewal -> renewed implied probabilities.

31 Conclusions. The model is: flexible and broad (potentially, any time series that exhibits regime switches and anticipatory behavior can be fitted); novel (the combination of Bayesian and Markov approaches applied to forecasting stochastic non-stationary trends has not yet been described in the econometric literature). Future research: a multivariate version of the model.
