Computational Finance II: Time Series. K. Ensor

What is a time series? Anything observed sequentially (usually by time): returns, volatility, interest rates, exchange rates, bond yields, hourly temperature, the variation in the thickness of a wire as a function of length, and so on.

What is different? The observations are not independent; there is correlation from observation to observation. Consider the log of the Johnson & Johnson (J&J) quarterly earnings-per-share series. Is there correlation in the observations over time?
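
To make the running example concrete, here is a minimal Python sketch that pulls the quarterly Johnson & Johnson earnings series and plots it on the raw and log scales. It assumes statsmodels' Rdatasets interface can fetch R's JohnsonJohnson data over the network and that the earnings sit in the last column of the returned frame; adjust for a local copy of the data if needed.

```python
# A minimal sketch for loading and plotting the quarterly J&J earnings series.
# Assumes the Rdatasets mirror exposes R's "JohnsonJohnson" data; the column
# layout of the returned frame is an assumption.
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm

jj = sm.datasets.get_rdataset("JohnsonJohnson", "datasets").data
earnings = jj.iloc[:, -1].to_numpy()     # quarterly earnings per share (assumed last column)
log_earnings = np.log(earnings)          # the log transform stabilizes the growing variance

fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].plot(earnings)
axes[0].set_title("J&J quarterly earnings per share")
axes[1].plot(log_earnings)
axes[1].set_title("log earnings per share")
plt.show()
```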

What are our objectives? Understanding and modeling; estimating summary measures (e.g., the mean); prediction and forecasting. If correlation is present between the observations, then our typical approaches, which assume iid samples, are not correct.

Autocorrelation? How would you determine or show correlation over time?

Autocorrelation Function. In theory: how is the autocorrelation defined, and how do we estimate this quantity?
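
For reference, the standard definitions for a weakly stationary series X(t) (the formula shown on the original slide is not reproduced in the transcript) are:

```latex
% Autocovariance and autocorrelation of a (weakly) stationary series X_t
\gamma(h) = \mathrm{Cov}\bigl(X_t, X_{t+h}\bigr), \qquad
\rho(h)   = \frac{\gamma(h)}{\gamma(0)} = \mathrm{Corr}\bigl(X_t, X_{t+h}\bigr),
\qquad h = 0, \pm 1, \pm 2, \ldots
```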

Sample ACF and PACF. Sample ACF: the sample estimate of the autocorrelation function. Substitute sample estimates of the covariance between X(t) and X(t+h); note that we do not have n pairs but n-h pairs. Substitute the sample estimate of the variance. Sample PACF: the correlation between observations X(t) and X(t+h) after removing the linear relationship of all observations that fall between X(t) and X(t+h).
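
A minimal sketch of the sample ACF, assuming the log_earnings array from the earlier snippet. Note that the conventional estimator sums the n-h available pairs but still divides by n, not n-h; statsmodels' acf and pacf functions give the same quantities directly.

```python
# Sample ACF by hand, compared with statsmodels.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def sample_acf(x, max_lag):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    gamma0 = np.sum((x - xbar) ** 2) / n                       # sample variance (divide by n)
    rho = []
    for h in range(max_lag + 1):
        gamma_h = np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n   # n-h pairs, divided by n
        rho.append(gamma_h / gamma0)
    return np.array(rho)

print(sample_acf(log_earnings, 8))
print(acf(log_earnings, nlags=8))    # should agree with the hand-rolled version
print(pacf(log_earnings, nlags=8))   # sample PACF
```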

Summary Plots

Detrending by taking the first difference: Y(t) = X(t) - X(t-1). What happens to the trend? Suppose X(t) = a + bt + Z(t), where Z(t) is a random variable.
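
Filling in the algebra the slide asks for: with X(t) = a + bt + Z(t), the linear trend collapses to the constant b and only a differenced noise term remains.

```latex
% First differencing a linear trend: the trend term drops to a constant.
\begin{aligned}
Y(t) &= X(t) - X(t-1) \\
     &= \bigl(a + bt + Z(t)\bigr) - \bigl(a + b(t-1) + Z(t-1)\bigr) \\
     &= b + Z(t) - Z(t-1).
\end{aligned}
```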

Detrended J&J series: Autocorrelation?

Summary plots of the detrended J&J log earnings per share.

Removing the seasonal trend: one way to proceed. Suppose Y(t) = g(t) + W(t), where g(t) = g(t-s) for all t, s is our "season", and W(t) is again a new random variable. Form a new series U(t) by taking the lag-s difference: U(t) = Y(t) - Y(t-s) = g(t) - g(t-s) + W(t) - W(t-s) = W(t) - W(t-s), again a random variable.
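
In code, both the regular and the seasonal differences are one-liners. This sketch assumes the log_earnings array from the earlier snippet and the quarterly season s = 4.

```python
# Detrend and deseasonalize the log earnings series.
import pandas as pd

x = pd.Series(log_earnings)
y = x.diff(1)       # Y(t) = X(t) - X(t-1): removes the linear trend
u = y.diff(4)       # U(t) = Y(t) - Y(t-4): removes the quarterly seasonal component
u = u.dropna()      # the first 1 + 4 = 5 values are undefined
print(u.head())
```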

Summary of Transformed J&J Series

Summary of transformations: X(t) = log(Q(t)); Y(t) = X(t) - X(t-1) = (1-B)X(t); U(t) = (1-B^4)Y(t); hence U(t) = (1-B^4)(1-B)X(t), where B denotes the backshift operator.
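
Expanding the backshift polynomial makes the combined transformation explicit (using B^k X(t) = X(t-k)):

```latex
% Expanding (1-B^4)(1-B) applied to X(t).
U(t) = (1 - B^{4})(1 - B)\,X(t)
     = (1 - B - B^{4} + B^{5})\,X(t)
     = X(t) - X(t-1) - X(t-4) + X(t-5).
```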

What is the next step? U(t) is a time series process called a moving average of order 1 (or possibly an MA(1) plus a seasonal MA(1)): U(t) = θ ε(t-1) + ε(t). Proceed to estimate θ, and then we can estimate summary information about the earnings per share as well as predict.
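
A hedged sketch of the estimation step, assuming statsmodels and the log_earnings array from above. Fitting an ARIMA(0,1,1)x(0,1,1) with period 4 to the log series corresponds to an MA(1) plus seasonal MA(1) for U(t); this particular specification is an assumption, not necessarily the exact model used in the original lecture.

```python
# Fit a seasonal ARIMA to the log earnings: the regular and seasonal
# differences from the transformations above, plus a regular MA(1) and a
# seasonal MA(1) term.
from statsmodels.tsa.statespace.sarimax import SARIMAX

model = SARIMAX(log_earnings,
                order=(0, 1, 1),              # (p, d, q): first difference + MA(1)
                seasonal_order=(0, 1, 1, 4))  # (P, D, Q, s): seasonal difference + seasonal MA(1)
fit = model.fit(disp=False)
print(fit.summary())              # estimated MA coefficients (the thetas)
print(fit.forecast(steps=8))      # forecast two years ahead, on the log scale
```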

Forecast of J&J series

Why does the autocorrelation matter when making inferences? Consider estimation of the mean of a stationary series, E[X(t)] = μ for all t. If X(1), …, X(n) are iid, what is the sampling distribution of the estimator of μ, namely the sample mean?
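
For the iid case the answer is the familiar one:

```latex
% Sampling distribution of the sample mean for iid X(1), ..., X(n)
% with mean \mu and variance \sigma^2.
\bar{X} = \frac{1}{n}\sum_{t=1}^{n} X(t), \qquad
E[\bar{X}] = \mu, \qquad
\mathrm{Var}(\bar{X}) = \frac{\sigma^{2}}{n}, \qquad
\bar{X} \approx N\!\left(\mu, \tfrac{\sigma^{2}}{n}\right) \text{ for large } n.
```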

Why does the autocorrelation matter? What if X(t) has the following structure (an autoregressive model of order 1, AR(1)): X(t) - μ = φ(X(t-1) - μ) + ε(t). Then Corr(X(t), X(t+h)) = φ^|h| for all h, and the variance of the sample mean is approximately [(1+φ)/(1-φ)] Var(X(t))/n.
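
The variance approximation follows from summing the autocovariances; the standard argument, filled in here:

```latex
% Variance of the sample mean under AR(1) dependence.
\mathrm{Var}(\bar{X})
  = \frac{1}{n^{2}} \sum_{s=1}^{n}\sum_{t=1}^{n} \mathrm{Cov}\bigl(X(s), X(t)\bigr)
  = \frac{\mathrm{Var}(X(t))}{n}
    \left[ 1 + 2\sum_{h=1}^{n-1}\Bigl(1 - \tfrac{h}{n}\Bigr)\phi^{h} \right]
  \;\approx\; \frac{\mathrm{Var}(X(t))}{n}\,\frac{1 + \phi}{1 - \phi}
  \quad \text{for large } n.
```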

Comparing the sample sizes. Let m denote the number of iid observations and n the number of correlated observations. Setting the two variances equal and solving for m as a function of n yields m = n(1-φ)/(1+φ). With n = 100 and φ = 0.9, the equivalent number of iid observations is m ≈ 5. For positive and negative φ = 0.5 (correlation at lag 1), the equivalent sample sizes are 33 and 300.
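
A quick numerical check of these figures (the value φ = 0.9 for the m ≈ 5 case is inferred from the formula, since the transcript drops the symbol):

```python
# Equivalent iid sample size for an AR(1) series: m = n * (1 - phi) / (1 + phi).
def equivalent_iid_n(n, phi):
    return n * (1 - phi) / (1 + phi)

for phi in (0.9, 0.5, -0.5):
    print(f"n = 100, phi = {phi:+.1f}  ->  m = {equivalent_iid_n(100, phi):.0f}")
# n = 100, phi = +0.9  ->  m = 5
# n = 100, phi = +0.5  ->  m = 33
# n = 100, phi = -0.5  ->  m = 300
```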

Why does the autocorrelation make such a big difference in our ability to estimate the mean? Positive correlation means each new observation carries less new information than an independent observation would. The same argument applies to other mean functions of the process, or to other functionals of the process that we want to estimate.

Summary. A time series is correlated data observed sequentially. The autocorrelation measures this correlation as a function of the time lag. This dependence structure, along with proper assumptions, allows us to forecast the future of the process. Correct inference requires incorporating knowledge of the dependence structure.