
1 Statistical Properties of Returns Predictability of Returns

2 Asset Return Predictability Previously: Introduced some of the issues and the datasets available locally. Now: Continued discussion of tests of predictability (CLM Ch. 2). –Forms of the random walk hypothesis and martingales. –Tests of the random walk hypothesis: CJ test, runs test, technical trading LMW (2000).

3 Asset Return Predictability Why we care and what we’ll do: –Try to forecast future returns using only past returns. –Weak form efficiency says that one shouldn’t be able to make abnormal profits (over the appropriate risk-adjusted returns) using past return information alone. –Want to test different versions of the random walk hypothesis.

4 The Random Walk Hypothesis - Taxonomy Consider returns r_t and r_{t+k}. Consider functions f(r_t) and g(r_{t+k}). CLM considers when Cov[f(r_t), g(r_{t+k})] = 0 for all t and k ≠ 0. Almost all forms of the RW hypothesis are captured by this equation, which can be thought of as an “orthogonality condition,” using various restrictions on f and g.

5 Independent vs. Uncorrelated - Aside Two random variables, X and Y, are independent if for any real numbers x and y: Pr(X ≤ x and Y ≤ y) = Pr(X ≤ x) Pr(Y ≤ y). Equivalently, X and Y are independent if Cov(f(X), g(Y)) = 0 for all pairs of functions f and g, or if F(X,Y) = F_X(X) · F_Y(Y) (the same factorization works with pdfs). Lack of correlation is simply Cov(X, Y) = 0. Clearly, independence is much stronger.
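A quick numerical illustration (our own sketch, not from the slides): take X standard normal and Y = X², so Y is a deterministic function of X and yet Cov(X, Y) = E[X³] = 0 — uncorrelated but nowhere near independent.

```python
# Uncorrelated but dependent: Y = X^2 with X symmetric about zero.
# Cov(X, Y) = E[X^3] = 0 in population; the sample covariance should be near 0.
import random

rng = random.Random(42)
x = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
y = [v * v for v in x]  # Y is completely determined by X

mx = sum(x) / len(x)
my = sum(y) / len(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
print(abs(cov) < 0.1)  # True: near zero despite total dependence
```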

6 Random Walk Taxonomy Every version restricts Cov[f(r_t), g(r_{t+k})] = 0 for k ≠ 0: –f and g both arbitrary: independent increments (RW1, RW2), pdf(r_{t+k} | r_t) = pdf(r_{t+k}). –f arbitrary, g linear: martingale / fair game, E[r_{t+k} | r_t] = μ. –f and g both linear: uncorrelated increments (RW3), Proj[r_{t+k} | r_t] = μ.

7 Martingales A stochastic process {P_t} (e.g. a price) which satisfies: E[P_{t+1} − P_t | P_t, P_{t-1}, …] = 0. This is also called a fair game. If P_t is an asset price, this says that conditional on past prices, the best guess at tomorrow’s stock price is today’s. Price changes cannot be forecast using past prices. Non-overlapping price changes are uncorrelated at all leads and lags if the price process follows a martingale. (As a homework exercise, show this follows from the definition.)

8 Martingales It was once thought that prices following a martingale was a necessary condition for an efficient capital market. If there are to be no profits available from trading on past price information, the expectation of future price changes, conditional on the price history, cannot be positive or negative, so it must be zero. –The more efficient the market, the more random are prices. This ignores the risk/return tradeoff: compensation for risk requires drift. Cox and Ross and Harrison and Kreps show that properly risk-adjusted returns (log price changes) do follow a martingale.

9 RW1: IID Increments The governing equation for the most restrictive of the random walk processes is: P_t = μ + P_{t-1} + ε_t, where the ε_t are i.i.d. with mean zero and variance σ². After t periods: E[P_t | P_0] = P_0 + μt and Var[P_t | P_0] = σ²t. These results also hold for RW2 and RW3.
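The two moment results can be checked by simulation; a minimal sketch (the parameter values are illustrative, chosen to match the US calibration used later in the slides):

```python
# Simulate RW1: P_t = mu + P_{t-1} + eps_t with eps_t i.i.d. N(0, sigma^2),
# then check that across many paths E[P_t|P_0] ~ P_0 + mu*t and
# Var[P_t|P_0] ~ sigma^2 * t.
import random
import statistics

def simulate_rw1(p0, mu, sigma, t, n_paths, seed=0):
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        p = p0
        for _ in range(t):
            p += mu + rng.gauss(0.0, sigma)  # i.i.d. normal increment
        finals.append(p)
    return finals

paths = simulate_rw1(p0=100.0, mu=0.08, sigma=0.21, t=50, n_paths=20000)
print(statistics.mean(paths))      # should be near P_0 + mu*t = 104.0
print(statistics.variance(paths))  # should be near sigma^2 * t = 2.205
```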

10 Distributional Assumptions A common assumption is to suppose the ε_t are i.i.d. N(0, σ²). This makes prices behave as an arithmetic Brownian motion, sampled at evenly spaced intervals. This makes life very easy because we can work with the normal distribution, but it violates limited liability. Suppose log prices follow this process: p_t = μ + p_{t-1} + ε_t, so that continuously compounded returns are i.i.d. normal. In this case prices follow a geometric Brownian motion, an assumption often used in continuous-time asset pricing models.

11 RW2 – Independent Increments To think that price changes are identically distributed over long periods is unpalatable. RW2 retains the independence of the increments but allows them to be drawn from different distributions. This means we can allow for unconditional heteroskedasticity in the ε_t’s, something that fits the data. Any arbitrary transformation of past prices is useless in predicting (any arbitrary transformation of) future price changes.

12 RW3 – Uncorrelated Increments The weakest of the RW hypotheses. You can’t forecast future price increments, but higher moments (e.g., variance) may be forecastable. That is, there may be conditional heteroskedasticity in the innovation process over time. E.g.: Cov(ε_t, ε_{t-k}) = 0 but Cov(ε_t², ε_{t-k}²) ≠ 0.

13 Tests of RW1 Sequences and Reversals –Start with prices following a geometric Brownian motion without drift, so log prices satisfy p_t = p_{t-1} + ε_t, where the ε_t ~ i.i.d. N(0, σ²). –Let I_t equal one if p_t – p_{t-1} is positive and zero otherwise. –Cowles and Jones (1937) compare the frequency of sequences of two returns with the same sign to the frequency of successive returns with a reversal of signs.

14 Sequences and Reversals Let there be n+1 returns (t = 0, 1, 2, …, n), let N_s be the number of sequences and N_r = n – N_s the number of reversals. If log prices follow a driftless random walk, and the distribution of the ε’s is symmetric, then positive and negative increments are equally likely, and the CJ ratio, ĈJ ≡ N_s/N_r, should be approximately one. This ratio may be seen as a consistent estimator of the ratio of the probability of a sequence to the probability of a reversal: π_s/π_r = π_s/(1 − π_s).
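Counting sequences and reversals is mechanical; a small sketch (the function name is ours):

```python
# Cowles-Jones statistic: form the sign indicators I_t, count consecutive
# pairs with the same sign (sequences) vs. opposite signs (reversals),
# and return CJ = N_s / N_r.
def cowles_jones(returns):
    ind = [1 if r > 0 else 0 for r in returns]    # I_t
    pairs = list(zip(ind, ind[1:]))               # n pairs from n+1 returns
    n_s = sum(1 for a, b in pairs if a == b)      # sequences
    n_r = len(pairs) - n_s                        # reversals
    return n_s / n_r

# A strictly alternating series is all reversals; a streaky one is mostly sequences.
print(cowles_jones([1, -1, 1, -1, 1, -1]))  # 0.0 (no sequences)
print(cowles_jones([1, 1, 1, -1, -1]))      # 3.0 (three sequences, one reversal)
```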

15 Sequences and Reversals Consistency here means convergence in probability: under the null, a sequence and a reversal are equally likely, so the ratio should converge to one. CJ (1937) found a value of 1.17 for the ratio using an index of railroad stocks and concluded that stock returns are predictable.

16 Issue: Drift If prices follow a geometric Brownian motion with drift, log prices satisfy p_t = μ + p_{t-1} + ε_t, where ε_t ~ i.i.d. N(0, σ²). Now the indicator variable I_t is biased in the direction of the drift: I_t = 1 with probability π, where π ≡ Pr(r_t > 0) = Φ(μ/σ). With a positive drift π > ½, and with a negative drift π < ½.

17 Drift In this case the ratio of sequence to reversal probabilities is π_s/(1 − π_s) = (π² + (1 − π)²)/(2π(1 − π)), which is strictly greater than one for any π ≠ ½.

18 The Effect Of Drift We can calibrate to US annual data. Let μ = .08 and σ = .21; then π = Φ(.08/.21) ≈ 0.65 and π_s = π² + (1 − π)² ≈ 0.544. The CJ statistic becomes π_s/(1 − π_s) ≈ 1.19, very close to the 1.17 that CJ found.
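This calibration can be reproduced in a few lines (our own sketch, computing the normal CDF via erf; no claim that CLM computed it this way):

```python
# Calibrated effect of drift on the CJ ratio: pi = Phi(mu/sigma),
# pi_s = pi^2 + (1-pi)^2, and the implied ratio pi_s/(1 - pi_s).
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, sigma = 0.08, 0.21
pi = phi(mu / sigma)                 # probability of an "up" year
pi_s = pi**2 + (1 - pi)**2           # probability of a sequence
print(round(pi, 3))                  # about 0.648
print(round(pi_s / (1 - pi_s), 2))   # about 1.19, near CJ's 1.17
```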

19 The Effect Of Drift Statistical Significance? –Is 1.19 statistically significantly different from 1.17? –Is 1.17 statistically significantly different from 1.0? –Answering this requires a measure of standard errors, and so the sampling theory for the CJ statistic.

20 The Effect Of Drift Sampling Theory –Start with the fact that N_s is a binomial random variable; that is, it is the sum of n Bernoulli random variables Y_t, where Y_t = 1 with probability π_s = π² + (1 − π)² and zero otherwise. –Using the normal approximation to the binomial, the distribution of N_s for large n has: Mean nπ_s, Variance n[π_s(1 − π_s) + 2(π³ + (1 − π)³ − π_s²)]. Its variance is not nπ_s(1 − π_s) because the drift makes adjacent Y’s dependent (CLM pg. 37).
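These moments are easy to sketch as code (the formulas are taken directly from the slide; the function name is ours):

```python
# Large-sample moments of N_s under drift, per the slide:
#   E[N_s]   = n * pi_s
#   Var[N_s] = n * (pi_s*(1-pi_s) + 2*(pi^3 + (1-pi)^3 - pi_s^2))
def ns_moments(n, pi):
    pi_s = pi**2 + (1 - pi)**2
    mean = n * pi_s
    var = n * (pi_s * (1 - pi_s) + 2 * (pi**3 + (1 - pi)**3 - pi_s**2))
    return mean, var

# With no drift (pi = 1/2) the dependence correction vanishes:
m, v = ns_moments(n=1000, pi=0.5)
print(m, v)  # 500.0 250.0 -- plain binomial mean and variance
m_drift, v_drift = ns_moments(n=1000, pi=0.83)
print(m_drift)  # drift pushes the expected number of sequences well above 500
```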

21 The Effect Of Drift Asymptotically, ĈJ is normally distributed around π_s/(1 − π_s); applying the mean and variance above (via the delta method) to CJ (1937)’s estimate of 1.17 yields its standard error. Under the null, 1.19 is not statistically different from 1.17, nor is 1.17 different from 1.0.

22 The Effect Of Drift You would need a much higher π to find any significance. How high? To reject the random walk using this test you would need a π of about 0.75. Thus you’d need almost a ¾ chance of prices going up (or down) every year to detect deviations from a random walk with this test. This test doesn’t have much power to detect deviations from the random walk given the historical estimates of the parameters of the US economy.

23 The Runs Test Used to detect “streakiness” in data. For example: identify streaks in athletic performances (basketball, baseball). The idea is to look at the data and see whether it was generated by a set of binomial trials where the probability is estimated in-sample.

24 The Runs Test Consider the sequence 1 0 0 1 1 1 0 1 0 0. It has six “runs” in it – three of 1’s (of lengths 1, 3, and 1) and three of 0’s (of lengths 2, 1, and 2). Now suppose you have a sample of n multinomial trials. –Let π_i be the probability that an event of type i occurs in a period. –Then N_runs(i) is the total number of runs of the ith type. To do any testing we must find the sampling distribution of the number of runs in this situation.
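Run lengths can be tallied with itertools.groupby; a small helper of our own that reproduces the example’s counts:

```python
# Tally the runs of each symbol in a sequence: consecutive equal symbols
# form one run, and we record the length of each run per symbol.
from itertools import groupby

def run_lengths(seq):
    runs = {}
    for symbol, grp in groupby(seq):
        runs.setdefault(symbol, []).append(len(list(grp)))
    return runs

# The slide's example: three runs of 1's (lengths 1, 3, 1), three of 0's (2, 1, 2).
print(run_lengths([1, 0, 0, 1, 1, 1, 0, 1, 0, 0]))
# {1: [1, 3, 1], 0: [2, 1, 2]}
```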

25 Mood (1940) derives the sampling distribution of the number of runs for the discrete (multinomial) case.

26 The Runs Test Intuition for the Bernoulli case (2 types, up and down): let π be the probability of an “up.” Then the expected total number of runs is 2nπ(1 − π) + π² + (1 − π)². –Note that this value is maximized at π = ½.
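The expected-runs formula is easy to tabulate, and tabulating it shows both the peak at π = ½ and the drop under drift discussed next:

```python
# Expected total number of runs in n Bernoulli trials, per the slide:
#   E[runs] = 2*n*pi*(1-pi) + pi^2 + (1-pi)^2
def expected_runs(n, pi):
    return 2 * n * pi * (1 - pi) + pi**2 + (1 - pi)**2

print(expected_runs(1000, 0.5))   # 500.5 -- the maximum, at pi = 1/2
print(expected_runs(1000, 0.83))  # far fewer expected runs under strong drift
```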

27 The Runs Test What is the sensitivity of the total number of runs to drift? In CLM table 2, the expected total number of runs for a sample of 1000 from a geometric random walk with normally distributed increments, drifts of zero through 20%, and a standard deviation of 21% is presented. In this case π = Φ(μ/σ). The expected number of runs falls from about 500 at π = ½ to only about 283 at μ = 20%, or π ≈ 83% (if σ = 21%). Fewer runs are expected, but the drift means that the runs of “ups” will be longer.

28 The Runs Test The Bernoulli test statistic standardizes the observed number of runs by its expected value and standard deviation under the null; Wallis and Roberts suggest a continuity correction to improve the normal approximation. Fama (1965) finds no evidence against the RW using the runs test. Recently the theory has been extended to non-i.i.d. sequences and in other ways, but we will not examine these contributions.
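For concreteness, a hedged sketch of a runs z-statistic in the standard Wald–Wolfowitz form, which conditions on the observed counts of ups and downs; CLM’s Bernoulli version with the Wallis–Roberts continuity correction differs in detail:

```python
# Wald-Wolfowitz runs z-statistic: given the observed counts n1 (ups) and
# n0 (downs), the number of runs R has
#   E[R]   = 2*n1*n0/n + 1
#   Var[R] = 2*n1*n0*(2*n1*n0 - n) / (n^2 * (n - 1)),   n = n1 + n0,
# and z = (R - E[R]) / sqrt(Var[R]) is approximately standard normal.
import math

def runs_z(seq):
    n1 = sum(1 for s in seq if s == 1)
    n0 = len(seq) - n1
    n = n1 + n0
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 2 * n1 * n0 / n + 1
    var = 2 * n1 * n0 * (2 * n1 * n0 - n) / (n**2 * (n - 1))
    return (runs - mean) / math.sqrt(var)

print(runs_z([1, 0] * 10))           # large positive z: too many runs (mean-reverting)
print(runs_z([1] * 10 + [0] * 10))   # large negative z: too few runs (streaky)
```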
