1 Ka-fu Wong, University of Hong Kong
A Brief Review of Probability, Statistics, and Regression for Forecasting

2 Random variable
A random variable is a mapping from the set of all possible outcomes to the real numbers.
Example: today's Hang Seng Index can go up, go down, or stay the same as yesterday. Consider the movement of the Hang Seng Index over a month of 22 trading days. We can define a random variable Y as the number of days on which the Hang Seng Index goes up. In this case Y can assume 23 values: y = 0, 1, 2, …, 22.
Discrete random variables can assume only a countable number of values. A discrete probability distribution gives the probability of occurrence of every possible event; for instance, p_i is the probability that event i occurs.
Continuous random variables can assume a continuum of values. A probability density function f(y) is a nonnegative continuous function such that the area under f(y) between any two points a and b is the probability that Y assumes a value between a and b.
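
To make the Hang Seng example concrete, here is a minimal Python sketch. It assumes, purely for illustration, that the index goes up on each of the 22 trading days independently with the same probability p (an assumption the slide does not make), so that Y follows a Binomial(22, p) distribution.

```python
from math import comb

# Hypothetical model: each of n = 22 trading days is "up" independently with
# probability p, so Y ~ Binomial(n, p) and P(Y = y) = C(n, y) p^y (1 - p)^(n - y).
def pmf_up_days(y: int, p: float = 0.5, n: int = 22) -> float:
    """Probability that the index goes up on exactly y of the n trading days."""
    return comb(n, y) * p**y * (1 - p) ** (n - y)

# The probabilities over the 23 possible values y = 0, 1, ..., 22 sum to 1.
print(pmf_up_days(11))                              # P(Y = 11)
print(sum(pmf_up_days(y) for y in range(23)))       # sanity check: 1.0
```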

3 Moments
Mean (measures central tendency): $\mu = E(y)$
Variance (measures dispersion around the mean): $\sigma^2 = E[(y - \mu)^2]$
Standard deviation: $\sigma = \sqrt{\sigma^2}$
Skewness (measures the amount of asymmetry in a distribution): $S = E[(y - \mu)^3] / \sigma^3$
Kurtosis (measures the thickness of the tails of a distribution): $K = E[(y - \mu)^4] / \sigma^4$
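
As a sketch of these definitions (the function name and the die example are mine, not from the slides), the moments of any discrete distribution can be computed directly from its support and probabilities:

```python
import numpy as np

# Population moments of a discrete distribution given values and probabilities.
def population_moments(values, probs):
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mu = np.sum(probs * values)                         # mean E(y)
    var = np.sum(probs * (values - mu) ** 2)            # variance E[(y - mu)^2]
    sd = np.sqrt(var)                                   # standard deviation
    skew = np.sum(probs * (values - mu) ** 3) / sd**3   # skewness
    kurt = np.sum(probs * (values - mu) ** 4) / sd**4   # kurtosis (3 for a normal)
    return mu, var, sd, skew, kurt

# A fair die: mean 3.5, zero skewness, kurtosis below 3 (thin tails).
print(population_moments(range(1, 7), [1 / 6] * 6))
```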

4 Multivariate Random Variables
Joint distribution: f(x, y)
Covariance (measures linear dependence between two variables): $\mathrm{cov}(x, y) = E[(x - \mu_x)(y - \mu_y)]$
Correlation: $\mathrm{corr}(x, y) = \mathrm{cov}(x, y) / (\sigma_x \sigma_y)$
Conditional distribution: f(y | x) = f(x, y) / f(x)
Conditional mean: E(y | x)
Conditional variance: $\mathrm{var}(y \mid x) = E[(y - E(y \mid x))^2 \mid x]$
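
A minimal numerical sketch with made-up data: numpy's sample covariance and correlation estimate the population quantities defined above.

```python
import numpy as np

# Sample covariance and correlation, estimating cov(x, y) and corr(x, y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

cov_xy = np.cov(x, y, ddof=1)[0, 1]    # sample covariance between x and y
corr_xy = np.corrcoef(x, y)[0, 1]      # correlation, always between -1 and 1
print(cov_xy, corr_xy)
```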

5 Statistics
Sample mean: $\bar{y} = \frac{1}{T} \sum_{t=1}^{T} y_t$
Sample variance: $\hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^{T} (y_t - \bar{y})^2$ or $s^2 = \frac{1}{T-1} \sum_{t=1}^{T} (y_t - \bar{y})^2$
Sample standard deviation: $\hat{\sigma} = \sqrt{\hat{\sigma}^2}$ or $s = \sqrt{s^2}$

6 Statistics
Sample skewness: $\hat{S} = \frac{1}{T} \sum_{t=1}^{T} (y_t - \bar{y})^3 / \hat{\sigma}^3$
Sample kurtosis: $\hat{K} = \frac{1}{T} \sum_{t=1}^{T} (y_t - \bar{y})^4 / \hat{\sigma}^4$
Jarque-Bera test statistic: $JB = T \left( \frac{\hat{S}^2}{6} + \frac{(\hat{K} - 3)^2}{24} \right)$
Under the null of independent, normally distributed observations, JB is distributed in large samples as chi-square with two degrees of freedom.
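
These sample statistics translate directly into code; a sketch follows (the function name is mine, not from the slides). For normal data the JB statistic should usually fall below the chi-square(2) 5% critical value of about 5.99.

```python
import numpy as np

# Sample skewness, sample kurtosis, and the Jarque-Bera statistic.
def jarque_bera(y):
    y = np.asarray(y, dtype=float)
    T = len(y)
    dev = y - y.mean()
    sigma = np.sqrt(np.mean(dev ** 2))           # sample standard deviation (1/T form)
    S = np.mean(dev ** 3) / sigma**3             # sample skewness
    K = np.mean(dev ** 4) / sigma**4             # sample kurtosis
    JB = T * (S**2 / 6 + (K - 3) ** 2 / 24)      # ~ chi-square(2) under normality
    return S, K, JB

rng = np.random.default_rng(0)
print(jarque_bera(rng.normal(size=1000)))   # JB should be well below ~5.99
```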

7 Example: What is our expectation of y given x = 0?

8 Forecast
Suppose we want to forecast the value of a variable y given the value of a variable x. Denote that forecast y^f|x.

9 Conditional expectation as a forecast
Think of y and x as random variables jointly drawn from some underlying population. It seems reasonable to construct the forecast of y based on x as the expected value of y conditional on x, i.e., y^f|x = E(y|x), the average population value of y given that value of x. E(y|x) is also called the population regression of y (on x).

10 Conditional expectation as a forecast
The expected value of y conditional on x: y^f|x = E(y|x). It turns out that in many reasonable forecasting settings this forecast has optimal properties (in particular, it minimizes expected squared-error loss), and (approximating) this forecast guides our choice of forecast method.
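
A simulation sketch of this optimality claim, under an assumed linear population of my own choosing: forecasting with E(y|x) yields a smaller mean squared error than a competing biased forecast, and its forecast errors average to zero.

```python
import numpy as np

# Hypothetical linear population: y = 2 + 3x + noise, so E(y|x) = 2 + 3x.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100_000)
y = 2 + 3 * x + rng.normal(0, 1, x.size)

mse_cond_mean = np.mean((y - (2 + 3 * x)) ** 2)   # forecasting with E(y|x)
mse_biased = np.mean((y - (2.5 + 3 * x)) ** 2)    # a competing, biased forecast
print(mse_cond_mean, mse_biased)                  # about 1.0 vs about 1.25
print(np.mean(y - (2 + 3 * x)))                   # expected forecast error ~ 0
```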

11 Unbiasedness of the conditional expectation as a forecast
The forecast error will be y − E(y|x).
Expected forecast error = E[y − E(y|x)] = E(y) − E[E(y|x)] = E(y) − E(y) = 0, where the second step uses the law of iterated expectations, E[E(y|x)] = E(y).
Thus the conditional expectation is an unbiased forecast. Note that another name for E(y|x) is the population regression of y (on x).

12 Some operational assumptions about E(y|x)
To proceed in this direction, we need to make some additional assumptions about the underlying population and, in particular, about the form of E(y|x). The simplest assumption is that the conditional expectation is a linear function of x, i.e., E(y|x) = β_0 + β_1 x.
If β_0 and β_1 are known, then the forecast problem is solved by setting y^f|x = β_0 + β_1 x.

13 When parameters are unknown
Even if the conditional expectation is linear in x, the parameters β_0 and β_1 will be unknown. The next best thing is to estimate β_0 and β_1 and use the estimates in place of the actual values to form the forecast. This substitution will not provide as accurate a forecast, since it introduces a new source of forecast error, "estimation error" or "sampling error." However, under certain conditions the resulting forecast will still be unbiased and retain certain optimality properties.

14 When parameters are unknown
Suppose we have access to a sample of T pairs of (x, y) drawn from the same population from which the relevant value of y will be drawn: (x_1, y_1), (x_2, y_2), …, (x_T, y_T). In this case a natural estimator of β_0 and β_1 is the ordinary least squares (OLS) estimator, obtained by minimizing the sum of squared residuals $\sum_{t=1}^{T} (y_t - \beta_0 - \beta_1 x_t)^2$ with respect to β_0 and β_1. The solutions are the OLS estimates $\hat{\beta}_0$ and $\hat{\beta}_1$. Then, for a given value of x, we can forecast y according to $y^f|x = \hat{\beta}_0 + \hat{\beta}_1 x$.
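
A sketch of the closed-form OLS solution for simple regression (the data and function name are mine): the slope is the ratio of the sample covariance of x and y to the sample variance of x, and the intercept passes the line through the sample means.

```python
import numpy as np

# Closed-form simple OLS:
# beta1_hat = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2); beta0_hat = ybar - beta1_hat * xbar.
def ols_fit(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.2, 3.8, 6.1, 7.9, 10.2])
b0, b1 = ols_fit(x, y)
print(b0 + b1 * 6.0)   # y^f | x = 6: forecast using the estimated line
```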

15 Fitting a regression line: estimating β_0 and β_1

16 When parameters are unknown
This estimation procedure, also called the sample regression of y on x, will provide us with a "good" estimate of the conditional expectation of y given x (i.e., the population regression of y on x) and, therefore, a "good" forecast of y given x, provided that certain additional assumptions apply to the relationship between y and x.
Let ε denote the difference between y and E(y|x). That is, ε = y − E(y|x), so that y = E(y|x) + ε, and y = β_0 + β_1 x + ε if E(y|x) = β_0 + β_1 x.

17 When parameters are unknown
The assumptions that we need pertain to these ε's (the "other factors" that determine y) and their relationship to the x's. For instance, so long as E(ε_t | x_1, …, x_T) = 0 for t = 1, …, T, the OLS estimators of β_0 and β_1 based on the data (x_1, y_1), …, (x_T, y_T) will be unbiased and, as a result, the forecast constructed by replacing these "population parameters" with the OLS estimates will be unbiased.
A standard set of assumptions that provides us with a lot of value: given x_1, …, x_T, the errors ε_1, …, ε_T are i.i.d. N(0, σ²) random variables.
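
A Monte Carlo sketch of this unbiasedness claim (all parameter values are mine): when the errors satisfy E(ε_t | x) = 0, the OLS slope estimate averages out to the true β_1 across repeated samples.

```python
import numpy as np

# Hypothetical setup: y = 2 + 3x + eps with E(eps | x) = 0; re-estimate the
# slope on 2000 independent samples and average the estimates.
rng = np.random.default_rng(4)
beta0, beta1, T = 2.0, 3.0, 50
estimates = []
for _ in range(2000):
    x = rng.uniform(0, 1, T)
    y = beta0 + beta1 * x + rng.normal(0, 1, T)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    estimates.append(b1)
print(np.mean(estimates))   # close to the true slope 3.0: OLS is unbiased
```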

18 When parameters are unknown
These ideas and procedures extend naturally to the setting where we want to forecast the value of y based on the values of k other variables, say x_1, …, x_k. We begin by considering the conditional expectation, or population regression, of y on x_1, …, x_k to make our forecast. That is, y^f|x_1, …, x_k = E(y | x_1, …, x_k).
To operationalize this forecast, we first assume that the conditional expectation is linear, i.e., E(y | x_1, …, x_k) = β_0 + β_1 x_1 + … + β_k x_k.

19 When parameters are unknown
The unknown β's are generally replaced by the estimates from a sample OLS regression. Suppose we have the data set (y_1, x_11, …, x_k1), (y_2, x_12, …, x_k2), …, (y_T, x_1T, …, x_kT). The OLS estimates of the unknown parameters are obtained by minimizing the sum of squared residuals $\sum_{t=1}^{T} (y_t - \beta_0 - \beta_1 x_{1t} - \cdots - \beta_k x_{kt})^2$.
As in the case of the simple regression model, this procedure for estimating the population regression function will have good properties provided that the regression errors ε_t = y_t − E(y_t | x_1t, …, x_kt), t = 1, …, T, have appropriate properties.
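
A sketch of the multiple-regression case with k = 2 regressors and simulated data (the setup is mine): stack a column of ones for the intercept and let numpy solve the least-squares problem.

```python
import numpy as np

# Simulated data from y = 1 + 0.5*x1 - 2*x2 + noise.
rng = np.random.default_rng(2)
T = 200
x1 = rng.normal(size=T)
x2 = rng.normal(size=T)
y = 1.0 + 0.5 * x1 - 2.0 * x2 + rng.normal(0, 1, T)

X = np.column_stack([np.ones(T), x1, x2])          # design matrix [1, x1, x2]
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates (beta0, beta1, beta2)
print(beta_hat)

# Forecast y at new regressor values x1 = 0.3, x2 = -1.2.
print(np.array([1.0, 0.3, -1.2]) @ beta_hat)
```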

20 Example: Multiple linear regression

21 Residual plots

22 Density Forecasts and Interval Forecasts
The procedures described above produce point forecasts of y. They can also be used to produce density and interval forecasts of y, provided that the x's and the regression errors, i.e., the ε's, meet certain conditions.
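
For instance, under the assumption that the ε's are i.i.d. N(0, σ²), an approximate 95% interval forecast is the point forecast plus or minus 1.96 times the residual standard error. The sketch below (data mine) ignores the additional variance that comes from parameter-estimation error.

```python
import numpy as np

# Approximate 95% interval forecast under assumed i.i.d. normal errors.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 4.2, 5.8, 8.3, 9.7, 12.2, 13.8, 16.1])

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s = np.sqrt(resid @ resid / (len(y) - 2))     # residual standard error

y_hat = np.array([1.0, 9.0]) @ beta_hat       # point forecast at x = 9
print(y_hat - 1.96 * s, y_hat + 1.96 * s)     # approximate 95% interval forecast
```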

23 End

