
1
Hypothesis Testing

Steps in Hypothesis Testing:
1. State the hypotheses
2. Identify the test statistic and its probability distribution
3. Specify the significance level
4. State the decision rule
5. Collect the data and perform the calculations
6. Make the statistical decision
7. Make the economic or investment decision

One-Tailed Test (α = 5%)
Null hypothesis: μ ≤ μ0; Alternative hypothesis: μ > μ0
The rejection area lies in one tail of the distribution.

Two-Tailed Test (α = 5%)
Null hypothesis: μ = μ0; Alternative hypothesis: μ ≠ μ0
The rejection area is split between both tails.
Here μ0 is the hypothesised mean.

[Figure: sampling distributions showing the 5% rejection area(s) for the one- and two-tailed tests]
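The seven steps can be sketched in code. A minimal sketch in Python for a one-tailed test of a mean with a known population standard deviation; all the numbers are made up for illustration:

```python
import math

# Step 1 (hypothetical example): H0: mu <= 100 vs Ha: mu > 100 (one-tailed)
mu_0 = 100.0          # hypothesised mean
sigma = 12.0          # population standard deviation, assumed known here
n = 64                # sample size
sample_mean = 103.0   # step 5: calculated from the collected data

# Step 2: with sigma known, the test statistic is standard normal
z = (sample_mean - mu_0) / (sigma / math.sqrt(n))

# Steps 3-4: 5% significance, one-tailed => critical value 1.645
z_critical = 1.645

# Step 6: the statistical decision
reject_null = z > z_critical
print(z, reject_null)
```

Here z = 2.0 > 1.645, so the null is rejected; step 7 (the economic or investment decision) is then made in light of that statistical result.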

2
Hypothesis Testing – Test Statistic & Errors

Test Concerning a Single Mean
Test statistic: t = (x̄ − μ0) / (s / √n), with degrees of freedom = n − 1

Type I and Type II Errors
- Type I error: rejecting the null when it is true. Probability = significance level.
- Type II error: failing to reject the null when it is false.
- The power of a test is the probability of correctly rejecting the null (i.e. rejecting the null when it is false).

Decision            | H0 true      | H0 false
--------------------|--------------|--------------
Do not reject null  | Correct      | Type II error
Reject null         | Type I error | Correct
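The single-mean test statistic t = (x̄ − μ0)/(s/√n) can be computed directly. A minimal sketch with made-up sample data, testing H0: μ = 5:

```python
import math
import statistics

# Hypothetical sample data; H0: mu = 5 vs Ha: mu != 5
data = [4.8, 5.1, 5.3, 4.9, 5.6, 5.0, 5.2, 4.7]
mu_0 = 5.0

n = len(data)
x_bar = statistics.mean(data)
s = statistics.stdev(data)                 # sample standard deviation

t = (x_bar - mu_0) / (s / math.sqrt(n))    # df = n - 1 = 7
print(round(t, 3))
```

Here t ≈ 0.728, well below the two-tailed 5% critical value for df = 7 (about 2.365 from the t-tables), so we would fail to reject the null for this toy sample.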

3
Hypothesis about Two Population Means

Normally distributed populations and independent samples.
Examples of hypotheses: H0: μ1 − μ2 = 0 vs Ha: μ1 − μ2 ≠ 0

Population variances unknown but assumed to be equal:
t = (x̄1 − x̄2) / √( sp²(1/n1 + 1/n2) )
where sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2) is a pooled estimator of the common variance.
Degrees of freedom = n1 + n2 − 2

Population variances unknown and cannot be assumed equal: the sample variances are used separately,
t = (x̄1 − x̄2) / √( s1²/n1 + s2²/n2 ), with an approximated number of degrees of freedom.
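The equal-variances case can be sketched from summary statistics alone. A minimal sketch, with made-up sample means, variances and sizes:

```python
import math

# Hypothetical summary statistics for two independent samples
x1_bar, s1_sq, n1 = 10.0, 4.0, 10
x2_bar, s2_sq, n2 = 8.0, 5.0, 12

# Pooled estimator of the common variance (variances assumed equal)
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

# Test statistic for H0: mu1 - mu2 = 0, df = n1 + n2 - 2 = 20
t = (x1_bar - x2_bar) / math.sqrt(sp_sq * (1 / n1 + 1 / n2))
print(round(sp_sq, 2), round(t, 2))
```

For these toy numbers sp² = 4.55 and t ≈ 2.19, which would then be compared with the critical t for 20 degrees of freedom.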

4
Hypothesis about Two Population Means – "Paired Comparisons Test"

Normally distributed populations and samples that are not independent.
Possible hypotheses: H0: μd = 0 vs Ha: μd ≠ 0, where μd is the mean of the paired differences.

Symbols and other formulae
Test statistic: t = d̄ / (sd / √n), with degrees of freedom = n − 1,
where d̄ is the sample mean difference and sd is the standard deviation of the differences.

Application
- The data are arranged in paired observations.
- Paired observations are observations that are dependent because they have something in common,
  e.g. dividend payout of companies before and after a change in tax law.
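The paired-comparisons statistic t = d̄/(sd/√n) follows directly from the differences. A minimal sketch using made-up before/after payout ratios for five companies:

```python
import math
import statistics

# Hypothetical payout ratios for five companies before/after a tax change
before = [0.40, 0.35, 0.50, 0.45, 0.30]
after  = [0.42, 0.40, 0.55, 0.44, 0.36]

diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
d_bar = statistics.mean(diffs)       # mean paired difference
s_d = statistics.stdev(diffs)        # std dev of the differences

# Test statistic for H0: mean difference = 0, df = n - 1 = 4
t = d_bar / (s_d / math.sqrt(n))
print(round(d_bar, 3), round(t, 2))
```

For these toy numbers d̄ = 0.034 and t ≈ 2.64, to be compared with the critical t for 4 degrees of freedom.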

5
Hypothesis about a Single Population Variance

Possible hypotheses (assuming a normal population): H0: σ² = σ0² vs Ha: σ² ≠ σ0²

Test statistic: χ² = (n − 1)s² / σ0²

Symbols
- s² = variance of the sample data
- σ0² = hypothesised value of the population variance
- n = sample size
- Degrees of freedom = n − 1

The chi-square distribution is asymmetrical and bounded below by 0.
For a two-tailed test, reject H0 if the statistic falls below the lower critical value, obtained from the
chi-square tables at (df, 1 − α/2), or above the higher critical value, obtained at (df, α/2); otherwise
fail to reject H0.
NB: For a one-tailed test use α or (1 − α) depending on whether it is a right-tail or left-tail test.
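The statistic χ² = (n − 1)s²/σ0² is simple arithmetic. A minimal sketch with a made-up sample variance and hypothesised value:

```python
# Hypothetical example: H0: population variance = 2.0 (sigma_0 squared)
n = 25
s_sq = 2.5          # sample variance
sigma0_sq = 2.0     # hypothesised population variance

# Test statistic, df = n - 1 = 24
chi_sq = (n - 1) * s_sq / sigma0_sq
print(chi_sq)
```

Here χ² = 30 on 24 degrees of freedom; for a two-tailed 5% test it would be compared with the two chi-square table values at (24, 0.975) and (24, 0.025), roughly 12.40 and 39.36, so the null would not be rejected in this toy case.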

6
Hypothesis about Variances of Two Populations

Possible hypotheses (assuming normal populations): H0: σ1² = σ2² vs Ha: σ1² ≠ σ2²

Test statistic: F = s1² / s2²
The convention is to always put the larger variance on top.
Degrees of freedom: numerator = n1 − 1, denominator = n2 − 1

F distributions are asymmetrical and bounded below by 0.
Reject H0 if the statistic exceeds the critical value, obtained from the F-distribution table for:
- one-tailed test: α
- two-tailed test: α/2
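The F-statistic and its degrees of freedom can be sketched in a few lines, with made-up sample variances (the larger one placed in the numerator, per the convention above):

```python
# Hypothetical sample variances; larger variance goes in the numerator
s1_sq, n1 = 6.2, 16
s2_sq, n2 = 4.0, 21

F = s1_sq / s2_sq
df_num, df_den = n1 - 1, n2 - 1   # 15 and 20
print(round(F, 2), df_num, df_den)
```

For these toy numbers F = 1.55 with (15, 20) degrees of freedom, to be compared with the tabulated critical value at α/2 for a two-tailed test.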

7
Correlation Analysis

Scatter plots display the paired observations of the two variables, x and y.
[Figure: scatter plot of y against x]

Sample Covariance and Correlation Coefficient
The correlation coefficient measures the direction and extent of linear association between two
variables: r = cov(x, y) / (sx · sy)

Testing the Significance of the Correlation Coefficient
Set H0: ρ = 0, and Ha: ρ ≠ 0
Test statistic: t = r√(n − 2) / √(1 − r²), with degrees of freedom = n − 2
Reject null if |test statistic| > critical t.
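The significance test for a correlation coefficient needs only r and n. A minimal sketch with made-up values:

```python
import math

# Hypothetical sample correlation from n paired observations
r = 0.5
n = 30

# Test statistic for H0: rho = 0, df = n - 2 = 28
t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
print(round(t, 2))
```

Here t ≈ 3.06, which exceeds the two-tailed 5% critical t for 28 degrees of freedom (about 2.048), so the null of zero correlation would be rejected for this toy example.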

8
Parametric and Nonparametric Tests

Parametric tests rely on assumptions regarding the distribution of the population and are specific to
population parameters. All tests covered on the previous slides are examples of parametric tests.

Nonparametric tests either do not concern a particular population parameter, or make few assumptions
about the population that is sampled. They are used primarily in three situations:
- when the data do not meet distributional assumptions
- when the data are given in ranks
- when the hypothesis being addressed does not concern a parameter (e.g. is a sample random or not?)
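As one illustration of the ranks case, a Spearman rank correlation replaces the raw observations with their ranks and so makes no distributional assumption. A minimal sketch; the helper `spearman_rho` is hypothetical and the toy data contain no ties:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n(n^2 - 1))."""
    def ranks(v):
        # rank 1 = smallest value (no ties in this toy data)
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_sq / (n * (n * n - 1))

rho = spearman_rho([3, 1, 4, 2, 5], [2, 1, 5, 3, 4])
print(rho)
```

Only the ordering of the observations matters, which is why the statistic survives data that fail parametric distributional assumptions.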

9
Linear Regression

Basic idea: a linear relationship between two variables, X (the independent variable) and Y (the
dependent variable):
Yi = b0 + b1·Xi + εi
where εi is the error term (residual). The mean of the εi values = 0.

Least squares regression finds the straight line that minimises the sum of the squared residuals.
[Figure: scatter plot of Y against X with the fitted regression line; the vertical distance from each
point (Xi, Yi) to the line is the residual εi]

Note that the standard error of estimate (SEE) is in the same units as Y and hence should be viewed
relative to Y.
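The least-squares line can be computed directly from the normal equations: b1 = Σ(x − x̄)(y − ȳ)/Σ(x − x̄)², b0 = ȳ − b1·x̄. A minimal sketch with made-up data:

```python
import math

# Hypothetical data; least squares minimises the sum of squared residuals
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 5.0, 4.0, 5.0]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Slope and intercept from the normal equations
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar

# Standard error of estimate: in the same units as y
residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
see = math.sqrt(sum(e * e for e in residuals) / (n - 2))
print(b0, b1, round(see, 4))
```

For these toy points the fitted line is y = 2.2 + 0.6x with SEE ≈ 0.894, which is judged relative to the scale of the y values (here around 2 to 5).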

10
The Components of Total Variation

Total variation in the dependent variable splits into an explained and an unexplained component:
Sum of squares total (SST) = Sum of squares regression (SSR) + Sum of squared errors (SSE)

11
ANOVA, Standard Error of Estimate & R²

Coefficient of determination: R² is the proportion of the total variation in y that is explained by
the variation in x:
R² = SSR / SST = 1 − SSE / SST
where SSR is the sum of squares regression, SSE the sum of squared errors, and SST the sum of squares
total (SST = SSR + SSE).

Standard error of estimate: SEE = √( SSE / (n − 2) )

Interpretation
When correlation is strong (weak, i.e. near to zero):
- R² is high (low)
- the standard error of the estimate is low (high)
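The SST = SSR + SSE decomposition and R² can be checked numerically. A minimal self-contained sketch with made-up data, fitting the least-squares line and then splitting the variation:

```python
import math

# Hypothetical data; fit the least-squares line, then decompose the variation
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar
fitted = [b0 + b1 * x for x in xs]

sst = sum((y - y_bar) ** 2 for y in ys)              # total variation
sse = sum((y - f) ** 2 for y, f in zip(ys, fitted))  # unexplained variation
ssr = sst - sse                                      # explained variation

r_squared = ssr / sst                                # = 1 - SSE/SST
see = math.sqrt(sse / (n - 2))
print(round(r_squared, 2), round(see, 3))
```

For these toy points R² = 0.6 (60% of the variation in y is explained by x) and SEE ≈ 0.894, illustrating the high-R²-goes-with-low-SEE interpretation above.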

12
Assumptions & Limitations of Regression Analysis

Assumptions
1. The relationship between the dependent variable, Y, and the independent variable, X, is linear
2. The independent variable, X, is not random
3. The expected value of the error term is 0
4. The variance of the error term is the same for all observations (homoskedasticity)
5. The error term is uncorrelated across observations (i.e. no autocorrelation)
6. The error term is normally distributed

Limitations
1. Regression relations change over time (non-stationarity)
2. If the assumptions are not valid, the interpretation and tests of hypotheses are not valid
3. When any of the assumptions underlying linear regression are violated, we cannot rely on the
   parameter estimates, test statistics, or point and interval forecasts from the regression
