Slide 6.1 Linear Hypotheses MathematicalMarketing In This Chapter We Will Cover Deductions we can make about β even though it is not observed. These include:
- Confidence Intervals
- Hypotheses of the form H₀: βᵢ = c
- Hypotheses of the form H₀: βᵢ ≥ c
- Hypotheses of the form H₀: a′β = c
- Hypotheses of the form Aβ = c
We also cover deductions when V(e) ≠ σ²I (Generalized Least Squares).

Slide 6.2 Linear Hypotheses MathematicalMarketing The Variance of the Estimator V(y) = V(Xβ + e) = V(e) = σ²I. From these two raw ingredients and a theorem about the variance of a linear combination, V(Ay) = A V(y) A′, we conclude that V(β̂) = (X′X)⁻¹X′ V(y) X(X′X)⁻¹ = σ²(X′X)⁻¹.

Slide 6.3 Linear Hypotheses MathematicalMarketing What of the Distribution of the Estimator? β̂ is distributed as normal, by the Central Limit Theorem applied to β̂ as a linear combination of the observations in y.

Slide 6.4 Linear Hypotheses MathematicalMarketing So What Can We Conclude About the Estimator? From the Central Limit Theorem, β̂ is normally distributed. From the variance of a linear combination plus the assumptions about e, V(β̂) = σ²(X′X)⁻¹. From Ch 5, using the expectation of a linear combination, E(β̂) = β.

Slide 6.5 Linear Hypotheses MathematicalMarketing Steps Towards Inference About β In general, a t statistic divides an estimator's deviation from its hypothesized value by the estimator's estimated standard error. In particular, here the estimator is β̂ = (X′X)⁻¹X′y and its estimated variance is V̂(β̂) = s²(X′X)⁻¹. But note the hat on the V!
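To make these quantities concrete, here is a minimal numpy sketch (the data are synthetic and the variable names are my own, not from the chapter) that computes β̂ = (X′X)⁻¹X′y and the estimated covariance matrix s²(X′X)⁻¹:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                            # hypothetical sample size and number of columns in X (incl. intercept)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # design matrix with an intercept column
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)      # synthetic responses

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                   # beta-hat = (X'X)^-1 X'y
e = y - X @ b                           # residuals
s2 = e @ e / (n - k)                    # s^2, the estimate of sigma^2 -- hence the hat on the V
V_hat = s2 * XtX_inv                    # estimated V(beta-hat) = s^2 (X'X)^-1
```

The square roots of the diagonal of V_hat are the estimated standard errors used in the t ratios that follow.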

Slide 6.6 Linear Hypotheses MathematicalMarketing Let's Think About the Denominator The estimated standard error of β̂ᵢ is √(s² dᵢᵢ), where the dᵢᵢ are diagonal elements of D = (X′X)⁻¹ = {dᵢⱼ}.

Slide 6.7 Linear Hypotheses MathematicalMarketing Putting It All Together Now that we have a t, t = (β̂ᵢ − βᵢ)/(s√dᵢᵢ) with n − k degrees of freedom, we can use it for two types of inference about β: Confidence Intervals and Hypothesis Testing.

Slide 6.8 Linear Hypotheses MathematicalMarketing A Confidence Interval for βᵢ A 1 − α confidence interval for βᵢ is given by β̂ᵢ ± t_{α/2} s√dᵢᵢ, where t_{α/2} is the critical value of the t distribution with n − k degrees of freedom, which simply means that Pr(β̂ᵢ − t_{α/2} s√dᵢᵢ ≤ βᵢ ≤ β̂ᵢ + t_{α/2} s√dᵢᵢ) = 1 − α.
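A sketch of how such an interval might be computed, reusing the same synthetic setup as before (the choice of coefficient i and of a 95% level are purely illustrative):

```python
import numpy as np
from scipy import stats

# same synthetic data and OLS fit as in the sketch under Slide 6.5
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
s2 = (y - X @ b) @ (y - X @ b) / (n - k)

i, alpha = 1, 0.05                            # coefficient of interest and confidence level (illustrative)
se_i = np.sqrt(s2 * XtX_inv[i, i])            # s * sqrt(d_ii)
t_crit = stats.t.ppf(1 - alpha / 2, df=n - k) # critical value with alpha/2 in each tail
lower, upper = b[i] - t_crit * se_i, b[i] + t_crit * se_i
```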

Slide 6.9 Linear Hypotheses MathematicalMarketing Graphic of Confidence Interval [figure: a confidence interval centered on β̂ᵢ, intended to cover βᵢ]

Slide 6.10 Linear Hypotheses MathematicalMarketing Statistical Hypothesis Testing: Step One Generate two mutually exclusive hypotheses: H₀: βᵢ = c versus Hₐ: βᵢ ≠ c.

Slide 6.11 Linear Hypotheses MathematicalMarketing Statistical Hypothesis Testing Step Two Summarize the evidence with respect to H₀ with the t statistic t = (β̂ᵢ − c)/(s√dᵢᵢ).

Slide 6.12 Linear Hypotheses MathematicalMarketing Statistical Hypothesis Testing Step Three Reject H₀ if the probability of the evidence given H₀ is small, i.e. if |t| exceeds the α/2 critical value of the t distribution with n − k degrees of freedom.
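The three steps might look like this in code, again on the synthetic data (the coefficient index, c, and α below are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

# same synthetic data and OLS fit as in the sketch under Slide 6.5
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
s2 = (y - X @ b) @ (y - X @ b) / (n - k)

# Step 1: H0: beta_i = c versus HA: beta_i != c
i, c, alpha = 1, 0.0, 0.05
# Step 2: summarize the evidence with the t statistic
t_stat = (b[i] - c) / np.sqrt(s2 * XtX_inv[i, i])
# Step 3: reject H0 if the evidence is improbable under H0
p_value = 2 * stats.t.sf(abs(t_stat), df=n - k)
reject = p_value < alpha
```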

Slide 6.13 Linear Hypotheses MathematicalMarketing One-Tailed Hypotheses Our theories should give us a sign for Step One, in which case we might have H₀: βᵢ ≥ c versus Hₐ: βᵢ < c. In that case we reject H₀ if t falls below the negative of the α-level critical value, i.e. if t < −t_α.

Slide 6.14 Linear Hypotheses MathematicalMarketing A More General Formulation Consider a hypothesis of the form H₀: a′β = c. With c = 0, for example, a vector a with weights 1 and −1 on two coefficients tests H₀: β₁ = β₂, while weights of 1 and 1 test H₀: β₁ + β₂ = 0.

Slide 6.15 Linear Hypotheses MathematicalMarketing A t test for This More Complex Hypothesis We need to derive the denominator of the t using the variance of a linear combination, V(a′β̂) = a′V(β̂)a = σ² a′(X′X)⁻¹a, which leads to t = (a′β̂ − c)/(s√(a′(X′X)⁻¹a)).
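A sketch of this test on the synthetic data; the contrast vector a below is an illustrative choice testing the equality of two slopes, not something from the slides:

```python
import numpy as np
from scipy import stats

# same synthetic data and OLS fit as in the sketch under Slide 6.5
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
s2 = (y - X @ b) @ (y - X @ b) / (n - k)

a = np.array([0.0, 1.0, -1.0])              # weights 1 and -1: tests equality of the two slopes
c = 0.0
se_combo = np.sqrt(s2 * a @ XtX_inv @ a)    # sqrt of estimated V(a'beta-hat)
t_stat = (a @ b - c) / se_combo
p_value = 2 * stats.t.sf(abs(t_stat), df=n - k)
```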

Slide 6.16 Linear Hypotheses MathematicalMarketing Multiple Degree of Freedom Hypotheses These are hypotheses of the form H₀: Aβ = c, where the matrix A has more than one row, so that several linear restrictions on β are tested at once.

Slide 6.17 Linear Hypotheses MathematicalMarketing Examples of Multiple df Hypotheses An A matrix whose rows pick out β₂ and β₃ tests H₀: β₂ = β₃ = 0; an A matrix whose rows contrast β₁ with β₂ and β₂ with β₃ tests H₀: β₁ = β₂ = β₃.

Slide 6.18 Linear Hypotheses MathematicalMarketing Testing Multiple df Hypotheses The hypothesis sum of squares is SS H = (Aβ̂ − c)′[A(X′X)⁻¹A′]⁻¹(Aβ̂ − c), and with q rows in A we test H₀ using F = (SS H / q) / (SS Error / (n − k)).
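Assuming the standard general-linear-hypothesis formulation sketched above, the computation might look like this; the A matrix here is an illustrative choice that restricts the last two coefficients to zero:

```python
import numpy as np
from scipy import stats

# same synthetic data and OLS fit as in the sketch under Slide 6.5
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
sse_full = (y - X @ b) @ (y - X @ b)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])             # restricts the last two coefficients to zero (2 rows -> 2 df)
c = np.zeros(2)
diff = A @ b - c
ss_h = diff @ np.linalg.inv(A @ XtX_inv @ A.T) @ diff
F = (ss_h / A.shape[0]) / (sse_full / (n - k))
p_value = stats.f.sf(F, A.shape[0], n - k)
```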

Slide 6.19 Linear Hypotheses MathematicalMarketing Another Way to Think About SS H Assume we have an A matrix such as the examples above. We could calculate SS H by running two versions of the model, the full model and a model restricted to just β₁: SS H = SS Error (Restricted Model) − SS Error (Full Model), so F = (SS H / q) / (SS Error (Full Model) / (n − k)).
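A sketch of the restricted-versus-full comparison on the synthetic data; the particular restriction (keeping only the first column of X) and the helper function are illustrative, not from the chapter:

```python
import numpy as np
from scipy import stats

# same synthetic data as in the sketch under Slide 6.5
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

def sse(X_mat, y_vec):
    """Residual sum of squares from an OLS fit of y_vec on X_mat."""
    coef = np.linalg.lstsq(X_mat, y_vec, rcond=None)[0]
    resid = y_vec - X_mat @ coef
    return resid @ resid

sse_full = sse(X, y)
sse_restricted = sse(X[:, [0]], y)          # restricted model keeps only the first column of X
q = 2                                       # number of coefficients restricted to zero
ss_h = sse_restricted - sse_full
F = (ss_h / q) / (sse_full / (n - k))
p_value = stats.f.sf(F, q, n - k)
```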

Slide 6.20 Linear Hypotheses MathematicalMarketing A Hypothesis That All β's Are Zero If our hypothesis is H₀: β = 0, i.e. every element of β is zero, then the F is (SS Predictable / k) / (SS Error / (n − k)), which suggests a summary for the model: R² = SS Predictable / SS Total.
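A sketch of this overall test and model summary, using the uncorrected SS Total = y′y convention the chapter adopts; the data and names are again synthetic and illustrative:

```python
import numpy as np
from scipy import stats

# same synthetic data as in the sketch under Slide 6.5
rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b

ss_total = y @ y                            # y'y, the uncorrected SS Total used in these slides
ss_error = e @ e
ss_pred = ss_total - ss_error
R2 = ss_pred / ss_total                     # summary measure for the model
F = (ss_pred / k) / (ss_error / (n - k))    # F for H0: every element of beta (including the intercept) is zero
p_value = stats.f.sf(F, k, n - k)
```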

Slide 6.21 Linear Hypotheses MathematicalMarketing Generalized Least Squares When we cannot make the Gauss-Markov assumption that V(e) = σ²I, suppose instead that V(e) = σ²V. Our objective function becomes f = e′V⁻¹e.

Slide 6.22 Linear Hypotheses MathematicalMarketing SS Error for GLS SS Error = (y − Xβ̂)′V⁻¹(y − Xβ̂), with β̂ = (X′V⁻¹X)⁻¹X′V⁻¹y.
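A sketch of the GLS estimator and its objective function; the diagonal V used here is an arbitrary illustrative choice of error-variance pattern, assumed known:

```python
import numpy as np

# same synthetic X as in the sketch under Slide 6.5, but with heteroskedastic errors
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
weights = rng.uniform(0.5, 2.0, size=n)          # hypothetical known error-variance pattern
V = np.diag(weights)                             # V(e) = sigma^2 * V
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n) * np.sqrt(weights)

V_inv = np.linalg.inv(V)
b_gls = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)   # (X'V^-1 X)^-1 X'V^-1 y
e = y - X @ b_gls
sse_gls = e @ V_inv @ e                          # the GLS objective / SS Error, f = e'V^-1 e
```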

Slide 6.23 Linear Hypotheses MathematicalMarketing GLS Hypothesis Testing The same machinery applies: H₀: βᵢ = 0 is tested with a t whose denominator uses dᵢᵢ, where dᵢᵢ is the ith diagonal element of (X′V⁻¹X)⁻¹, and hypotheses of the form H₀: a′β = c and H₀: Aβ − c = 0 are tested as before, using the GLS β̂ and its variance.

Slide 6.24 Linear Hypotheses MathematicalMarketing Accounting for the Sum of Squares of the Dependent Variable e′e = y′y − y′X(X′X)⁻¹X′y, that is, SS Error = SS Total − SS Predictable; equivalently, y′y = y′X(X′X)⁻¹X′y + e′e, that is, SS Total = SS Predictable + SS Error.

Slide 6.25 Linear Hypotheses MathematicalMarketing SS Predicted and SS Total Are Quadratic Forms SS Total is y′y = y′Iy, and SS Predicted is y′Py. Here we have defined P = X(X′X)⁻¹X′.

Slide 6.26 Linear Hypotheses MathematicalMarketing The SS Error is a Quadratic Form Having defined P = X(X′X)⁻¹X′, now define M = I − P, i.e. I − X(X′X)⁻¹X′. The formula for SS Error then becomes SS Error = e′e = y′My.

Slide 6.27 Linear Hypotheses MathematicalMarketing Putting These Three Quadratic Forms Together SS Total = SS Predictable + SS Error, i.e. y′Iy = y′Py + y′My, and here we note that I = P + M.

Slide 6.28 Linear Hypotheses MathematicalMarketing M and P Are Linear Transforms of y ŷ = Py and e = My, so looking at the linear model, Iy = Py + My, and again we see that I = P + M.

Slide 6.29 Linear Hypotheses MathematicalMarketing The Amazing M and P Matrices ŷ = Py and ŷ′ŷ = SS Predicted = y′Py; e = My and e′e = SS Error = y′My. What does this imply about M and P?

Slide 6.30 Linear Hypotheses MathematicalMarketing The Amazing M and P Matrices ŷ = Py and ŷ′ŷ = SS Predicted = y′Py; e = My and e′e = SS Error = y′My. This implies PP = P and MM = M.
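These properties are easy to verify numerically; here is a sketch on the synthetic design matrix (the checks, rather than the construction of P and M, are the point):

```python
import numpy as np

# same synthetic data as in the sketch under Slide 6.5
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T          # "hat" matrix: P y = y-hat
M = np.eye(n) - P                             # M y = e

print(np.allclose(P @ P, P), np.allclose(M @ M, M))   # idempotent: PP = P, MM = M
print(np.allclose(P @ M, np.zeros((n, n))))            # P and M annihilate each other: PM = 0
print(np.isclose(y @ y, y @ P @ y + y @ M @ y))         # SS Total = SS Predicted + SS Error
```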

Slide 6.31 Linear Hypotheses MathematicalMarketing In Addition to Being Idempotent… P and M are symmetric, and they are orthogonal to each other: PM = MP = 0.