Did welfare reform increase participant employment? Hal W. Snarr Westminster College 12/2/13.

Did welfare reform increase participant employment?

The epr of LISM is assumed to depend on the variables below, with the hypothesized coefficient signs in parentheses:

epr = β0 + β1·lnPAYT + β2·D2000 + β3·Dfull + β4·BLK + β5·DROP + β6·U + ε

lnPAYT: natural log of the real value of the state's welfare payment (β1 < 0)
D2000: = 1 if the year is 2000, = 0 if it is 1994 (β2 > 0)
Dfull: = 1 if the state adopted a full sanction policy, = 0 if not (β3 > 0)
BLK: share of the state population that is black (β4 ≠ 0)
DROP: share of the state population that is a HS dropout (β5 < 0)
U: share of the state labor force that is unemployed (β6 < 0)
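A model of this form is estimated by ordinary least squares. The sketch below shows the mechanics on synthetic stand-in data; the variable names echo the slide, but every value and the "true" coefficient vector are made up for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # the study uses 100 observations (50 states x 2 years)

# Synthetic stand-ins for the regressors -- illustration only, not the real data
lnPAYT = np.log(rng.uniform(200, 800, n))     # log of real welfare payment
D2000 = np.tile([0.0, 1.0], n // 2)           # year dummy (1 = 2000, 0 = 1994)
Dfull = rng.integers(0, 2, n).astype(float)   # full-sanction dummy
BLK = rng.uniform(0, 40, n)                   # share of population that is black
DROP = rng.uniform(5, 20, n)                  # share that is a HS dropout
U = rng.uniform(3, 9, n)                      # unemployment rate

X = np.column_stack([np.ones(n), lnPAYT, D2000, Dfull, BLK, DROP, U])
beta = np.array([80.0, -5.0, 1.0, 2.0, -0.3, -0.4, -1.5])  # made-up "true" values
y = X @ beta + rng.normal(0, 6, n)            # synthetic epr of LISM

# OLS fit: chooses b to minimize the sum of squared residuals
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
```

With an intercept in X, the residuals average to zero by construction, which is the first of the error properties checked in the slides that follow.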

Descriptive Statistics

Scatterplots (1994, 2000)

epr of LISM Regression Results (simple regression on lnPAYT)

Observations: 100
r²·100% of the variability in y can be explained by the model; here r² is essentially 0%, so lnPAYT alone explains almost none of the variation in the epr of LISM.
[ANOVA and coefficient table (Intercept, lnPAYT): values not preserved]

epr of LISM Regression Results (full model)

R Square: 0.517
Adjusted R Square: 0.486
Standard Error: 6.347
Observations: 100
r²·100% of the variability in y can be explained by the model; here about 49%.
[ANOVA and coefficient table (Intercept, lnPAYT, D2000, Dfull, BLK, DROP, U): values not preserved]

Error Properties: Zero Mean

Error Properties: Normality

If the errors are not normally distributed and the sample size is small:
the F stat may not follow the F distribution, so its p-value may be invalid;
the t stats may not follow the t distribution, so their p-values may be invalid.
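One standard way to check this property is a Jarque-Bera test on the residuals (the test is not part of Excel's regression output). A minimal sketch, run here on a synthetic residual vector whose only connection to the study is the slide's standard error of 6.347:

```python
import numpy as np
from scipy.stats import chi2

def jarque_bera(e):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4), where S is skewness and K is kurtosis.
    Under normality, JB is asymptotically chi-square with 2 degrees of freedom."""
    e = np.asarray(e, dtype=float)
    n = e.size
    m = e - e.mean()
    s2 = np.mean(m**2)
    S = np.mean(m**3) / s2**1.5     # sample skewness
    K = np.mean(m**4) / s2**2       # sample kurtosis
    jb = n / 6.0 * (S**2 + (K - 3.0)**2 / 4.0)
    p = chi2.sf(jb, df=2)
    return jb, p

rng = np.random.default_rng(1)
resid = rng.normal(0, 6.347, 100)   # stand-in residuals, not the study's
jb, p = jarque_bera(resid)
# a small p-value is evidence against normality, casting doubt on the F/t p-values
```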

Error Properties: The regression model is linear

If the data are not linearly related:
the standard errors of the estimated coefficients are okay,
but the estimated coefficients are biased.

Error Properties: Homoscedasticity

Non-constant variance in BLK? (residual plot)
If the errors are not homoscedastic:
the estimated coefficients are okay,
but their standard errors are wrong.

Error Properties: No Autocorrelation

This is generally not an issue if the dataset is cross-sectional. Because my data vary over time, the DW stat should be close to 2.
DW stat = 0.77, so autocorrelation in the errors is likely.
If autocorrelation is a problem:
the estimated coefficients are okay,
but their standard errors may be inflated.
Since the errors may be heteroscedastic or autocorrelated, the F and t tests are unreliable. Excel cannot account for these problems, but regression packages (Stata or SAS) can compute:
Newey-West standard errors (robust to autocorrelation and heteroscedasticity);
Eicker-Huber-White standard errors (robust to heteroscedasticity).
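The DW stat quoted on the slide is computed directly from the ordered residual series: DW = Σ(e_t − e_{t−1})² / Σe_t², which ranges from 0 to 4 and sits near 2 when there is no first-order autocorrelation. A sketch comparing independent errors with AR(1) errors (the 0.7 autocorrelation below is an arbitrary choice that pushes DW well under 2, in the spirit of the slide's 0.77):

```python
import numpy as np

def durbin_watson(e):
    """DW statistic: sum of squared first differences over the sum of squares.
    Approximately 2*(1 - rho) for AR(1) errors with autocorrelation rho."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e)**2) / np.sum(e**2)

rng = np.random.default_rng(3)
n = 100
white = rng.normal(0, 1, n)          # independent errors: DW near 2

ar = np.zeros(n)                     # positively autocorrelated errors
for t in range(1, n):
    ar[t] = 0.7 * ar[t - 1] + rng.normal()

dw_white = durbin_watson(white)      # close to 2
dw_ar = durbin_watson(ar)            # well below 2, signalling autocorrelation
```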

Hypothesis Testing: testing for model significance

H0: β1 = β2 = β3 = β4 = β5 = β6 = 0
α = .05; with 6 and 93 degrees of freedom the critical value is 2.20.
The F stat exceeds 2.20, so reject H0: the model is significant.
[Regression output as above: R Square 0.517, Adjusted R Square 0.486, Standard Error 6.347, Observations 100]
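This F test can be reproduced from the reported R² alone: F = (R²/k) / ((1 − R²)/(n − k − 1)) with R² = 0.517, k = 6 regressors, and n = 100 observations. The critical value computed below matches the slide's 2.20:

```python
from scipy.stats import f

r2, n, k = 0.517, 100, 6
F = (r2 / k) / ((1 - r2) / (n - k - 1))   # F stat implied by R^2, approx 16.6
crit = f.ppf(0.95, dfn=k, dfd=n - k - 1)  # 5% critical value, approx 2.20
reject = F > crit                         # model is significant
```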

Hypothesis Testing: testing for coefficient significance

H0: βi = 0; α = .05, so α/2 = .025 in each tail, with 93 degrees of freedom.
Comparing each t stat with the two-tailed critical value:
Intercept: reject H0
lnPAYT: reject H0
D2000: do not reject H0
Dfull: do not reject H0 at α = .05 (but significant at α = .10)
BLK: reject H0
DROP: do not reject H0 at α = .05 (but significant at α = .10)
U: reject H0
[Regression output as above: R Square 0.517, Adjusted R Square 0.486, Standard Error 6.347, Observations 100]
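Each coefficient's t stat is its estimate divided by its standard error, compared with the two-tailed critical value t(.025, 93). The slide's coefficient estimates did not survive extraction, so the estimate/SE pairs below are hypothetical placeholders; only the critical value and the decision rule come from the slides:

```python
from scipy.stats import t

n, k = 100, 6
df = n - k - 1                       # 93 degrees of freedom
crit = t.ppf(0.975, df)              # two-tailed critical value at alpha = .05

def decide(b_hat, se):
    """Reject H0: beta_i = 0 when |t| exceeds the critical value."""
    t_stat = b_hat / se
    p = 2 * t.sf(abs(t_stat), df)    # two-tailed p-value
    return t_stat, p, abs(t_stat) > crit

# hypothetical estimate/SE pairs, NOT the slide's numbers
t1, p1, rej1 = decide(-5.7, 1.9)     # |t| = 3.0: clearly significant
t2, p2, rej2 = decide(0.8, 1.5)      # |t| = 0.53: clearly insignificant
```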

Interpretation of Results

Estimated coefficient b1 is significant: increasing monthly benefit levels for a family of three by 10% would reduce the epr of LISM by .54 percentage points.
Estimated coefficient b2 is insignificant: welfare reform in general had no effect on the epr of LISM.
Estimated coefficient b3 is significant (at α = 0.10): the epr of LISM is higher in states that adopted the full sanction policy.

Interpretation of Results

Estimated coefficient b4 is significant: each 10 pct. point increase in the share of blacks is associated with a 2.91 percentage point decline in the epr of LISM.
Estimated coefficient b5 is significant (at α = 0.10): each 10 pct. point increase in the HS dropout rate is associated with a 3.74 percentage point decline in the epr of LISM.
Estimated coefficient b6 is significant: each 1 pct. point increase in unemployment is associated with a decline in the epr of LISM.
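The ".54 percentage point per 10%" reading of b1 follows from the log-level functional form: a 10% rise in the payment changes the epr by roughly b1·ln(1.10). The coefficient value below (−5.7) is hypothetical, chosen only to illustrate the arithmetic, since the slide's actual estimate was not preserved:

```python
import math

b1 = -5.7                      # hypothetical log-level coefficient, not the slide's
pct_increase = 0.10            # a 10% rise in the real welfare payment
delta_epr = b1 * math.log(1 + pct_increase)   # approx -0.54 percentage points
```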

Conclusions

Increasing monthly benefit levels for a family of three reduces the epr of LISM.
Welfare reform in general had no effect on the epr of LISM.
The epr of LISM is higher in states that adopted the full sanction policy.
Culture and urbanity matter.
States with higher HS dropout rates have lower LISM employment rates.
States with higher unemployment have lower LISM employment rates.