Methods of Presenting and Interpreting Information Class 9.


1 Methods of Presenting and Interpreting Information Class 9

2 Why Use Statistics?
–Describe or characterize information
–Allow researchers to make inferences from a sample to the population from which it was drawn
–Assess the validity of hypotheses

3 Descriptive Measures
Basic Measures
–Central Tendency
Mean, median, and mode
Supports group comparisons
–Variability – whether a score is typical
Variability is important because it tells us whether the distributions of scores across groups are equivalent
The standard deviation is the primary measure (about two thirds of scores tend to fall within one SD of the mean)
–Association or correlation – whether variables go together
Positive or negative association
Association tells us what happens to one variable when the other one changes
No causal claim
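The basic measures above can be sketched with Python's standard library; the scores here are hypothetical, invented only to illustrate the calculations:

```python
import statistics

# Hypothetical test scores for one group (illustrative data, not from the slides)
scores = [62, 70, 71, 74, 75, 75, 78, 80, 84, 91]

mean = statistics.mean(scores)      # central tendency: arithmetic average
median = statistics.median(scores)  # middle score; robust to outliers
mode = statistics.mode(scores)      # most frequent score
sd = statistics.stdev(scores)       # sample standard deviation (variability)

# About two thirds of scores tend to fall within one SD of the mean
within_one_sd = [s for s in scores if abs(s - mean) <= sd]
share = len(within_one_sd) / len(scores)

print(mean, median, mode, round(sd, 2), share)
```

For these data, 7 of the 10 scores fall within one standard deviation of the mean, close to the two-thirds rule of thumb mentioned on the slide.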

4 Hypothesis Testing
Evaluation of the null hypothesis is often stated in terms of differences in scores between groups. If the groups differ, we then ask whether the differences could have been observed by chance (i.e., if we repeated the experiment, would we observe the same result? Or a similar result? How often would we observe these differences?)
Types of Error
–Type I error – we falsely reject the null hypothesis when in fact it is true. That is, we assume differences that do not exist: we conclude that populations differ when they are alike vis-à-vis the IV–DV relationship
–Type I errors are controlled by the level of significance we choose in our statistical tests. If we set significance at .05, we are accepting the odds that the observed sample differences might have appeared by chance fewer than 5 times in one hundred. The more extreme the criterion, the less likely it is that the sample differences occurred by chance

5 –Type II error – we accept the null hypothesis when, in fact, it is false. That is, we conclude that populations do not differ when in fact they do
–Type II errors vary inversely with Type I errors: the stricter our Type I criterion, the more likely we are to miss population differences that exist
The only way to reduce Type II error while maintaining a high threshold for Type I error is to use larger sample sizes, or to use statistical methods that use more of the available information
The statistical power of a test is the probability that the test will reject the null hypothesis when a specific alternative hypothesis is true. To calculate the power of a given test, it is necessary to specify β (the probability that the test will fail to reject the null hypothesis when it is in fact false) and to specify a specific alternative hypothesis; power = 1 − β
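The trade-off between Type I error and power can be seen directly by simulation. The sketch below (stdlib only; sample size, effect size, and trial counts are arbitrary choices for illustration) estimates the rejection rate of a simple two-sample z-style test, first when the null hypothesis is true and then when it is false:

```python
import random
import statistics

random.seed(1)

CRITICAL_Z = 1.96  # two-sided critical value for alpha = .05

def rejects_null(group_a, group_b):
    """Reject H0 (equal means) if the standardized mean difference
    exceeds the .05 critical value."""
    n = len(group_a)
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = (statistics.variance(group_a) / n + statistics.variance(group_b) / n) ** 0.5
    return abs(diff / se) > CRITICAL_Z

def rejection_rate(true_diff, n=50, trials=2000):
    """Share of simulated experiments in which H0 is rejected."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(true_diff, 1) for _ in range(n)]
        hits += rejects_null(a, b)
    return hits / trials

type_i_rate = rejection_rate(true_diff=0.0)  # H0 true: should be near .05
power = rejection_rate(true_diff=0.5)        # H0 false: estimates 1 - beta
print(type_i_rate, power)
```

With the null true, the test rejects in roughly 5% of simulated experiments (the Type I rate we chose); with a real half-SD difference and 50 cases per group, it rejects far more often, which is its power against that alternative.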

6 http://www.stat.sc.edu/~ogden/javahtml/power/power.html

7 Statistical Tests
Choosing a model
–Statistical procedures should be keyed to the level of measurement of both the independent and dependent variables
Tests of Association
–Chi square – differences between observed and expected frequencies
Tests of Differences between Means
–ANOVA models, ANCOVA models
Tests of differences in distributions (means, standard deviations) between groups
Control for covariates in instances where you assume there may be systematic (not random) differences between groups
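The chi-square idea – comparing observed frequencies to the frequencies expected if the variables were unrelated – can be computed by hand. The 2×2 counts below are hypothetical, chosen only to show the calculation:

```python
# Chi-square statistic for a 2x2 table of observed frequencies
# (hypothetical counts, e.g., group A/B by yes/no on some outcome)
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under independence: row total * column total / N
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (obs - expected) ** 2 / expected

print(round(chi_sq, 2))  # compare to the critical value 3.84 (df = 1, alpha = .05)
```

Here the statistic far exceeds 3.84, so with these (made-up) counts we would reject the hypothesis of no association.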

8 Multivariate Models
Ordinary Least Squares (OLS) Regression, or Multiple Regression
–Tells you which combination of variables, and in what priority, influences the distribution of a dependent variable
–Should be used with ratio or interval variables, although there is controversy regarding its validity when used with ordinal-level variables
OLS regression is used more often in survey and non-experimental research, although it can be used to isolate a specific variable whose influence you want to test
You can introduce interaction terms that isolate effects to specific subgroups (e.g., race by gender)
Done correctly, it lets you control for and remove the effects of statistical correlations between the independent variables
Logistic Regression is a form of regression specifically designed for binary dependent variables (e.g., group membership)
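A minimal sketch of the OLS machinery, using the one-predictor closed-form solution (multiple regression generalizes the same least-squares idea to several predictors). The x/y data are hypothetical; the block also computes the coefficient, its standard error, and explained variance (R²), the quantities reported in regression output:

```python
import math

# Hypothetical data for a simple (one-predictor) OLS regression
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 8.2, 8.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

# Least-squares coefficient estimates
slope = sxy / sxx
intercept = my - slope * mx

# Residual-based diagnostics: explained variance (R^2) and the
# standard error of the slope coefficient
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
ss_res = sum(r ** 2 for r in residuals)
ss_tot = sum((yi - my) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot
slope_se = math.sqrt(ss_res / (n - 2) / sxx)

print(round(slope, 3), round(intercept, 3), round(r_squared, 3), round(slope_se, 3))
```

A small standard error relative to the coefficient (here slope ≈ 1.0 with SE ≈ 0.03) is what drives statistical significance in a regression table.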

9 Fitting a Regression Line

10 Fitting a Regression Line, from Eisenberg and Wells (2006)

11 Example of OLS Regression Results, from Fagan and Tyler (2005)

12 Complex Causation
Structural Equation Models
–This family of statistical procedures looks at complex relationships among multiple dependent variables over time. It can accommodate feedback loops, or hypothesized reciprocal relationships. It gives you a probability estimate for each path
Hierarchical Models
–Used when effects are nested – when independent variables exist at more than one ‘level’ of explanation
–School research example
School-level factors (e.g., teacher experience, average SES of parents, computer resources, parent involvement)
Individual-level factors (e.g., student IQ, family income, parental supervision, number of siblings, sibling school performance)

13 Example of SEM Analysis, from Tyler and Fagan (2006)

14 How Good is the Model? What Does It Tell Us?
Most multivariate models generate probability estimates for each variable in the model, and also for the overall model
–Model Statistics: “model fit” and “explained variance” are the two most important
–Independent Variables
Coefficient estimate
Standard error
Statistical significance

15 Alternatives to Statistical Significance
–Odds Ratio – the odds of having been exposed among those with the disease compared to the odds of having been exposed among those without the disease
–Risk Ratio – the risk of disease in the population given exposure compared to the risk of disease given no exposure (the base rate)
–Attributable Risk – the rate of disease among the exposed minus the rate of disease among the unexposed; divided by the rate among the exposed, it gives the proportion of exposed cases attributable to the exposure

16 Example from Breast-Feeding Study

17 In this example, the odds ratio is a way of comparing whether the probability of a certain event is the same for two groups. An odds ratio of 1 implies that the event is equally likely in both groups. An odds ratio greater than one implies that the event is more likely in the first group; an odds ratio less than one implies that the event is less likely in the first group.
In the control row, the odds for exclusive breast feeding at discharge are 27 to 20, odds slightly in favor of exclusive breast feeding.
In the treatment row, the odds are 10 to 32, odds strongly against exclusive breast feeding.
The odds ratio for exclusive breast feeding at discharge is (27/20) / (10/32) = 4.32.
Since this number is larger than 1, and the control group is the first group in the ratio, we conclude that exclusive breast feeding at discharge is more likely in the control group – that is, less likely in the experimental group.
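The arithmetic above can be checked directly. This sketch uses the counts stated on the slide (control: 27 exclusively breast feeding, 20 not; treatment: 10 and 32) and also computes the corresponding risk ratio for comparison:

```python
# 2x2 counts from the breast-feeding example on this slide
ctrl_yes, ctrl_no = 27, 20  # control: exclusive breast feeding at discharge, yes/no
trt_yes, trt_no = 10, 32    # treatment: yes/no

# Odds ratio: control odds divided by treatment odds
odds_ratio = (ctrl_yes / ctrl_no) / (trt_yes / trt_no)  # (27/20) / (10/32)

# Risk ratio: probability of the event in each group, then their ratio
risk_ctrl = ctrl_yes / (ctrl_yes + ctrl_no)  # 27/47
risk_trt = trt_yes / (trt_yes + trt_no)      # 10/42
risk_ratio = risk_ctrl / risk_trt

print(round(odds_ratio, 2))  # 4.32, matching the slide
print(round(risk_ratio, 2))
```

Note the odds ratio (4.32) is larger than the risk ratio (about 2.4): odds ratios overstate the corresponding risk ratio when the event is common, which is one reason the two measures are reported separately.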

