1 Experiment Basics: Variables Psych 231: Research Methods in Psychology

2 Reminders: Journal Summary 1 due in labs this week.

3 Many kinds of Variables: Independent variables (explanatory); Dependent variables (response); Extraneous variables – control variables, random variables, confound variables.


5 Measuring your dependent variables Scales of measurement Errors in measurement

6 Scales of measurement: Categorical variables – Nominal scale (categories), Ordinal scale (categories with order); Quantitative variables – Interval scale (ordered categories of the same size), Ratio scale (ordered categories of the same size with a zero point). "Best" scale? Given a choice, usually prefer the highest level of measurement possible.
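
A minimal Python sketch, with made-up example variables, of how the four scales differ in which operations are meaningful (the column names and values are only illustrative):

```python
# Made-up example variables, one per scale of measurement.
import pandas as pd

df = pd.DataFrame({
    "condition":  ["control", "drug", "placebo"],   # nominal: categories only
    "class_rank": [3, 1, 2],                        # ordinal: ordered, spacing not fixed
    "temp_f":     [98.6, 99.1, 97.9],               # interval: equal units, no true zero
    "rt_ms":      [512.0, 430.5, 610.2],            # ratio: equal units and a true zero
})

# Nominal data supports counting categories, but not ordering or averaging.
print(df["condition"].value_counts())

# Ordinal data supports ordering (who came 1st, 2nd, 3rd), but means are shaky.
print(df.sort_values("class_rank"))

# Interval and ratio data support means; statements like "twice as fast"
# are only meaningful on the ratio scale.
print(df["rt_ms"].mean())
```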

7 Measuring your dependent variables Scales of measurement Errors in measurement Reliability & Validity Sampling error

8 Example: Measuring intelligence. Measuring the true score: How do we measure the construct? How good is our measure? How does it compare to other measures of the construct? Is it a self-consistent measure?

9 Errors in measurement In search of the “true score” Reliability Do you get the same value with multiple measurements? Validity Does your measure really measure the construct? Is there bias in our measurement? (systematic error)

10 Dartboard analogy: Bull's eye = the "true score" for the construct (e.g., a person's intelligence); Dart throw = a measurement (e.g., trying to measure that person's intelligence).

11 Dartboard analogy: Bull's eye = the "true score"; Reliability = consistency; Validity = measuring what is intended. Example: an unreliable and invalid measure – the dart throws are spread out (measurement error), and the estimate of the true score differs from the true score.

12 Dartboard analogy: Bull's eye = the "true score"; Reliability = consistency; Validity = measuring what is intended. Cases: reliable and valid; reliable but invalid (biased); unreliable and invalid.

13 Errors in measurement In search of the “true score” Reliability Do you get the same value with multiple measurements? Validity Does your measure really measure the construct? Is there bias in our measurement? (systematic error)

14 Reliability: observed score = true score + measurement error. A reliable measure will have a small amount of error. Multiple "kinds" of reliability: test-retest, internal consistency, inter-rater reliability.

15 Reliability: Test-retest reliability. Test the same participants more than once – measurements from the same person at two different times should be consistent across different administrations.
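
A minimal sketch, with invented scores, of how test-retest reliability could be estimated as the correlation between two administrations of the same measure:

```python
# Invented scores from the same eight people measured twice.
import numpy as np
from scipy.stats import pearsonr

time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])   # first administration
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])  # same people, second administration

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")  # values near 1 indicate a consistent (reliable) measure
```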

16 Reliability: Internal consistency reliability – multiple items testing the same construct; the extent to which scores on the items of a measure correlate with each other (Cronbach's alpha, α). Split-half reliability: the correlation of the score on one (randomly determined) half of the measure with the score on the other half.
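
A minimal sketch, with invented item scores, of computing Cronbach's alpha from a participants-by-items matrix using the standard variance formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
# Invented responses: rows = participants, columns = items on the same scale.
import numpy as np

items = np.array([
    [4, 5, 4, 3],
    [2, 3, 3, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed (total) scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")     # closer to 1 = more internally consistent
```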

17 Reliability: Inter-rater reliability. At least 2 raters observe behavior; the extent to which the raters agree in their observations. Are the raters consistent? Requires some training in judgment.
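
A minimal sketch, with hypothetical category codes, of quantifying inter-rater agreement as simple percent agreement and as Cohen's kappa (agreement corrected for chance), using scikit-learn:

```python
# Hypothetical codes from two raters observing the same six behaviors.
from sklearn.metrics import cohen_kappa_score

rater_a = ["aggressive", "neutral", "aggressive", "prosocial", "neutral", "neutral"]
rater_b = ["aggressive", "neutral", "neutral",    "prosocial", "neutral", "neutral"]

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)   # chance-corrected agreement

print(f"percent agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```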

18 Errors in measurement In search of the “true score” Reliability Do you get the same value with multiple measurements? Validity Does your measure really measure the construct? Is there bias in our measurement? (systematic error)

19 Validity Does your measure really measure what it is supposed to measure? There are many “kinds” of validity

20 Many kinds of Validity: Construct validity (convergent, discriminant); Criterion-oriented validity (predictive, concurrent); Face validity; Internal validity; External validity.


22 Face Validity At the surface level, does it look as if the measure is testing the construct? “This guy seems smart to me, and he got a high score on my IQ measure.”

23 Construct Validity: usually requires multiple studies and a large body of evidence that supports the claim that the measure really tests the construct.

24 Internal Validity: Did the change in the DV result from the change in the IV, or did it come from something else? Concerns the precision of the results.

25 Threats to internal validity: Experimenter bias & reactivity; History – an event happens during the experiment; Maturation – participants get older (and change in other ways); Selection – nonrandom selection may lead to biases; Mortality (attrition) – participants drop out or can't continue; Regression to the mean – extreme performance is often followed by performance closer to the mean (e.g., the SI cover jinx, the Madden Curse). A small simulation of the last threat follows.
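
A minimal sketch, with simulated (invented) numbers, of regression to the mean: people selected for extreme scores at time 1 tend to score closer to the mean at time 2 even though nothing about them has changed:

```python
# Simulated stable "true scores" plus independent measurement error at two times.
import numpy as np

rng = np.random.default_rng(0)
true_ability = rng.normal(100, 10, size=10_000)
time1 = true_ability + rng.normal(0, 10, size=10_000)   # error at time 1
time2 = true_ability + rng.normal(0, 10, size=10_000)   # independent error at time 2

extreme = time1 > 125                                    # select the top performers at time 1
print(f"time 1 mean of extreme group: {time1[extreme].mean():.1f}")
print(f"time 2 mean of same people:   {time2[extreme].mean():.1f}")  # closer to 100
```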

26 External Validity Are experiments “real life” behavioral situations, or does the process of control put too much limitation on the “way things really work?”

27 External Validity. Variable representativeness: the relevant variables for the behavior studied, along which the sample may vary. Subject representativeness: the characteristics of the sample and the target population along these relevant variables. Setting representativeness (ecological validity): are the properties of the research setting similar to those outside the lab?

28 Measuring your dependent variables Scales of measurement Errors in measurement Reliability & Validity Sampling error

29 Sampling. Population: everybody that the research is targeted to be about. Sample: the subset of the population that actually participates in the research. Errors in measurement: sampling error.

30 Sampling: we draw a sample from the population to make data collection manageable, then use inferential statistics to generalize from the sample back to the population. This also allows us to quantify the sampling error.
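
A minimal sketch, with a simulated population, of what quantifying sampling error can look like: the sample mean differs from the population mean, and the standard error estimates how much sample means vary from sample to sample:

```python
# Simulated population of scores (invented parameters).
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(100, 15, size=100_000)

sample = rng.choice(population, size=50, replace=False)   # one sample of 50 people
sample_mean = sample.mean()
standard_error = sample.std(ddof=1) / np.sqrt(len(sample))

print(f"population mean = {population.mean():.1f}")
print(f"sample mean     = {sample_mean:.1f} (sampling error = {sample_mean - population.mean():.1f})")
print(f"standard error  = {standard_error:.1f}")
```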

31 Sampling. Goals of "good" sampling: Maximize representativeness – the extent to which the characteristics of those in the sample reflect those in the population; Reduce bias – a systematic difference between those in the sample and those in the population. Key tool: Random selection.

32 Sampling Methods. Probability sampling (has some element of random selection): simple random sampling, systematic sampling, stratified sampling. Non-probability sampling (susceptible to biased selection): convenience sampling, quota sampling.

33 Simple random sampling: every individual has an equal and independent chance of being selected from the population.
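
A minimal sketch, with a hypothetical roster, of simple random sampling, where every individual has the same chance of selection:

```python
# Hypothetical roster of 500 people, identified by number.
import numpy as np

rng = np.random.default_rng(42)
population_ids = np.arange(1, 501)

sample = rng.choice(population_ids, size=20, replace=False)   # 20 people, no repeats
print(np.sort(sample))
```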

34 Systematic sampling: selecting every nth person.
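
A minimal sketch, with a hypothetical ordered roster, of systematic sampling: pick a random starting point, then take every nth person:

```python
# Hypothetical ordered roster of 500 people; sampling interval n is assumed.
import numpy as np

rng = np.random.default_rng(0)
population_ids = list(range(1, 501))
n = 25                                   # take every 25th person

start = int(rng.integers(0, n))          # random start within the first interval
sample = population_ids[start::n]
print(sample)
```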

35 Cluster sampling Step 1: Identify groups (clusters) Step 2: randomly select from each group
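
A minimal sketch following the slide's two steps, with made-up groups: identify the clusters, then randomly select individuals from each group:

```python
# Made-up clusters (e.g., class sections) with member IDs.
import numpy as np

rng = np.random.default_rng(7)
clusters = {
    "section_A": list(range(1, 31)),
    "section_B": list(range(31, 61)),
    "section_C": list(range(61, 91)),
}

# Step 2: randomly select a few members from each identified group.
sample = {name: rng.choice(members, size=5, replace=False).tolist()
          for name, members in clusters.items()}
print(sample)
```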

36 Convenience sampling Use the participants who are easy to get

37 Quota sampling. Step 1: Identify the specific subgroups. Step 2: Take participants from each group until the desired number of individuals is reached.
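
A minimal sketch, with a hypothetical stream of volunteers, of quota sampling: no random selection, just take whoever comes along until each subgroup's quota is filled:

```python
# Hypothetical volunteers arriving in order, tagged by subgroup.
volunteers = [("p1", "female"), ("p2", "male"), ("p3", "female"), ("p4", "female"),
              ("p5", "male"), ("p6", "male"), ("p7", "female"), ("p8", "male")]

quotas = {"female": 2, "male": 2}      # desired number per subgroup
sample = []

for person, group in volunteers:       # first-come, first-served (no randomization)
    if quotas.get(group, 0) > 0:
        sample.append(person)
        quotas[group] -= 1

print(sample)   # ['p1', 'p2', 'p3', 'p5']
```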

38 Variables Independent variables Dependent variables Measurement Scales of measurement Errors in measurement Extraneous variables Control variables Random variables Confound variables

39 Extraneous Variables. Control variables: holding things constant – controls for excessive random variability. Random variables: may freely vary, to spread variability equally across all experimental conditions; Randomization – a procedure that assures that each level of an extraneous variable has an equal chance of occurring in all conditions of observation. Confound variables: variables that haven't been accounted for (manipulated, measured, randomized, controlled) that can impact changes in the dependent variable(s); they co-vary with both the dependent AND an independent variable.
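
A minimal sketch, with a made-up participant pool, of randomization via random assignment: shuffling the pool before splitting it spreads extraneous participant characteristics (age, ability, and so on) evenly across conditions on average:

```python
# Made-up pool of 12 participants randomly assigned to two conditions.
import random

random.seed(3)
participants = [f"p{i}" for i in range(1, 13)]

random.shuffle(participants)            # randomize the order
conditions = {
    "condition_A": participants[:6],    # first half -> condition A
    "condition_B": participants[6:],    # second half -> condition B
}
print(conditions)
```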

40 Colors and words. Divide into two groups: men and women. Instructions: Read aloud the COLOR that the words are presented in. When done, raise your hand. Women first. Men, please close your eyes. Okay, ready?

41 Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green List 1

42 Okay, now it is the men's turn. Remember the instructions: Read aloud the COLOR that the words are presented in. When done, raise your hand. Okay, ready?

43 Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green List 2

44 Our results. So why the difference between the results for men versus women? Is this support for a theory that proposes "Women are good color identifiers, men are not"? Why or why not? Let's look at the two lists.

45 List 1 (Women) – Matched: Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green. List 2 (Men) – Mis-Matched: Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green. (The printed words are the same on both lists; the lists differ in whether the ink colors match the words.)

46 What resulted in the performance difference? Our manipulated independent variable (men vs. women)? The other variable (match/mis-match)? Because the two variables are perfectly correlated, we can't tell – this is the problem with confounds: the IV and the confound co-vary together, and either could have produced the change in the DV.

47 What DIDN’T result in the performance difference? Extraneous variables Control # of words on the list The actual words that were printed Random Age of the men and women in the groups These are not confounds, because they don’t co-vary with the IV Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green

48 “Debugging your study”: Pilot studies – a trial run-through; don't plan to publish these results, just try out the methods. Manipulation checks – an attempt to directly measure whether the IV really affects the DV; look for correlations with other measures of the desired effects.

