1 Validity, Sampling & Experimental Control Psych 231: Research Methods in Psychology

2 Announcements The required articles for the class experiment paper are already on-line at the Milner course reserves page

3 Class Experiment
Collect the forms (consent forms and data summary sheets) and pass them to the front.
Brief discussion:
– So how did it go?
– What happened?
– Anything unusual/unexpected?
– Any problems?

4 Example: How can we measure intelligence? Reliability & Validity

5 Reliability True score + measurement error –A reliable measure will have a small amount of error –Multiple “kinds” of reliability

6 Reliability
Test-retest reliability
– Test the same participants more than once: measurements from the same person at two different times should be consistent across different administrations
– Sensitive to the type of measure

7 Reliability
Internal consistency reliability
– Multiple items testing the same construct
– Extent to which scores on the items of a measure correlate with each other: Cronbach’s alpha (α)
– Split-half reliability: correlation of the score on one (randomly determined) half of the measure with the other half
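As an illustration (not part of the original slides), both indices on this slide can be computed directly from item-level scores. The 5-item, 4-participant data below are invented, and the "halves" are fixed here for readability rather than randomly drawn:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    item_scores: one list per item, each holding one score per participant.
    """
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # each participant's total
    item_vars = [statistics.variance(s) for s in item_scores]
    return k / (k - 1) * (1 - sum(item_vars) / statistics.variance(totals))

def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical 5-item measure (rows = items, columns = 4 participants)
items = [
    [3, 5, 1, 4],
    [2, 5, 2, 4],
    [3, 4, 1, 5],
    [2, 5, 1, 4],
    [3, 4, 2, 5],
]

print(round(cronbach_alpha(items), 2))  # high alpha: these made-up items hang together

# Split-half: correlate participants' totals on one half of the items with the other half
half1 = [sum(col) for col in zip(*items[:2])]  # items 1-2 (normally a random half)
half2 = [sum(col) for col in zip(*items[2:])]  # items 3-5
print(round(pearson(half1, half2), 2))
```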

8 Reliability
Inter-rater reliability
– Extent to which raters agree in their observations: are the raters consistent?
– At least 2 raters observe the behavior (a second opinion)
– Requires some training in judgment
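One common agreement index (my addition, not named on the slide) is Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance. The ratings below are invented:

```python
def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    categories = set(rater1) | set(rater2)
    # chance agreement: product of each rater's marginal rate, summed over categories
    expected = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Two raters code the same 8 observations as aggressive ("A") or not ("N")
r1 = ["A", "A", "N", "N", "A", "N", "N", "A"]
r2 = ["A", "A", "N", "A", "A", "N", "N", "N"]
print(cohens_kappa(r1, r2))  # → 0.5  (1.0 = perfect agreement, 0 = chance level)
```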

9 Validity Does your measure really measure what it is supposed to measure? –There are many “kinds” of validity


11 Construct Validity Does the measure actually capture the theoretical construct it is intended to measure? Establishing this usually requires multiple studies: a large body of evidence that supports the claim that the measure really tests the construct

12 Face Validity At the surface level, does it look as if the measure is testing the construct? “This guy seems smart to me, and he got a high score on my IQ measure.”

13 External Validity Are experiments “real life” behavioral situations, or does the process of control put too much limitation on the “way things really work?”

14 External Validity
Variable representativeness – relevant variables for the behavior studied, along which the sample may vary
Subject representativeness – characteristics of the sample and target population along these relevant variables
Setting representativeness – ecological validity

15 Internal Validity The precision of the results – Did the change in the DV result from the changes in the IV, or did it come from something else?

16 Threats to internal validity
History – an event happens during the experiment
Maturation – participants get older (and change in other ways)
Selection – nonrandom selection may lead to biases
Mortality – participants drop out or can’t continue
Testing – being in the study itself influences how the participants respond

17 “Debugging your study”
Pilot studies
– A trial run-through: don’t plan to publish these results, just try out the methods
Manipulation checks
– An attempt to directly measure whether the manipulation of the IV really had its intended effect on the participants
– Look for correlations with other measures of the intended effects

18 Sampling Why do we use sampling methods? – We typically don’t have the resources to test everybody, so we test a subset

19 Sampling
Population – everybody that the research is targeted to be about
Sample – the subset of the population that actually participates in the research

20 Sampling We sample to make data collection manageable, then use inferential statistics to generalize from the sample back to the population

21 Sampling Goals:
– Maximize representativeness: the extent to which the characteristics of those in the sample reflect those in the population
– Reduce bias: a systematic difference between those in the sample and those in the population

22 Sampling Methods Probability sampling –Simple random sampling –Systematic sampling –Stratified sampling Non-probability sampling –Convenience sampling –Quota sampling

23 Simple random sampling Every individual has an equal and independent chance of being selected from the population

24 Systematic sampling Selecting every nth person

25 Stratified sampling Step 1: Identify groups (strata) Step 2: Randomly select from each group
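The three probability methods above can be sketched with Python's standard library (my addition; the population of 100 numbered "participants" and the age strata are made up):

```python
import random

random.seed(231)  # fixed seed so the sketch is reproducible
population = list(range(1, 101))  # hypothetical participant IDs 1..100

# Simple random sampling: every individual has an equal, independent chance
simple = random.sample(population, 10)

# Systematic sampling: random start, then every nth (here n = 10) person in order
n = 10
start = random.randrange(n)
systematic = population[start::n]

# Stratified sampling: identify strata, then randomly sample within each
strata = {"18-29": list(range(1, 51)), "30+": list(range(51, 101))}  # invented strata
stratified = {name: random.sample(members, 5) for name, members in strata.items()}

print(len(simple), len(systematic), {k: len(v) for k, v in stratified.items()})
```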

26 Convenience sampling Use the participants who are easy to get

27 Quota sampling Step 1: Identify the specific subgroups Step 2: Take from each group until the desired number of individuals is reached
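Unlike stratified sampling, selection within each subgroup here is not random: you take whoever comes along until each quota is filled. A minimal sketch (my addition, with an invented arrival stream):

```python
def quota_sample(stream, quotas):
    """Take participants in arrival order until each subgroup's quota is filled."""
    taken = {group: [] for group in quotas}
    for person, group in stream:
        if group in taken and len(taken[group]) < quotas[group]:
            taken[group].append(person)
        if all(len(taken[g]) == quotas[g] for g in quotas):
            break  # all quotas filled; stop recruiting
    return taken

# Hypothetical arrivals: (participant, subgroup)
arrivals = [("p1", "F"), ("p2", "M"), ("p3", "F"), ("p4", "F"), ("p5", "M")]
print(quota_sample(arrivals, {"F": 2, "M": 1}))  # → {'F': ['p1', 'p3'], 'M': ['p2']}
```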

28 Experimental Control Our goal:
– To test the possibility of a relationship between the variability in our IV and how it affects our DV
– Control is used to minimize excessive variability and to reduce the potential for confounds: if other variables influence our DV, how do we know that the observed differences are due to our IV and not some other variable?

29 Sources of variability (noise) Sources of Total (T) variability: T = NR_exp + NR_other + R

30 Sources of variability (noise)
I. Nonrandom (NR) variability – systematic variation
– A. NR_exp – manipulated independent variables (IV): our hypothesis is that changes in the IV will result in changes in the DV
– B. NR_other – extraneous variables (EV) that covary with the IV: other variables that also vary along with the changes in the IV, and may in turn influence changes in the DV

31 Sources of variability (noise)
II. Random (R) variability
– A. Imprecision in manipulation (IV) and/or measurement (DV)
– B. Randomly varying extraneous variables (EV)

32 Sources of variability (noise) Sources of Total (T) variability: T = NR_exp + NR_other + R. Our goal is to reduce R and NR_other so that we can detect NR_exp – that is, so we can see the changes in the DV that are due to the changes in the independent variable(s).
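A toy simulation (my addition, with invented numbers) of why shrinking the random component R makes NR_exp easier to detect: the same treatment effect produces a much larger standardized group difference when the noise is small.

```python
import random
import statistics

random.seed(231)  # fixed seed for reproducibility

def standardized_difference(effect, noise_sd, n=200):
    """Simulate treatment and control groups; return the mean difference in SD units."""
    treatment = [effect + random.gauss(0, noise_sd) for _ in range(n)]  # NR_exp + R
    control = [random.gauss(0, noise_sd) for _ in range(n)]             # R only
    diff = statistics.mean(treatment) - statistics.mean(control)
    pooled_sd = statistics.stdev(treatment + control)
    return diff / pooled_sd

noisy = standardized_difference(effect=1.0, noise_sd=5.0)  # large R
quiet = standardized_difference(effect=1.0, noise_sd=1.0)  # small R
print(round(noisy, 2), round(quiet, 2))  # same NR_exp, far easier to see with small R
```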

33 Weight analogy Imagine the different sources of variability as weights. [Diagram: a balance with weights labeled R, NR_other, and NR_exp on the treatment-group side and R, NR_other on the control-group side]

34 Weight analogy If NR_other and R are large relative to NR_exp, then detecting a difference may be difficult. [Diagram: large R and NR_other weights dwarfing the NR_exp weight]

35 Weight analogy But if we reduce the size of NR_other and R relative to NR_exp, then detecting the difference gets easier. [Diagram: small R and NR_other weights next to the NR_exp weight]

36 Methods of Controlling Variability Comparison Production Constancy/Randomization

37 Methods of Controlling Variability
Comparison
– An experiment always makes a comparison, so it must have at least two groups
– Sometimes there are baseline, or control, groups (typically the absence of the treatment); sometimes there is just a range of values of the IV
– Without control groups it is harder to see what is really happening in the experiment, and easier to be swayed by plausibility or inappropriate comparisons

38 Methods of Controlling Variability
Production
– The experimenter selects the specific values of the independent variables (as opposed to allowing the levels to vary freely, as in observational studies)
– This must be done carefully: suppose you don’t find a difference in the DV across your groups. Is this because the IV and DV aren’t related? Or is it because your levels of the IV weren’t different enough?

39 Methods of Controlling Variability Constancy/Randomization –If there is a variable that may be related to the DV that you can’t (or don’t want to) manipulate –Then you should either hold it constant, or let it vary randomly across all of the experimental conditions

40 Potential Problems of Experimental Control
Excessive random variability – if control procedures are not applied, the R component of the data will be excessively large and may make NR undetectable
Confounding – if a relevant EV covaries with the IV, the NR component of the data will be "significantly" large and may lead to misattributing the effect to the IV
Dissimulation – if an EV that interacts with the IV is held constant, the effect of the IV is known only for that level of the EV, which may lead to overgeneralizing the IV effect

41 Next time Read: Chapter 8
