Presentation transcript: "Experimental Design The Gold Standard?"

1 Experimental Design The Gold Standard?

2 Today's Goals
Identify issues of internal and external validity with various experimental designs.
Design an experiment for a given topic.
Critique advantages and disadvantages of different designs.

3 To Review
Why is most educational research comprised of non-experimental research designs? Ethical and logistical considerations.

4 To Review
What is the purpose of non-experimental research? It describes the existing characteristics of the topic under study.

5 To Review How does the independent variable function in non-experimental research? It is not manipulated.

6 To Review Can non-experimental research claim causality? NO!

7 An Example
Read the example given in class and, in pairs, respond to the questions.

8 Experimental Research
Purpose: to make causal inferences about the relationship between the independent and dependent variables.
Characteristics: direct manipulation of the independent variable and control of extraneous variables, either by eliminating the variable from the study or by statistically adjusting for its effect.

9 Experimental Designs
Single Group Post-test
Single Group Pre-test Post-test
Non-Equivalent Groups Post-test
Quasi-Experimental Design
Randomized Post-test Only
Randomized Pre-test Post-test
Factorial

10 Experimental Validity
Internal validity: the extent to which the independent variable, and not other extraneous variables, produced the observed effect on the dependent variable.
External validity: the extent to which the results are generalizable.

11 Internal Validity
Threats reduce the level of confidence in any causal conclusions.
Key question: Is this a plausible threat to the internal validity of the study?

12 Threats to Internal Validity
History: extraneous events have an effect on the subjects' performance on the dependent variable (e.g., the crash of the stock market, 9/11, the invasion of Iraq).
Selection: groups that are initially not equal due to differences in the subjects in those groups (e.g., positive and negative attitudes, high and low achievers).

13 Threats to Internal Validity
Maturation: changes experienced within the subject over time.
Pretesting: the effect of having taken a pretest.
Instrumentation: poor technical quality (i.e., validity, reliability) or changes in instrumentation.

14 Threats to Internal Validity
Subject attrition: differential loss of subjects from groups.
Statistical regression: the natural movement of extreme scores toward the mean (simulated in the sketch below).
Diffusion of treatment: the treatment is given to the control group.
Experimenter effects: different characteristics or expectations of those implementing the treatments across groups.
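Regression toward the mean is easier to see with a small simulation. The following is a minimal Python sketch under assumed conditions (1,000 simulated subjects, a "true ability plus noise" test model, and selection of the bottom 10% of pretest scorers); none of these numbers come from the presentation.

```python
import random

random.seed(42)

# Each subject has a stable "true" ability plus random measurement noise.
def noisy_score(true_ability):
    return true_ability + random.gauss(0, 10)

true_abilities = [random.gauss(100, 15) for _ in range(1000)]
pretest = [noisy_score(a) for a in true_abilities]
posttest = [noisy_score(a) for a in true_abilities]  # no treatment given

# Select the most extreme pretest scorers (roughly the bottom 10%), as a
# remedial program might, and compare their pretest and posttest means.
cutoff = sorted(pretest)[100]
selected = [i for i, p in enumerate(pretest) if p <= cutoff]

mean_pre = sum(pretest[i] for i in selected) / len(selected)
mean_post = sum(posttest[i] for i in selected) / len(selected)

# The posttest mean drifts back toward 100 even though nothing changed:
# that apparent "gain" is regression toward the mean, not a treatment effect.
print(f"selected group pretest mean:  {mean_pre:.1f}")
print(f"selected group posttest mean: {mean_post:.1f}")
```

This is why a design that selects subjects for extreme scores needs a comparison group: without one, regression toward the mean is easily mistaken for a treatment effect.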

15 Threats to Internal Validity
Subject effects: the effects of being aware that one is involved in a study.
Types include the Hawthorne effect, the John Henry effect, and the novelty effect.

16 Internal Validity Key Point: Ultimately, validity is a matter of judgment. Ask if it is reasonable that possible threats are likely to affect the results.

17 External Validity
The extent to which results can be generalized from a sample to a particular population.
Question: Why would very strong internal validity often result in poor external validity?

18 External Validity
Factors affecting external validity:
Subjects: representativeness of the sample in comparison to the population; personal characteristics of the subjects.
Situations: characteristics of the setting, such as the specific environment, a special situation, or a particular school.

19 External Validity
The importance of explaining the sampling procedures used.

20 Experimental Designs: Examples
Single Group Post-test
Single Group Pre-test Post-test
Non-Equivalent Groups Post-test
Quasi-Experimental Design
Randomized Post-test Only
Randomized Pre-test Post-test

21 Your Task Based on the topic of your proposal, design an experimental study using the design you were assigned. Write a research question and hypothesis. Sketch out the methods. Identify strengths and weaknesses of the design.

22 Experimental Designs: Notation
R indicates random selection or random assignment.
O indicates an observation (e.g., a test score, an observation score, or a scale score).
X indicates a treatment.
A, B, C, ... indicate groups.
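Purely as an illustration of how to read the notation (the dictionary layout below is an assumption, not part of the slides), each design can be written out one row per group, read left to right; the design shown is the randomized pretest/post-test control-group design that appears later in this deck.

```python
# One row per group, read left to right exactly like the slide notation:
# R = random assignment, O = observation, X = treatment.
randomized_pretest_posttest = {
    "A": ["R", "O", "X", "O"],  # experimental group
    "B": ["R", "O", "O"],       # control group
}

for group, row in randomized_pretest_posttest.items():
    print(group, " ".join(row))
```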

23 Pre-Experimental Designs
No pre-experimental design controls internal validity threats well.
Single group post-test only:
A  X  O
Internal validity threats: history, maturation, attrition, experimenter effects, subject effects, and instrumentation are all viable threats.
Useful only when the researcher is sure of the status of the knowledge, skill, or attitude being changed and there are no extraneous variables affecting the results.

24 Pre-Experimental Designs
Single group pretest post-test:
A  O  X  O
Internal validity threats: maturation and pretesting are threats; history and instrumentation are potential threats.
Useful when subject effects will not influence the results, history effects can be minimized, and multiple pretests and post-tests are used.

25 Pre-Experimental Designs
Non-equivalent groups post-test only:
A  X  O
B     O
Internal validity threats: selection is a definite threat; history, maturation, and instrumentation are potential threats.
Useful when groups are comparable and subjects can be assumed to be about the same at the beginning of the study.

26 Quasi-Experimental Designs
Types:
Non-equivalent pretest/post-test, experimental and control groups:
A  O  X  O
B  O     O
Non-equivalent pretest/post-test, multiple treatment groups:
A  O  X1  O
B  O  X2  O
Useful when subjects are in pre-existing groups (e.g., classes, schools, teams).

27 Quasi-Experimental Designs
Threats to internal validity: selection is the major concern.
Controls for statistical regression and is likely to control for most other threats, provided the groups are not significantly different from one another.
See Table 9.2 for specific threats related to each design.

28 True Experimental Designs
Important terminology:
Random assignment: subjects are placed into groups at random, which ensures the equivalency of the groups.
Random selection: subjects are chosen from the population at random, which ensures generalizability to the population from which the subjects were selected (i.e., external validity).
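The distinction between the two can be shown in a few lines of Python. This is a minimal sketch under assumed conditions (a population of 500 student IDs, a sample of 60, and two groups of 30, all hypothetical numbers).

```python
import random

random.seed(7)

# Hypothetical population of 500 students, identified by number.
population = list(range(500))

# Random selection: draw a sample from the population.
# This supports generalizing back to the population (external validity).
sample = random.sample(population, 60)

# Random assignment: shuffle the sample and split it into two groups.
# This is what makes the groups equivalent at the start (internal validity).
random.shuffle(sample)
group_a = sample[:30]  # experimental group
group_b = sample[30:]  # control group

print("selected sample size:", len(sample))
print("group A size:", len(group_a), "group B size:", len(group_b))
```

A study can use one without the other: randomly assigning volunteers strengthens internal validity but says nothing about generalizability, while randomly selecting subjects and then letting them self-select into treatments does the opposite.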

29 True Experimental Designs
Types:
Randomized post-test only, experimental and control groups:
R  A  X  O
R  B     O
Randomized post-test only, multiple treatment groups:
R  A  X1  O
R  B  X2  O

30 True Experimental Designs
Types (continued):
Randomized pretest/post-test, multiple treatment groups:
R  A  O  X1  O
R  B  O  X2  O
Randomized pretest/post-test, experimental and control groups:
R  A  O  X  O
R  B  O     O
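For the randomized pretest/post-test control-group design (R A O X O versus R B O O), a common first look at the data is to compare gain scores across groups. The sketch below uses invented numbers purely for illustration; the group size of 30, the assumed 5-point treatment effect, and the noise parameters are all assumptions, not data from the presentation.

```python
import random
import statistics

random.seed(1)

def simulate_group(n, treatment_effect):
    """Return (pretest, posttest) score lists for one randomly assigned group."""
    pre = [random.gauss(50, 10) for _ in range(n)]
    post = [p + treatment_effect + random.gauss(0, 5) for p in pre]
    return pre, post

# Group A receives the treatment (assumed +5-point effect); group B does not.
pre_a, post_a = simulate_group(30, treatment_effect=5)
pre_b, post_b = simulate_group(30, treatment_effect=0)

gain_a = statistics.mean(o - p for p, o in zip(pre_a, post_a))
gain_b = statistics.mean(o - p for p, o in zip(pre_b, post_b))

print(f"mean gain, experimental group A: {gain_a:.1f}")
print(f"mean gain, control group B:      {gain_b:.1f}")
```

Because assignment was random, a clearly larger mean gain in group A can be attributed to the treatment rather than to selection, maturation, or pretesting.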

31 True Experimental Designs
Threats to internal validity: controls for selection, maturation, and statistical regression, and is likely to control for most other threats.
See Table 9.2 for specific threats related to each design.

32 Factorial Designs
Research designs containing two or more independent variables.
Example: a study of the effects of two instructional strategies on male and female students' math achievement.
Examples of factorial designs

33 Types of Effects
Main effects: one for each independent variable, i.e., one main effect for instructional strategy and one main effect for gender.

34 Types of Effects
Interaction effects: consider the vitamins you take.
Iron decreases fatigue. Vitamin C decreases stress. Vitamin C also boosts the absorption of iron.
If you are fatigued and stressed, you may want to take both iron and Vitamin C. But because Vitamin C boosts iron absorption, the effect of an iron supplement depends on whether you are also taking Vitamin C, so you may be able to skip the iron supplement when taking Vitamin C. That dependence is an interaction.

35 Types of Effects
Interaction effects: a different effect for the levels of the first independent variable across the levels of the second independent variable.
For example, the first instructional strategy could be effective for males but not females, whereas the second instructional strategy could be effective for females but not males.
One cannot state the effectiveness of the treatment (i.e., the instructional strategy) without qualifying it relative to the other independent variable (gender).
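A minimal sketch of the 2 x 2 example from these slides (instructional strategy by gender, with math achievement as the outcome). The cell means below are invented to show a crossed interaction; they are not data from the presentation.

```python
# Hypothetical cell means for math achievement, keyed by (strategy, gender).
cell_means = {
    ("strategy_1", "male"):   78,
    ("strategy_1", "female"): 65,
    ("strategy_2", "male"):   66,
    ("strategy_2", "female"): 79,
}

# Main effect of strategy: average each strategy over both genders.
# Here the two averages are nearly identical, so the main effect hides the story.
for strategy in ("strategy_1", "strategy_2"):
    avg = (cell_means[(strategy, "male")] + cell_means[(strategy, "female")]) / 2
    print(f"{strategy} mean across genders: {avg:.1f}")

# Interaction: the strategy_1 vs strategy_2 difference has opposite signs
# for males (+12) and females (-14), so neither strategy is "better"
# without qualifying the claim by gender.
for gender in ("male", "female"):
    diff = cell_means[("strategy_1", gender)] - cell_means[("strategy_2", gender)]
    print(f"strategy_1 minus strategy_2 for {gender}s: {diff:+d}")
```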

36 Evaluating Experimental Designs
Criteria for evaluating experimental research:
The primary purpose is to test causal hypotheses.
There should be direct manipulation of the independent variable.
There should be clear identification of the specific research design.

37 Evaluating Experimental Designs
Criteria for evaluating experimental research (continued):
The design should provide maximum control of extraneous variables.
Treatments should be substantively different from one another.
The number of subjects depends on, or is equal to, the number of treatment replications.

