CHAPTER 4 Designing Studies


1 CHAPTER 4 Designing Studies
4.2 Experiments

2 Experiments
In this section, you will learn to:
DISTINGUISH between an observational study and an experiment.
EXPLAIN the concept of confounding.
DESCRIBE the placebo effect and the purpose of blinding in an experiment.
INTERPRET the meaning of statistically significant in the context of an experiment.

3 Observational Study vs. Experiment
An observational study observes individuals and measures variables of interest but does not attempt to influence the responses. An experiment, in contrast, deliberately imposes some treatment on individuals in order to measure their responses. This distinction is one of the most important in statistics: when our goal is to understand cause and effect, experiments are the only source of fully convincing data.

4 Observational Study vs. Experiment
Read over the example on pg. 235. How did ideas about hormone therapy change when experiments were conducted? Why?
Confounding occurs when two variables are associated in such a way that their effects on an outcome cannot be distinguished from each other. Remember: correlation does not imply causation. What could be confounding variables in the example we just read?
Well-designed experiments take steps to prevent confounding.
Check Your Understanding: pg. 237 #1, 2, 4

5 Principles of Experimental Design
The basic principles for designing experiments are as follows:
1. Comparison. Use a design that compares two or more treatments.
2. Random assignment. Use chance to assign experimental units to treatments. Doing so helps create roughly equivalent groups of experimental units by balancing the effects of other variables among the treatment groups.
3. Control. Keep other variables that might affect the response the same for all groups.
4. Replication. Use enough experimental units in each group so that any differences in the effects of the treatments can be distinguished from chance differences between the groups.
Randomized comparative experiments are designed to give good evidence that differences in the treatments actually cause the differences we see in the response.

6 Completely Randomized Design
In a completely randomized design, the treatments are assigned to all the experimental units completely by chance. Some experiments may include a control group that receives an inactive treatment or an existing baseline treatment.
Experimental Units → Random Assignment → Group 1 receives Treatment 1, Group 2 receives Treatment 2 → Compare Results
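The random-assignment step can be sketched in a few lines of Python. This is a minimal illustration, not part of the lesson; the plot numbers and group sizes anticipate the fertilizer activity later in this section.

```python
import random

# Completely randomized design: 11 experimental units (plots),
# 6 assigned to one treatment and 5 to the other, entirely by chance.
plots = list(range(1, 12))                    # plots 1 through 11
treatment_1 = random.sample(plots, 6)         # 6 plots chosen at random
treatment_2 = [p for p in plots if p not in treatment_1]  # the remaining 5

print("Treatment 1 plots:", sorted(treatment_1))
print("Treatment 2 plots:", sorted(treatment_2))
```

Because `random.sample` draws without replacement, every plot lands in exactly one group, and every 6/5 split is equally likely.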

7 Experiments: What Can Go Wrong?
The logic of a randomized comparative experiment depends on our ability to treat all the subjects the same in every way except for the actual treatments being compared. Good experiments, therefore, require careful attention to details to ensure that all subjects really are treated identically. The response to a dummy treatment is called the placebo effect. In a double-blind experiment, neither the subjects nor those who interact with them and measure the response variable know which treatment a subject received.

8 Placebo Example In a study reported by the New York Times on March 5, 2008 (“More Expensive Placebos Bring More Relief”), researchers discovered that placebos have a stronger effect when they are perceived to be more expensive. The study had volunteers rate the pain of an electric shock before and after taking a new medication. However, half of the subjects were told the medication cost $2.50 per dose, while the other half were told the medication cost $0.10 per dose. In reality, both medications were placebos, and both had a strong effect. Of the “cheap” placebo users, 61% experienced pain relief, while 85% of the “expensive” placebo users experienced pain relief. The researchers suggested that people are accustomed to paying more for better medications, which may account for the difference in response. As with any placebo, it’s all about the expectations of the subjects.

9 Inference for Experiments
In an experiment, researchers usually hope to see a difference in the responses so large that it is unlikely to happen just because of chance variation. We can use the laws of probability, which describe chance behavior, to learn whether the treatment effects are larger than we would expect to see if only chance were operating. An observed effect so large that it would rarely occur by chance is called statistically significant. A statistically significant association in data from a well-designed experiment does imply causation.

10 Fertilizer Activity: Is a result significant?

11 The Claim: A new fertilizer (BetterPlant) is claimed to increase the average tomato crop yield over that of an existing fertilizer (GoodGro). How do we know whether this claim is true?

12 The Experiment: To test this claim, a randomized experiment is designed in which tomatoes are planted in 11 plots. The new fertilizer (B) is applied to 6 randomly chosen plots and the old fertilizer (G) is applied to 5 randomly chosen plots.

13 The Experiment - continued
Upon harvesting, the yield of each plot is measured (in lbs.) and the average yield for the plots treated with the new fertilizer is compared to the average yield of those treated with the old fertilizer.

14 The Results
Plot   Fertilizer         Yield (lbs)
1      GoodGro (G)        29.9
2                         11.4
3      BetterPlant (B)    26.6
4                         23.7
5                         25.3
6                         28.5
7                         14.2
8                         17.9
9                         16.5
10                        21.1
11                        24.3
Calculate the average yield for each fertilizer. Find the difference in yield between BetterPlant and GoodGro (B – G).

15 The Results - continued
Fertilizer           Mean Yield
BetterPlant          22.53 lbs.
GoodGro              20.84 lbs.
Difference (B – G)    1.69 lbs.
Do these results prove the claim that BetterPlant will increase the average tomato crop yield over that of GoodGro?
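These summary values can be checked by direct computation. In the sketch below, the 6/5 split of the 11 plot yields into treatment groups is an assumed grouping consistent with the reported means; the slide does not label every plot.

```python
# Yields (lbs) by treatment group. This particular 6/5 split of the 11
# plot yields is assumed; it is one grouping consistent with the slide's
# reported means of 22.53 (B) and 20.84 (G).
better_plant = [26.6, 23.7, 28.5, 14.2, 17.9, 24.3]  # BetterPlant (B)
good_gro = [29.9, 11.4, 25.3, 16.5, 21.1]            # GoodGro (G)

mean_b = sum(better_plant) / len(better_plant)
mean_g = sum(good_gro) / len(good_gro)

print(f"BetterPlant mean:   {mean_b:.2f} lbs")        # 22.53
print(f"GoodGro mean:       {mean_g:.2f} lbs")        # 20.84
print(f"Difference (B - G): {mean_b - mean_g:.2f} lbs")  # 1.69
```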

16 What If? Suppose NEITHER fertilizer was more effective than the other. That is, suppose Plot 2 would have produced 11.4 lbs. of tomatoes no matter which fertilizer was applied. Likewise, Plot 4 would have produced 23.7 lbs. of tomatoes, regardless of fertilizer. How many possible random assignments exist for this situation? What would we observe if we repeated the random assignment of fertilizers to the 11 plots?
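Counting the possible random assignments is a combinations problem: choose which 6 of the 11 plots receive BetterPlant, and the remaining 5 automatically receive GoodGro. A quick check:

```python
import math

# Number of ways to choose 6 of the 11 plots for BetterPlant;
# the other 5 plots get GoodGro, so each choice fixes the whole assignment.
assignments = math.comb(11, 6)
print(assignments)  # 462
```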

17 Supplies Needed: a partner, an activity mat, 6 blue beads, 5 green beads, a small cup, a calculator, a pencil, and a great attitude!

18 Simulation Place all 11 beads in the small cup and shake to mix thoroughly. Without looking, pick one bead and place it on Plot 1 on the activity mat; continue until you have placed a bead on each of the 11 plots. The blue beads represent BetterPlant and the green beads represent GoodGro.

19 Activity Each group will perform 10 simulations; we will then combine the data for all of the groups. For each simulation, calculate the average yield for each fertilizer and compute the difference between BetterPlant and GoodGro (B – G).

20 Question #1 How many observed differences are greater than 1.69?

21 Question #2 Assuming there is no difference in the effectiveness of the two fertilizers, what is the estimated probability of observing a difference at least as extreme as 1.69?
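The bead simulation can also be run on a computer. Below is a sketch of the same re-randomization idea, assuming the 11 yields from the results table and the observed difference of 1.69 lbs; the estimated probability will vary slightly from run to run.

```python
import random

# The 11 plot yields (lbs) from the activity, pooled together because we
# assume neither fertilizer is more effective than the other.
yields = [29.9, 11.4, 26.6, 23.7, 25.3, 28.5, 14.2, 17.9, 16.5, 21.1, 24.3]
observed_diff = 1.69  # the difference (B - G) actually observed
trials = 10_000
count = 0

random.seed(1)  # fixed seed so this sketch is reproducible
for _ in range(trials):
    random.shuffle(yields)            # re-randomize fertilizers to plots
    b, g = yields[:6], yields[6:]     # 6 plots to BetterPlant, 5 to GoodGro
    if sum(b) / 6 - sum(g) / 5 >= observed_diff:
        count += 1

p_hat = count / trials
print(f"Estimated probability of a difference >= {observed_diff}: {p_hat:.3f}")
```

Each pass through the loop plays the role of one round of drawing beads; the estimated probability is the proportion of random assignments producing a difference at least as extreme as 1.69.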

22 Question #3 Is the originally observed difference (1.69) convincing evidence that BetterPlant is more effective than GoodGro?

23 Reflection: What can we learn from this activity?
Sec. 4.2 Assignment: pg. 259 #45, 47a & c, 49, 69, 71

