Presentation on theme: "Experimental Design Research vs Experiment"— Presentation transcript:

1 Experimental Design Research vs Experiment

2 Research
A careful search: an effort to obtain new knowledge in order to answer a question or to solve a problem. A protocol for measuring the values of a set of variables (response variables) under a set of conditions (study conditions).
Research design: a plan for research.

3 Research Designs
In order of increasing constraint level: naturalistic observation, case study, correlational, differential, experimental.

4 Research design
Observational: a design in which the levels of all the explanatory variables are determined as part of the observational process.
Experimental: a study in which the investigator selects the levels of at least one factor; an investigation in which the investigator applies treatments to experimental units and then observes the effect of the treatments by measuring one or more response variables; an inquiry in which the investigator chooses the levels (values) of the input or independent variables and observes the values of the output or dependent variable(s).

5 Strengths of observation
Can be used to generate hypotheses.
Can be used to negate a proposition.
Can be used to identify contingent relationships.

6 Limitations of observation
Cannot be used to test hypotheses.
Poor representativeness.
Poor replicability.
Observer bias.

7 Strengths of experiments
Causation can be determined (if properly designed).
The researcher has considerable control over the variables of interest.
Can be designed to evaluate multiple independent variables.

8 Limitations of experiments
Not ethical in many situations.
Often more difficult and costly.

9 Design of Experiments
Define the objectives of the experiment and the population of interest.
Identify all sources of variation.
Choose an experimental design and specify the experimental procedure.

10 Defining the Objectives
What questions do you hope to answer as a result of your experiment? To what population do these answers apply?

11 Defining the Objectives

12 Identifying Sources of Variation
[Figure: input variables → output variables]

13 Choosing an Experimental Design

14 Experimental Design
A plan and a structure to test hypotheses in which the analyst controls or manipulates one or more variables; a protocol for measuring the values of a set of variables. It contains independent and dependent variables.

15 What is a statistical experimental design?
Determine the levels of the independent variables (factors) and the number of experimental units at each combination of these levels, according to the experimental goal. In practice, answer these questions (sketched in code below):
What is the output variable?
Which (input) factors should we study?
What are the levels of these factors?
What combinations of these levels should be studied?
How should we assign the studied combinations to experimental units?
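A minimal sketch in Python of the middle three questions, using hypothetical factor names and levels (illustrative only, not from the slides):

```python
from itertools import product

# Hypothetical factors and their levels.
factors = {
    "water": ["plain", "salt"],
    "diet": ["normal", "high-fat"],
}

# Every combination of one level from each factor.
combinations = list(product(*factors.values()))
for combo in combinations:
    print(dict(zip(factors.keys(), combo)))
```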

16 The Six Steps of Experimental Design
Plan the experiment.
Design the experiment.
Perform the experiment.
Analyze the data from the experiment.
Confirm the results of the experiment.
Evaluate the conclusions of the experiment.

17 Plan the Experiment
Identify the dependent or output variable(s).
Translate the output variables into measurable quantities.
Determine the factors (input or independent variables) that potentially affect the output variables to be studied.
Identify potential interactions (combined actions) between factors.

18 Syllabus
Content / Week
Terminology and basic concepts: week 2
t-test, ANOVA, and CRD: week 3
RCBD and Latin Square: week 4
Mean comparison: week 7
Midterm: weeks 8 - 9
Factorial experiment: week 10
Special topics in factorial experiments: week 13

19 Grading system
Grade (scale 0 – 100):
A: > 80
B – D: 45 – 80 (fitted to a normal distribution)
E: < 45
Grade composition:
Theory (67): Assignment 30, Mid-term, Final Exam 40
Practical Work: 33

20 Terminology
Variable: a characteristic that varies (e.g., weight, body temperature, bill length).
Treatment / input / independent variable: a condition or set of conditions applied to experimental units in an experiment; the variable that the experimenter either controls or modifies (what you manipulate). An experiment may involve a single factor or two or more factors.

21 Terminology
Factors: another name for the independent variables of an experimental design. A factor is an explanatory variable whose effect on the response is a primary objective of the study; a variable upon which the experimenter believes one or more response variables may depend, and which the experimenter can control; an explanatory variable that can take any one of two or more values. The design of the experiment largely consists of a policy for determining how to set the factors in each experimental trial.

22 Terminology
Levels (classifications): the subcategories of the independent variable used in the experimental design; the different values of a factor.
Dependent / response / output variable: the response to the different levels of the independent variables; a characteristic of an experimental unit that is measured after treatment and analyzed to assess the effects of treatments on experimental units.

23 Terminology
Treatment factor: a factor whose levels are chosen and controlled by the researcher to understand how one or more response variables change in response to varying levels of the factor.
Treatment design: the collection of treatments used in an experiment.
Full factorial treatment design: a treatment design in which the treatments consist of all possible combinations involving one level from each of the treatment factors.

24 Terminology
Experimental unit: the unit of the study material to which a treatment is applied; the smallest unit of the study material sharing a common treatment; the physical entity to which a treatment is randomly assigned and independently applied.
Observational unit (sampling unit): the smallest unit of the study material for which responses are measured; the unit on which a response variable is measured. There is often a one-to-one correspondence between experimental units and observational units, but not always: for example, if a feed treatment is applied to a whole pen of animals and each animal is weighed individually, the pen is the experimental unit and each animal is an observational unit.

25 Populations and Samples
Population: the entire collection of values for the variable being considered.
Sample: a subset of the population. Statistically, it is important for the sample to be a random sample.
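A minimal sketch in Python of drawing a simple random sample (the population values are hypothetical):

```python
import random

random.seed(1)
# Hypothetical population: body weights (g) of 1,000 mice in a colony.
population = [random.gauss(25.0, 3.0) for _ in range(1000)]

# Simple random sample of 12: every mouse is equally likely to be chosen.
sample = random.sample(population, k=12)
print(sample)
```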

26 Parameters vs. Statistics
Parameter: a measure that characterizes a population.
Statistic: an estimate of a population parameter, based on a sample.
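For example, the population mean is a parameter and the sample mean is a statistic; a small illustration in Python with hypothetical data:

```python
import random
import statistics

random.seed(2)
population = [random.gauss(25.0, 3.0) for _ in range(1000)]

mu = statistics.mean(population)         # parameter: true population mean
sample = random.sample(population, k=12)
x_bar = statistics.mean(sample)          # statistic: estimate of mu

print(f"parameter mu = {mu:.2f}, statistic x_bar = {x_bar:.2f}")
```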

27 Basic principles
Formulate the question/goal in advance.
Comparison/control.
Replication.
Randomization.
Stratification (blocking).
Factorial experiments.

28 Comparison/control Good experiments are comparative.
Compare BP in mice fed salt water to BP in mice fed plain water. Compare BP in strain A mice fed salt water to BP in strain B mice fed salt water. Ideally, the experimental group is compared to concurrent controls (rather than to historical controls).

29 Replication Applying a treatment independently to two or more experimental units.

30 Why replicate? Reduce the effect of uncontrolled variation (i.e. increase precision). Quantify uncertainty.
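A small Python sketch (hypothetical measurements) of why replication helps: the standard error of the mean shrinks as the number of replicates grows, so estimates become more precise and uncertainty can be quantified.

```python
import math
import statistics

def sem(xs):
    # Standard error of the mean: s / sqrt(n).
    return statistics.stdev(xs) / math.sqrt(len(xs))

# Hypothetical blood-pressure replicates under one treatment.
three_reps = [118.0, 124.0, 121.0]
six_reps = [118.0, 124.0, 121.0, 119.0, 123.0, 122.0]

print(sem(three_reps))  # fewer replicates -> larger standard error
print(sem(six_reps))    # more replicates -> smaller standard error
```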

31 Randomization
Random assignment of treatments to experimental units. Experimental subjects (“units”) should be assigned to treatment groups at random. At random does not mean haphazardly: one needs to explicitly randomize using a computer, or coins, dice, or cards.
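A minimal sketch of explicit randomization in Python (mouse names and group sizes are hypothetical):

```python
import random

# Hypothetical: 12 mice, two treatments, 6 mice per group.
mice = [f"mouse_{i}" for i in range(1, 13)]
treatments = ["salt water"] * 6 + ["plain water"] * 6

random.seed(42)             # record the seed so the allocation is reproducible
random.shuffle(treatments)  # explicit randomization, not haphazard grabbing

for mouse, treatment in zip(mice, treatments):
    print(mouse, "->", treatment)
```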

32 Why randomize?
Avoid bias. For example: the first six mice you grab may have intrinsically higher BP.
Control the role of chance. Randomization allows the later use of probability theory, and so gives a solid foundation for statistical analysis.

33 Stratification (Blocking)
Grouping similar experimental units together and assigning different treatments within such groups of experimental units. If you anticipate a difference between morning and afternoon measurements:
Ensure that within each period, there are equal numbers of subjects in each treatment group.
Take account of the difference between periods in your analysis.
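A minimal Python sketch of randomizing within blocks, assuming hypothetical morning and afternoon blocks of six subjects each:

```python
import random

random.seed(7)
blocks = {
    "morning": [f"m{i}" for i in range(1, 7)],
    "afternoon": [f"a{i}" for i in range(1, 7)],
}

# Within each block, assign equal numbers to each treatment group.
for block, units in blocks.items():
    treatments = ["salt water"] * 3 + ["plain water"] * 3
    random.shuffle(treatments)
    for unit, treatment in zip(units, treatments):
        print(block, unit, "->", treatment)
```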

34 Completely randomized design
[Figure: cage positions under a completely randomized design]

35 Randomized block design
[Figure: cage positions under a randomized block design]

36 Randomization and stratification
If you can (and want to), fix a variable: e.g., use only 8-week-old male mice from a single strain.
If you don’t fix a variable, stratify it: e.g., use both 8-week-old and 12-week-old male mice, and stratify with respect to age.
If you can neither fix nor stratify a variable, randomize it.

37 Types of Experimental Designs
Simple designs: vary one factor at a time. Not statistically efficient, and can give wrong conclusions if the factors interact. Not recommended.

38 Types of Experimental Designs
Factorial experiments:
1. Full factorial design: all combinations of factor levels. Can find the effect of all factors, but may take too much time and money; consider trying a 2^k design (every factor at just two levels) first.
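A minimal Python sketch of generating a coded 2^k design, assuming k = 3 hypothetical factors with the conventional -1/+1 level coding:

```python
from itertools import product

k = 3  # hypothetical: three factors, each at two levels
# Conventional coding: -1 = low level, +1 = high level.
design = list(product([-1, +1], repeat=k))

for run, levels in enumerate(design, start=1):
    print(f"run {run}: {levels}")  # 2**k = 8 runs in total
```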

39 Types of Experimental Designs
2. Fractional Factorial Designs: Save time and expense. Less information. May not get all interactions. Not a problem if negligible interactions.

40 Common Mistakes in Experimentation
1. The variation due to experimental error is ignored.
2. Important parameters are not controlled.
3. Effects of different factors are not isolated.
4. Simple one-factor-at-a-time designs are used.
5. Interactions are ignored.
6. Too many experiments are conducted. Better: run in two phases, a screening phase to find the important factors and a follow-up phase to study them in detail.

41 Interaction
Effect of one factor depends upon the level of the other.
[Figure: non-interacting factors vs. interacting factors]
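A small worked example in Python (hypothetical cell means) showing how an interaction is measured: compare the effect of factor A at each level of factor B; if the two effects differ, the factors interact.

```python
# Hypothetical 2x2 table of mean responses, keyed by (level of A, level of B).
means = {
    ("low", "low"): 10.0, ("low", "high"): 12.0,
    ("high", "low"): 14.0, ("high", "high"): 19.0,
}

effect_a_at_b_low = means[("high", "low")] - means[("low", "low")]     # 4.0
effect_a_at_b_high = means[("high", "high")] - means[("low", "high")]  # 7.0

# Non-zero difference -> the effect of A depends on the level of B.
print("interaction:", effect_a_at_b_high - effect_a_at_b_low)  # 3.0
```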

42 Full Factorial experiments
Suppose we are interested in the effect of both salt water and a high-fat diet on blood pressure. Ideally: look at all 4 treatments in one experiment (a 2 × 2 design): plain water + normal diet, plain water + high-fat diet, salt water + normal diet, salt water + high-fat diet. Why? We can learn more, and it is more efficient than doing all single-factor experiments.
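A sketch of analyzing such a 2 × 2 experiment with a two-way ANOVA, assuming the statsmodels package and entirely hypothetical data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical blood-pressure data: 3 mice per cell of the 2 x 2 design.
df = pd.DataFrame({
    "water": ["plain"] * 6 + ["salt"] * 6,
    "diet": (["normal"] * 3 + ["highfat"] * 3) * 2,
    "bp": [110, 112, 111, 118, 120, 119, 121, 123, 122, 135, 137, 136],
})

# Main effects of water and diet, plus their interaction.
model = ols("bp ~ C(water) * C(diet)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```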

43 Interactions

44 Other points
Blinding: measurements made by people can be influenced by unconscious biases. Ideally, dissections and measurements should be made without knowledge of the treatment applied.
Internal controls: it can be useful to use the subjects themselves as their own controls (e.g., consider the response after vs. before treatment). Why? Increased precision.
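A minimal sketch of an internal-control analysis in Python: a paired test on hypothetical before/after measurements, assuming SciPy is available:

```python
from scipy import stats

# Hypothetical: each subject measured before and after treatment.
before = [120.0, 115.0, 130.0, 125.0, 118.0, 122.0]
after = [126.0, 120.0, 133.0, 131.0, 121.0, 129.0]

# Pairing uses each subject as its own control -> increased precision.
t_stat, p_value = stats.ttest_rel(after, before)
print(t_stat, p_value)
```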

45 Other points Representativeness
Are the subjects/tissues you are studying really representative of the population you want to study? Ideally, your study material is a random sample from the population of interest.

46 Significance test
Compare the BP of 6 mice fed salt water to the BP of 6 mice fed plain water.
Δ = true difference in average BP (the treatment effect).
H0: Δ = 0 (i.e., no treatment effect).
Test statistic: D. If |D| > C, reject H0. C is chosen so that the chance of rejecting H0 when H0 is true is 5%.
[Figure: distribution of D when Δ = 0]
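A sketch of this test in Python with hypothetical measurements, using a two-sample t-statistic as D (assuming SciPy):

```python
from scipy import stats

# Hypothetical BP measurements, 6 mice per group.
salt = [128.0, 131.0, 125.0, 130.0, 127.0, 129.0]
plain = [121.0, 124.0, 119.0, 123.0, 120.0, 122.0]

# Two-sample t-test of H0: delta = 0 (no treatment effect).
t_stat, p_value = stats.ttest_ind(salt, plain)
print(t_stat, p_value)  # reject H0 at the 5% level if p_value < 0.05
```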

47 Statistical power
Power: the chance that you reject H0 when H0 is false (i.e., you correctly conclude that there is a treatment effect when there really is one).

48 Power depends on…
The structure of the experiment.
The method for analyzing the data.
The size of the true underlying effect.
The variability in the measurements.
The chosen significance level (α).
The sample size.
Note: we usually try to determine the sample size that gives a particular power (often 80%).
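A sketch of such a sample-size calculation in Python, assuming the statsmodels package and a hypothetical standardized effect size (Cohen's d) of 0.8:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for a two-sample t-test:
# 80% power, alpha = 0.05, assumed effect size d = 0.8.
n = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(n)  # round up to the next whole subject in practice
```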

49 Effect of sample size
[Figure: power with 6 per group vs. 12 per group]

50 Various effects
Desired power ↑ → sample size ↑
Stringency of statistical test ↑ → sample size ↑
Measurement variability ↑ → sample size ↑
Treatment effect ↑ → sample size ↓
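The last relationship can be checked with the same power calculator (statsmodels assumed, effect sizes hypothetical): the larger the assumed treatment effect, the fewer subjects are needed.

```python
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in [0.3, 0.5, 0.8]:  # hypothetical standardized effect sizes
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size {d}: about {n:.1f} subjects per group")
```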

