
1 Experimental Design Dr. Anne Molloy Trinity College Dublin

2 Ethical Approach to Animal Experimentation: the 3Rs – Replace, Reduce, Refine. Reduction rests on good experimental design and appropriate statistical analysis.

3 Good Experimental Design is Essential in Experiments Using Animals
– The systems under study are complex, with many interacting factors.
– High variability can obscure important differences between treatment groups: biological variability (15-30% CV in animal responses) and experimental imprecision (up to 10% CV).
– Confounding variables between treatment groups can affect your ability to interpret effects. Is a difference due to the treatment or to a secondary effect of the treatment (e.g. weight loss, lack of appetite)?

4 Do a Pilot Study and Generate Preliminary Data for Power Calculations
– A pilot is an observational study – "not an experiment; an experience" (R.A. Fisher, 1890-1962).
– It generates data that give you the average magnitude and variability of the measurements of interest.
– It gives background information on the general feasibility of the project (essentially validates the hypothesis).
– It allows you to get used to the system you will be working with and to gather information that might improve the design of the main study.

5 Dealing with Subject Variation
– Choose genetically uniform animals where possible.
– Avoid clinical and sub-clinical disease.
– Standardize the diet and environment; house under optimal conditions.
– Use animals of uniform weight and age (otherwise choose a randomized block design).
– Replicate a sufficient number of times: this increases confidence in a genuine result and allows outliers to be detected.

6 Some Issues to Think About Before You Set Out to Test Your Hypothesis
– What is the best treatment structure to answer the question, scientifically and economically?
– What type of data are being collected? Categorical, numerical (discrete or continuous), ranks, scores or ratios: this determines the statistical analysis to be used.
– How many replicates will be needed per group? Too many is wasteful and yields diminishing additional information; too few, and important effects can be rejected as non-significant.

7 Choosing the Correct Design
– How many treatments (independent variables)? e.g. dose response over time.
– How many outcome measurements (dependent variables)? Aim for the maximum amount of informative data from each experiment (but power for one).
– Are there important confounding factors that should be considered? Gender, age: e.g. dose response over time x gender x age.
– Complex experiments with more treatment groups generally allow a reduction in the number of animals per group.
– Continuous numerical data generally require smaller sample sizes than categorical data.

8 Types of Study Design
– Completely randomized study (the basic type): random, not haphazard, sampling.
– Randomized block design: e.g. stratify by weight or age (removes systematic sources of variation); a sketch follows below.
– Factorial design: e.g. examine two or more independent variables in one study.
– Crossover, sequential, repeated measures, split-plot and Latin square designs can greatly reduce the number of animals required; ANOVA-type analysis is essential.
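To make the randomized block idea concrete, here is a minimal sketch (not from the slides) that stratifies a hypothetical set of animals by body weight and then randomizes treatment within each weight block; the animal identifiers, weights and treatment names are invented for illustration.

```python
import random

# Hypothetical example: 16 animals with made-up body weights, two treatments.
random.seed(0)
animals = {f"animal_{i:02d}": random.uniform(20.0, 30.0) for i in range(1, 17)}

# Rank the animals by weight and cut the ranking into blocks,
# one animal per treatment in each block.
treatments = ["control", "drug"]
ranked = sorted(animals, key=animals.get)
blocks = [ranked[i:i + len(treatments)] for i in range(0, len(ranked), len(treatments))]

# Within each weight block, assign the treatments at random, so systematic
# weight differences are spread evenly across the treatment groups.
assignment = {}
for block in blocks:
    for animal, treatment in zip(block, random.sample(treatments, len(treatments))):
        assignment[animal] = treatment

for animal in ranked:
    print(f"{animal}  {animals[animal]:5.1f} g  -> {assignment[animal]}")
```

Because every block contributes one animal to each arm, the groups end up balanced for weight, which is exactly the systematic source of variation the design is meant to remove.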

9 Example: You want to examine the effect of two well-known drugs on tissue biomarkers
– Separate experiments: Experiment 1 (Control vs Drug 1) and Experiment 2 (Control vs Drug 2).
– Combined design: one experiment with three groups (Control, Drug 1, Drug 2).
– The combined design reduces the total number of animals by the size of one control group.

10 Identify the Experimental Unit
– (Diagram: cages on a control diet and an experimental diet, with saline- and drug-treated animals within each cage.)
– The experimental unit defines the independent unit of replication: cage, animal or tissue.
– Sometimes pseudoreplication is unavoidable, so be aware of the limitations it places on interpreting effects.

11 Power and Sample Size Calculation in the Design of Experiments
– POWER: what is the likelihood that the statistical analysis of your data will detect a significant effect, given that the experimental treatment truly has an effect?
– SAMPLE SIZE: how many replicates will be needed to allow a reliable statistical judgement to be made?

12 The Information You Need
– What is the variability of the parameter being measured?
– What effect size do you want to see?
– What significance level do you want to use (commonly a minimum of p = 0.05)?
– What power do you want to have (commonly 0.80)?
This information is used to calculate the sample size; a sketch of such a calculation follows.
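The four inputs above are exactly what a sample size calculation consumes. As a minimal sketch, assuming the 25% difference and 10% CV used in the worked examples later in these slides and the statsmodels library, the replicates needed per group for an unpaired t-test can be estimated like this:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs (matching the worked examples later in the slides):
cv = 0.10          # variability: SD as a fraction of the mean (10% CV)
difference = 0.25  # effect size you want to see: a 25% change in the mean
alpha = 0.05       # significance level
power = 0.80       # desired power

# For an unpaired t-test, the standardized effect size (Cohen's d) is the
# difference expressed in SD units.
effect_size = difference / cv

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha, power=power,
                                          alternative="two-sided")
print(f"Approximately {n_per_group:.1f} replicates per group")
```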

13 Variability of the Parameter
– An estimate of central location: "about how much?" (e.g. the mean value).
– An estimate of variation: "how spread out?" (e.g. the standard deviation).

14 An Experiment: Testing the Difference Between Two Means
In an experiment we often want to test the difference between two means, where the means are sample estimates based on small numbers. It is easier to detect a difference if:
– the means are far apart;
– there is a low level of variability between measurements;
– there is good confidence in the estimate of the mean.

15 Plasma Cysteine (µmol/L): the same measurement sampled with different N
– N = 20: mean 235, SD 22.9, SEM = 22.9/√20 = 5.12
– N = 50: mean 236, SD 22.3, SEM = 22.3/√50 = 3.15
– N = 500: mean 235, SD 21.4, SEM = 21.4/√500 = 0.96
– N = 2500: mean 236, SD 23.8, SEM = 23.8/√2500 = 0.48
The means and SDs are about the same in every case; only the SEM (= SD/√N) shrinks as N grows. Coefficient of variation (CV) = (SD/mean) × 100% ≈ 10.1%.
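The SEM values on this slide follow directly from SEM = SD/√N; the short script below simply recomputes them from the quoted means, SDs and sample sizes.

```python
import math

# Recompute the slide's SEM values from SEM = SD / sqrt(N).
samples = [(20, 235, 22.9), (50, 236, 22.3), (500, 235, 21.4), (2500, 236, 23.8)]

for n, mean, sd in samples:
    sem = sd / math.sqrt(n)
    cv = 100 * sd / mean
    print(f"N = {n:>4}: mean {mean}, SD {sd}, SEM = {sem:.2f}, CV = {cv:.1f}%")
```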

16 Mean = 236, SD = 23.8, so 2SD ≈ 48 and 3SD ≈ 71.
– About 95% of results lie between 236 ± 48, i.e. 188 and 284.
– About 99.7% of results lie between 236 ± 71, i.e. 165 and 307.
We can make the same predictions for a sample mean by using the SEM instead of the SD.

17 Having Confidence in the Estimate of the Mean Value
– This is a sample; we don't know the 'true' mean of the population. The sample mean is our best guess of the true population mean (µ), but with a small sample there is much uncertainty and we need a wide margin of error.
– For the N = 20 sample: mean 235, SEM = 5.12, so 2 SEMs = 10.24. We can be 95% confident that the true mean of the population falls between 224.8 and 245.2.
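A minimal check of the interval quoted above, using the ±2 SEM approximation from the previous slide:

```python
import math

# Mean and SD of the N = 20 sample from slide 15.
mean, sd, n = 235, 22.9, 20
sem = sd / math.sqrt(n)

# 95% confidence interval of the mean, using the slide's +/- 2 SEM approximation.
lower, upper = mean - 2 * sem, mean + 2 * sem
print(f"SEM = {sem:.2f}; 95% CI of the mean: {lower:.1f} to {upper:.1f}")
```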

18 The Effect of Increasing Numbers
– N = 20: mean 235, SD = 22.9, √20 = 4.47, SEM = 5.12.
– N = 500: mean 235, SD = 21.4, √500 = 22.36, SEM = 0.96.
When the number of samples is increased 25 times, the standard error decreases by √25 = 5 times, and the 95% CI of the mean is 5 times narrower.

19 Sample Size Considerations: Viewpoint 1
Fix the sample size at six replicates per group and the CV of the assay at 10%. The significance then depends on the effect size. Two groups (control and treated); cut-off for a significant result P = 0.05 (the mean of the treated group falls outside the 95% CI of the controls).

Effect in treated group   Student's t-test
50% difference            P < 0.0001
25%                       P = 0.0009
15%                       P = 0.015
12%                       P = 0.048
10%                       P = 0.09

20 Sample Size Considerations: Viewpoint 2
Fix the effect size at a 25% difference and the CV of the assay at 10%. The significance then depends on the number of replicates. Cut-off for a significant result P = 0.05.

Replicates per group   Student's t-test
6                      P = 0.0009
5                      P = 0.0029
4                      P = 0.009
3                      P = 0.03
2                      P = 0.12

21 Sample Size Considerations: Viewpoint 3
Fix the effect size at 25% and the number of replicates at 6 per group. The significance then depends on the variability of the data (CV). Cut-off for a significant result P = 0.05.

CV of the assay   Student's t-test
10%               P = 0.0009
15%               P = 0.009
20%               P = 0.037
25%               P = 0.08
30%               P = 0.14
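Viewpoints 1-3 all come down to how the t statistic depends on effect size, replicates and CV. The sketch below illustrates the effect-size sweep of Viewpoint 1 using SciPy's t-test on summary statistics; the control mean of 100 is an arbitrary choice, and the exact p-values depend on how the slides defined the group SDs, so this reproduces the trend rather than the precise numbers above.

```python
from scipy.stats import ttest_ind_from_stats

# Unpaired t-test from summary statistics: control mean 100 (arbitrary),
# 10% CV in both groups, 6 replicates per group, effect size varied.
control_mean, cv, n = 100.0, 0.10, 6

for effect in (0.50, 0.25, 0.15, 0.12, 0.10):
    treated_mean = control_mean * (1 + effect)
    stat, p = ttest_ind_from_stats(control_mean, control_mean * cv, n,
                                   treated_mean, treated_mean * cv, n)
    print(f"{effect:.0%} difference: P = {p:.4f}")
```

Swapping the loop variable for the number of replicates or the CV gives the corresponding sweeps for Viewpoints 2 and 3.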

22 Summary: The Underlying Issues in Demonstrating a Significant Effect
– The size of the effect you are interested in seeing: a big effect (e.g. a 50% difference) will be seen with very few data points; a small effect requires major consideration.
– The precision of the measurement: low CVs need few replicates; high CVs need multiple replicates.

23 How Do We Interpret a Non-Significant Result?
A. There is no difference between the groups.
B. There is a difference but we didn't see it (because of low numbers, a wide SD, etc.).
The decision to reject or not reject the null hypothesis can lead to two types of error.

24 Interpreting a Statistical Test
The decision based on the evidence from the experiment is judged against reality:
– Reject H(o), i.e. declare that the treatment has an effect (significant): a correct decision if the null is false (the treatment truly has an effect); an α (Type I) error, quantified by the p value, if the null is true (the treatment has no effect).
– Do not reject H(o), i.e. declare that the treatment has no effect (not significant): a β (Type II) error if the null is false; a correct decision if the null is true.

25 β-Errors and the Overlap Between Two Sample Distributions
(Figure: two overlapping sample distributions on a continuous data range, showing the mean and 95% CI of sample A and of sample B. Where the distributions overlap, a real effect is missed: a β-error. Where they are distinct, the effect is seen: POWER.)
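The overlap picture can also be made concrete by simulation: generate many small experiments in which a genuine effect exists and count how often the t-test misses it. The means, SD and group size below (a 15% effect, CV 10%, n = 6 per group) are illustrative choices, not values from the slides.

```python
import random
from scipy.stats import ttest_ind

# Simulate many small experiments in which a real 15% effect exists
# (control mean 100, treated mean 115, SD 10, n = 6 per group) and count
# how often the t-test fails to detect it at P < 0.05.
random.seed(1)
n, runs, misses = 6, 2000, 0

for _ in range(runs):
    control = [random.gauss(100, 10) for _ in range(n)]
    treated = [random.gauss(115, 10) for _ in range(n)]
    _, p = ttest_ind(control, treated)
    if p >= 0.05:
        misses += 1  # beta-error: a real effect declared non-significant

beta = misses / runs
print(f"Estimated beta = {beta:.2f}, power = {1 - beta:.2f}")
```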

26 Some Power Calculators
– http://www.dssresearch.com/toolkit/spcalc/power.asp
– http://statpages.org/ (leads to Java applets for power and sample size calculations)
– http://www.stat.uiowa.edu/%7Erlenth/Power/index.html (direct link to a Java applet site)

27 General Formula
r = 16 (CV/d)²
where r = number of replicates per group, CV = coefficient of variation (SD/mean, as a percentage) and d = difference required (as a percentage). Valid for a Type I error of 5% and a power of 80% (i.e. a Type II error of 20%).
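The rule of thumb translates directly into a one-line function; the CV values below are example inputs chosen for illustration.

```python
def replicates_per_group(cv_percent: float, difference_percent: float) -> float:
    """Rule-of-thumb sample size: r = 16 * (CV / d)^2, for alpha = 5% and 80% power."""
    return 16 * (cv_percent / difference_percent) ** 2

# Example: detect a 25% difference at various assay CVs.
for cv in (10, 20, 30):
    print(f"CV {cv}%: about {replicates_per_group(cv, 25):.1f} replicates per group")
```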

28 Some General Comments on Statistics
All statistical tests make assumptions:
– They assume independent data points – ignore this at your peril!
– They assume that the data are a good representation of the wider experimental series under study.
– Some assumptions are very specific to the test being carried out.

29 Final Thoughts
– Ideally, to minimise the total sample number, use equal numbers of control and treated animals. Ethically, if an experiment is particularly stressful, lower numbers may be desirable in the treated group; this requires more animals overall to achieve equivalent power, but can be justified.
– Remember: statistical tests assume that the experiment has been done on a random sample from the complete population of similar items and that each result is an independent event. This is often not the case in laboratory research.
– Statistical logic is only part of the data interpretation. Scientific judgement and common sense are essential.


31 Dealing with Experimental Variation
Randomization – essential!
– It ensures that the remaining, inescapable differences are spread among all the treatment groups.
– It minimises potential bias.
– It provides a reliable estimate of the true variability.
– The control treatment must be one of the randomized arms of the experiment.

32 Power Considerations
– You know the variability of the parameter being measured.
– What effect size do you want to see?
– You need a minimum significance level of p = 0.05.
– What power do you want to have (commonly 0.80)?

