Estimation of the reliability of statistical research results.


1 Estimation of the reliability of statistical research results

2 The necessity of estimating the reliability of results is determined by the scope of the research. In a complete study (the general aggregate, or population), where every unit of observation is examined, only one value of a given index can be obtained. The general aggregate is always reliable because it includes all units of observation. Official statistics are an example of a general aggregate.

3 The general aggregate is rarely used in medical-biological research; most studies are selective (sample-based). The law of large numbers is the basis for forming a reliable selective aggregate. It states: with a sufficiently large number of observations, it can be asserted with high reliability that the average of the studied characteristic in a selective aggregate will differ only slightly from the average in the whole general aggregate.

4 A selective aggregate always contains an error, because not all units of observation are included in the study. The reliability of a selective study depends on the size of this error: the greater the number of observations, the smaller the error and the smaller the random fluctuations of the index. Therefore, to decrease the error, the number of observations must be increased.
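The effect described above can be sketched numerically with Python's standard library; the normally distributed blood-pressure population below is hypothetical, chosen only to show that larger samples estimate the general-aggregate mean more closely:

```python
import random
import statistics as st

random.seed(42)  # fixed seed so the run is reproducible
# hypothetical general aggregate: 100,000 blood-pressure values
population = [random.gauss(120, 15) for _ in range(100_000)]
true_mean = st.mean(population)

# selective aggregates of growing size: by the law of large numbers,
# the sample mean approaches the general-aggregate mean as n grows
sample_means = {n: st.mean(random.sample(population, n))
                for n in (10, 100, 10_000)}
```

With a different seed the individual sample means change, but the shrinking error for larger n is the pattern the law of large numbers guarantees.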

5 Basic criteria of reliability (representativeness):
- Error of representation (m)
- Confidence limits
- Coefficient of reliability (Student's criterion, t): the significance of the difference between averages or relative values

6 Basic criteria of reliability (representativeness): The error of representation (m) expresses the degree of reliability of an average or relative value; it shows how much the result of a selective study differs from the result that would be obtained from a complete study of the general aggregate.

7 Basic criteria of reliability (representativeness): Confidence limits carry the properties of the selective aggregate over to the general one: they show the probable range of the index in the general aggregate, i.e. the minimum and maximum values within which the index of the general aggregate can lie.

8 Basic criteria of reliability (representativeness): The coefficient of reliability (Student's criterion, t) measures the significance of the difference between averages or relative values. Student's criterion shows whether the corresponding indices in two separate selective aggregates differ reliably.

9 The use of averages in health protection: for describing the organization of work of health protection establishments (average bed occupancy, average length of hospital stay, number of visits per inhabitant, etc.);

10 The use of averages in health protection: for describing indices of physical development (body length, body mass, head circumference of newborns, etc.);

11 The use of averages in health protection: for determining medical-physiological indices of the organism (pulse rate, respiratory rate, arterial pressure level, etc.);

12 The use of averages in health protection: for the estimation of medical-social and sanitary-hygienic research data (average number of laboratory tests, average food-ration norms, level of radiation contamination, etc.).

13 Averages: Averages are widely used for comparisons over time, which makes it possible to characterize the main regularities of a phenomenon's development. For example, the regularity of growth in children of a certain age is expressed in generalized indices of physical development. Regularities in the dynamics (increase or decrease) of pulse rate, respiration, and clinical parameters in certain diseases are reflected in statistical indices that represent the physiological parameters of the organism.

14 Average Values
- Mean: the average of the data; sensitive to outlying data
- Median: the middle of the data; not sensitive to outlying data
- Mode: the most commonly occurring value
- Range: the difference between the largest observation and the smallest
- Interquartile range: the spread of the middle 50% of the data; commonly used for skewed data
- Standard deviation: a single number that measures how much the observations vary around the mean
- Symmetrical data: data that follow a normal distribution (mean = median = mode); report mean, standard deviation, and n
- Skewed data: not normally distributed (mean ≠ median ≠ mode); report median and interquartile range
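All of these measures are available in Python's standard `statistics` module; a minimal sketch over a small hypothetical data set with one outlier (30), which pulls the mean but not the median:

```python
import statistics as st

data = [4, 5, 5, 6, 7, 8, 9, 10, 30]  # hypothetical data; 30 is an outlier

mean = st.mean(data)                 # sensitive to the outlier
median = st.median(data)             # robust to the outlier
mode = st.mode(data)                 # most commonly occurring value
value_range = max(data) - min(data)  # largest minus smallest observation
stdev = st.stdev(data)               # sample standard deviation

# interquartile range: spread of the middle 50% of the data
q1, _, q3 = st.quantiles(data, n=4)
iqr = q3 - q1
```

Here the mean (about 9.3) sits above the median (7), the classic signature of right-skewed data, so the slide's advice to report median and IQR applies.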

15 Average Values: The limit is the pair of extreme variants in a variation series: lim = Vmin … Vmax

16 Average Values: The amplitude is the difference between the extreme variants of a variation series: Am = Vmax − Vmin

17 Average Values: The average quadratic (standard) deviation characterizes the dispersion of the variants around the mean (the internal structure of the totality).

18 Average quadratic deviation: σ = √(Σd² / n) (simple arithmetical method)

19 Average quadratic deviation: d = V − M, the true deviation of a variant from the arithmetic mean.

20 Average quadratic deviation: σ = i·√(Σa²p/n − (Σap/n)²) (method of moments, where i is the class interval, a the conditional deviation, and p the frequency)

21 The average quadratic deviation is needed for:
1. Estimating the typicality of the arithmetic mean (M is typical for a series if σ is less than one third of the mean).
2. Obtaining the error of the average value.
3. Determining the average norm of the studied phenomenon (M ± 1σ), the sub-norm (M ± 2σ), and the edge deviations (M ± 3σ).
4. Constructing the sigma net for estimating the physical development of an individual.
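The uses above can be sketched in a few lines; the height values below are hypothetical, and the simple arithmetical (population) formula σ = √(Σd²/n) corresponds to `statistics.pstdev`:

```python
import statistics as st

# hypothetical body heights, cm
heights = [168, 170, 171, 172, 172, 173, 174, 175, 176, 179]
M = st.mean(heights)
sigma = st.pstdev(heights)  # sqrt(sum(d**2) / n), d = V - M

# 1. M is typical for the series if sigma is below one third of the mean
is_typical = sigma < M / 3

# 3. average norm, sub-norm, and edge deviations
norm = (M - sigma, M + sigma)            # M ± 1σ
sub_norm = (M - 2 * sigma, M + 2 * sigma)  # M ± 2σ
edge = (M - 3 * sigma, M + 3 * sigma)    # M ± 3σ
```

For these data M = 173 cm and σ = 3 cm, so the average norm band is 170–176 cm; the ±1σ/±2σ bands are also the rows of the sigma net used to grade individual physical development.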

22 Average quadratic deviation: this dispersion of the variants around the average is characterized by the average quadratic deviation (σ).

23 The coefficient of variation is a relative measure of variability: the percentage ratio of the standard deviation to the arithmetic average.

24 Terms Used To Describe The Quality Of Measurements
- Reliability is inter-subject variability divided by the sum of inter-subject variability and measurement error.
- Validity refers to the extent to which a test or surrogate measures what we think it is measuring.

25 Measures Of Diagnostic Test Accuracy
- Sensitivity is defined as the ability of the test to identify correctly those who have the disease.
- Specificity is defined as the ability of the test to identify correctly those who do not have the disease.
- Predictive values are important for assessing how useful a test will be in the clinical setting at the individual patient level. The positive predictive value is the probability of disease in a patient with a positive test. Conversely, the negative predictive value is the probability that the patient does not have disease if he has a negative test result.
- The likelihood ratio indicates how much a given diagnostic test result will raise or lower the odds of having a disease relative to the prior probability of disease.
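These definitions reduce to simple ratios over a 2×2 table; a sketch with hypothetical counts (300 patients, 100 of whom have the disease):

```python
# hypothetical 2x2 table for a diagnostic test
tp, fn = 90, 10   # diseased:    test positive / test negative
fp, tn = 30, 170  # not diseased: test positive / test negative

sensitivity = tp / (tp + fn)  # correctly identified among the diseased
specificity = tn / (tn + fp)  # correctly identified among the healthy
ppv = tp / (tp + fp)          # P(disease | positive test)
npv = tn / (tn + fn)          # P(no disease | negative test)

# positive likelihood ratio: how much a positive result raises the odds
lr_positive = sensitivity / (1 - specificity)
```

With these counts the test is 90% sensitive and 85% specific, yet the positive predictive value is only 75%, illustrating why predictive values (which depend on disease prevalence) matter at the individual patient level.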

26 Measures Of Diagnostic Test Accuracy

27 Expressions Used When Making Inferences About Data
- Confidence intervals: the results of any study sample are an estimate of the true value in the entire population. The true value may actually be greater or less than what is observed.
- Type I error (alpha) is the probability of incorrectly concluding there is a statistically significant difference in the population when none exists.
- Type II error (beta) is the probability of incorrectly concluding that there is no statistically significant difference in a population when one exists.
- Power is a measure of the ability of a study to detect a true difference.
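A confidence interval for a sample mean can be sketched from the error of representation m = s/√n introduced earlier; the blood-pressure sample is hypothetical, and z = 1.96 is the large-sample normal approximation (small samples would use Student's t instead):

```python
import math
import statistics as st

sample = [118, 120, 122, 124, 126, 128, 130, 132]  # hypothetical systolic BP
n = len(sample)
M = st.mean(sample)
m = st.stdev(sample) / math.sqrt(n)  # error of representation (standard error)

# approximate 95% confidence interval for the population mean
ci = (M - 1.96 * m, M + 1.96 * m)
```

The interval says: were the whole general aggregate studied, its mean would, with 95% confidence, lie within these limits around the sample mean.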

28 Multivariable Regression Methods
- Multiple linear regression is used when the outcome is a continuous variable such as weight. For example, one could estimate the effect of a diet on weight after adjusting for the effect of confounders such as smoking status.
- Logistic regression is used when the outcome is binary, such as cure or no cure. Logistic regression can be used to estimate the effect of an exposure on a binary outcome after adjusting for confounders.

29 Survival Analysis
- Kaplan-Meier analysis measures the ratio of surviving subjects (or those without an event) divided by the total number of subjects at risk for the event. Every time a subject has an event, the ratio is recalculated. These ratios are then used to generate a curve that graphically depicts the probability of survival.
- Cox proportional hazards analysis is similar to the logistic regression method described above, with the added advantage that it accounts for time to a binary event in the outcome variable. Thus, one can account for variation in follow-up time among subjects.
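The Kaplan-Meier recalculation rule can be sketched in plain Python; the follow-up data below are hypothetical, with `event=1` for death and `event=0` for a censored (lost to follow-up) subject:

```python
# hypothetical follow-up data: (time, event)
subjects = [(2, 1), (3, 0), (4, 1), (4, 1), (6, 0), (7, 1), (9, 0)]

def kaplan_meier(data):
    """Recalculate the survival ratio every time a subject has an event."""
    survival = 1.0
    at_risk = len(data)
    curve = []
    for time, event in sorted(data):
        if event:
            # multiply by the fraction of at-risk subjects who survived
            survival *= (at_risk - 1) / at_risk
            curve.append((time, survival))
        at_risk -= 1  # the subject leaves the risk set either way
    return curve

km_curve = kaplan_meier(subjects)
```

Censored subjects shrink the risk set without lowering the survival estimate, which is exactly how Kaplan-Meier handles variable follow-up time; plotting `km_curve` as a step function gives the survival curve shown on the next slide.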

30 Kaplan-Meier Survival Curves

31 Why Use Statistics?

32 Descriptive Statistics
- Identifies patterns in the data
- Identifies outliers
- Guides choice of statistical test

33 Percentage of Specimens Testing Positive for RSV (respiratory syncytial virus)

34 Descriptive Statistics

35 Distribution of Course Grades

36 Describing the Data with Numbers. Measures of Dispersion: range, standard deviation, skewness.

37 Measures of Dispersion
- Range: highest to lowest values
- Standard deviation: how closely values cluster around the mean value
- Skewness: refers to the symmetry of the curve
40 The Normal Distribution
- Mean = median = mode
- Skew is zero
- 68% of values fall within ±1 SD
- 95% of values fall within ±2 SDs
[Figure: normal curve centered on the mean, median, and mode, with the ±1σ and ±2σ ranges marked]
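The 68% and 95% figures can be checked from the normal cumulative distribution; a sketch using the error function from Python's `math` module:

```python
import math

def normal_coverage(k):
    """Fraction of a normal distribution lying within ±k standard deviations."""
    return math.erf(k / math.sqrt(2))

within_1sd = normal_coverage(1)  # about 0.683
within_2sd = normal_coverage(2)  # about 0.954
```

The more precise values are 68.27% and 95.45%, which is also why M ± 2σ was called the sub-norm band earlier: only about 5% of a normally distributed characteristic falls outside it.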

