1 Common Designs and Quality Issues in Quantitative Research (Research Methods and Statistics)

2 Intended Learning Outcomes
- To familiarise yourself with the different types of quantitative research designs commonly used in occupational psychology research
- To understand the concepts of validity and reliability and why these are important to consider when designing research studies

3 What is Research Design?
“A design specifies the logical structure of a research project and the plan that will be followed in the execution. It determines whether a study is capable of obtaining an answer to the research question in a manner consistent with the appropriate research methodology and the theoretical and philosophical perspectives underlying the study.” (Sim & Wright, 2000: 27)

4 Elements of Research Designs
- phenomena/variables to be researched
- how will these phenomena/variables be measured? (what method/technique?)
- who/where will the data be collected from?
- when will the data be collected?
- what type of data will I have as a result?
- what will be the consequences of this for data analysis?

6 Common Designs
- Group differences
- Relationships between variables: correlations, regression models
- Surveys / questionnaires
- Time series
- Other designs

7 Group Differences
A pre/post design with two groups:
  INTERVENTION group: PRE measure --> POST measure
  CONTROL group:      PRE measure --> POST measure
e.g. to determine the effect of a training intervention on scores
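
To make the design concrete, here is a minimal analysis sketch in Python (not from the slides): it compares gain scores (post minus pre) between the two groups with an independent-samples t-test from SciPy. The group sizes and scores are hypothetical.

```python
# Hypothetical pre/post scores for an intervention group and a control group.
import numpy as np
from scipy import stats

intervention_pre  = np.array([52, 48, 55, 60, 47, 58])
intervention_post = np.array([61, 57, 66, 68, 55, 70])
control_pre       = np.array([50, 49, 54, 59, 46, 57])
control_post      = np.array([52, 50, 55, 60, 47, 58])

# Compare the gain (post minus pre) between the two groups.
gain_intervention = intervention_post - intervention_pre
gain_control      = control_post - control_pre

t, p = stats.ttest_ind(gain_intervention, gain_control)
print(f"t = {t:.2f}, p = {p:.3f}")
```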

9 Group Differences Designs - Variations
- No control group
- More than two groups
- More than one outcome measure
- No time element
- More than two time points
- Etc.

10 Relationships between Variables
- Bivariate relationships: each participant is measured on two or more variables (either both are categorical or both are ordinal or above)
- Regression models: based on linear correlations; various predictor variables and one outcome variable

11 Bivariate Relationships – Categorical Data

                            PUBLIC SCHOOL   PRIVATE SCHOOL
  READING DIFFICULTIES            8                2
  NO READING DIFFICULTIES        24               22

e.g. to find out whether the proportion of pupils with reading difficulties varies from public to private schools
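
A minimal sketch of how such a contingency table could be analysed in Python, using a chi-square test of independence from SciPy; the counts follow the table as reconstructed above and should be treated as illustrative.

```python
# Chi-square test of independence on the 2x2 table above
# (counts as reconstructed from the slide; treat them as illustrative).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[8, 2],      # reading difficulties: public, private
                  [24, 22]])   # no reading difficulties: public, private

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```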

12 Bivariate Relationships – Ordinal, Interval or Ratio Data
e.g. to find out how the amount of TV viewing is correlated with academic performance
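
As an illustration (not from the slides), a short correlation sketch in Python with hypothetical TV-viewing and performance values; Pearson's r suits interval/ratio data, Spearman's rho the ordinal case.

```python
# Hypothetical data: weekly TV viewing (hours) and an academic performance score.
import numpy as np
from scipy import stats

tv_hours    = np.array([2, 5, 8, 10, 12, 15, 18, 20])
performance = np.array([85, 80, 78, 70, 72, 65, 60, 58])

r, p_r = stats.pearsonr(tv_hours, performance)       # interval/ratio data
rho, p_rho = stats.spearmanr(tv_hours, performance)  # ordinal (rank-based) alternative
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}), Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```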

13 Regression
e.g. which are the best predictors of academic performance?
e.g. which are the best predictors for whether a child will get a statement of educational needs?

Predictors: PREVIOUS SAT SCORE, GENDER, FREE MEALS, TV VIEWING, ATTENDANCE RECORD --> Outcome: ACADEMIC PERFORMANCE
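
A minimal multiple-regression sketch in Python, assuming statsmodels is available; the data are simulated and only three of the slide's predictors are included, purely for illustration.

```python
# Multiple linear regression: several predictors, one outcome.
# The data are simulated and the variable names are placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100
prev_sat   = rng.normal(500, 80, n)
tv_viewing = rng.normal(10, 4, n)
attendance = rng.uniform(0.7, 1.0, n)
# Simulated outcome with noise, purely for illustration.
performance = 0.1 * prev_sat - 1.5 * tv_viewing + 30 * attendance + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([prev_sat, tv_viewing, attendance]))  # add intercept
model = sm.OLS(performance, X).fit()
print(model.summary())
```

For a binary outcome such as whether a child receives a statement of educational needs, a logistic regression (e.g. statsmodels' Logit) would take the place of OLS.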

14 Surveys / Questionnaires
- May be used: as an outcome measure (evaluation); to describe (the attitudes of) a particular group – SURVEY
- Surveys can be used to check for: differences between groups; relationships between variables

15 Time Series
- multiple data points (50+) – recorded data
- useful for evaluation when a trend and/or seasonality are present
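
A sketch of inspecting trend and seasonality in Python with statsmodels' seasonal_decompose; the monthly series below is synthetic and a reasonably recent statsmodels/pandas is assumed.

```python
# Decompose a synthetic monthly series into trend, seasonal and residual parts.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
months = pd.date_range("2018-01-01", periods=60, freq="MS")  # 60 monthly points (50+)
trend = np.linspace(100, 130, 60)
seasonal = 10 * np.sin(2 * np.pi * np.arange(60) / 12)
series = pd.Series(trend + seasonal + rng.normal(0, 3, 60), index=months)

result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())   # estimated trend component
print(result.seasonal.head(12))       # one full seasonal cycle
```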

16 Other Designs
- Single case designs, e.g. A-B-A-B phase designs (alternating baseline and intervention phases for a single case)

17 Which One to Choose?
Your choice of study design needs to take into account:
- your research question
- the data available / the tests that are feasible
- other details: is a trend or seasonality present?

18 Making Sure your Study is a Good Quality One
Just two things to worry about:
- High internal and external validity
- Validity and reliability of instruments

19 Internal and External Validity
- Internal validity refers to the absence of confounding variables (related to design) (e.g. can we really conclude that the children’s reading performance has improved because of our IV, the intervention we introduced?)
- External validity refers to whether we can generalise our results to our target population (related to sampling)

20 Threats to Internal Validity
- Regression to the mean
- Mortality
- Compensatory rivalry
- Experimenter bias
- Diffusion of benefit
- Maturation

21 External Validity
- Can we generalise our findings to other people/places/settings/conditions/etc.?
- Related to:
  - artificiality: does the experimental situation resemble the real world?
  - sample selection: is our sample different from the population we want to apply our findings to?

22 High Quality Instruments
- Validity: does your test measure what it claims to?
- Reliability: does it measure it consistently?
[Target diagram showing three cases: not reliable, therefore not valid; reliable but not valid; both reliable and valid. Reproduced from Trochim (2002), http://www.socialresearchmethods.net/kb/reliability.htm]
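
The slides do not name a specific reliability index, but one widely used measure of internal consistency is Cronbach's alpha; the following Python sketch computes it from hypothetical questionnaire responses.

```python
# Cronbach's alpha as a simple index of internal-consistency reliability.
# The 4-item, 6-respondent data below are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = questionnaire items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```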

23 Relationship between Validity and Reliability
- Reliability is undermined by random error
- Validity is undermined by systematic error
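
A toy simulation (not from the slides) illustrating this distinction: random error lowers the consistency of repeated measurements, while a constant systematic error leaves them consistent but biased; all numbers are made up.

```python
# Toy simulation: random error vs. systematic error. All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.normal(100, 15, 200)

noisy_1 = true_scores + rng.normal(0, 10, 200)  # random error, first measurement
noisy_2 = true_scores + rng.normal(0, 10, 200)  # random error, second measurement
biased  = true_scores + 12                      # constant systematic error (bias)

# Random error lowers test-retest consistency (reliability)...
print("test-retest r:", round(np.corrcoef(noisy_1, noisy_2)[0, 1], 2))
# ...while the biased measure stays consistent but is systematically off (validity).
print("mean bias:", round((biased - true_scores).mean(), 2))
```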

24 When Is Quality Compromised?
- Ethics
- Practical issues
THINK ABOUT:
- How do validity and ethics relate to one another?
- Is it ethical to sacrifice validity in a study to make it more ethical?

