Research Methods.


1 Research Methods

2 What is the difference? Between a field and a natural experiment?
Between a lab and a field experiment? Between a lab and a natural experiment? Between a natural and a quasi-experiment? Between a correlation and an observation? Between using an observation to collect data and the observational method?

3 Reliability This relates to the measure used.
There are two types: internal and external. External: the ability to produce the same results consistently over time. Test this with test-retest reliability: give the same people the same test on two occasions. The two sets of scores are unlikely to be identical, but you can correlate them and look at the correlation coefficient. Internal: the consistency of the measurement scale itself. Internal reliability is good if the scale measures the same thing throughout. Test this with the split-half method, correlating scores on one half of the scale with scores on the other (see the sketch below).
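To make the two checks concrete, here is a minimal Python sketch (not part of the original slides; the scores are invented for illustration). Test-retest reliability is the correlation between the same people's scores on two occasions; split-half reliability is the correlation between their totals on the two halves of the same scale.

# Minimal sketch of the two reliability checks, using made-up scores.
# Pearson's r is computed by hand so no external libraries are needed.

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Test-retest (external reliability): same people, same test, two occasions.
scores_time1 = [12, 18, 9, 22, 15, 20]
scores_time2 = [13, 17, 10, 21, 14, 19]
print("Test-retest r:", round(pearson_r(scores_time1, scores_time2), 2))

# Split-half (internal reliability): each person's total on one half of the
# scale versus their total on the other half.
first_half_totals = [7, 9, 4, 11, 8, 10]
second_half_totals = [6, 9, 5, 11, 7, 10]
print("Split-half r:", round(pearson_r(first_half_totals, second_half_totals), 2))

A coefficient close to +1 on either check suggests the measure is reliable; a low or negative coefficient suggests it is not.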

4 Improving Reliability
Take more than one score from each individual and average them, to even out random anomalies in scores. Run a pilot study to check that the scale works and that the measurement can be replicated. Use standardisation (agreed procedures and behaviour categories) to improve inter-rater reliability; a simple agreement check is sketched below.
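As a rough illustration of the inter-rater point (not from the original slides; the observation records are invented), once observers share standardised behaviour categories their agreement can be checked directly, for example as simple percentage agreement:

# Sketch: two observers code the same ten intervals with the same
# standardised behaviour categories. Percentage agreement is one
# simple check of inter-rater reliability.
observer_a = ["aggressive", "play", "play", "idle", "aggressive",
              "play", "idle", "idle", "play", "aggressive"]
observer_b = ["aggressive", "play", "idle", "idle", "aggressive",
              "play", "idle", "play", "play", "aggressive"]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
percent_agreement = 100 * agreements / len(observer_a)
print(f"Inter-rater agreement: {percent_agreement:.0f}%")  # 80% for this invented data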

5 Reliability in practice
The BDI has good test-retest reliability. What does this mean, and what kind of reliability does it show? When some people took a locus of control test, the first half of the scale suggested they were internal while the second half suggested they were external. What issue is this, and what method has been used to test it? You are using an observation to collect data, with many researchers. How could you standardise this to improve reliability?

6 Confounding and Extraneous
Extraneous: variables that can affect the participants and distort the results, e.g. age, intelligence, temperature. Confounding: variables that vary systematically with the IV, acting like an unintended second IV, and therefore have a real effect on the DV. So an extraneous variable could distort the results; a confounding variable actually does.

7 Validity Are the conclusions drawn justified?
Have we measured what we wanted to measure?

8 Internal Validity Internal: does the study test what you want it to test? Are we measuring the effect of the IV on the DV, or are we accidentally measuring a confounding variable? Give an example of when this could happen. We need to control demand characteristics, experimenter bias and order effects: even if the measurement scale is valid, a study becomes invalid if these are not controlled.

9 Improving Internal Validity
Demand characteristics? Use a single-blind design. Experimenter bias? Use a double-blind design. Order effects? Use a matched pairs or independent measures design.

10 Testing Validity Face validity: the weakest form.
Criterion validity (concurrent and predictive): does the measure produce similar findings to other established measures, and does it predict future performance? A concurrent check is sketched below.
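To illustrate the concurrent form of criterion validity (a hypothetical sketch, not part of the original slides), scores on a new scale can be correlated with scores from an established measure taken at the same time; a strong positive correlation supports criterion validity.

# Sketch of concurrent (criterion) validity: correlate a new scale with an
# established measure taken at the same time. Uses statistics.correlation
# (Python 3.10+); the data are invented for illustration.
from statistics import correlation

new_scale        = [14, 22, 9, 18, 25, 11, 20]
established_test = [15, 21, 10, 17, 24, 12, 19]

r = correlation(new_scale, established_test)
print(f"Concurrent validity r = {r:.2f}")  # close to +1 supports criterion validity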

11 External Validity How well can the results be generalised beyond the study and its sample? Population validity. Ecological validity. A measure cannot be unreliable and valid: an unreliable measure (one which does not accurately measure the behaviour) means low internal validity, because we are not really testing the effect of the IV. We can, however, have reliable results that are not valid: they may have good test-retest reliability but lack population or ecological validity, or may not really be a justifiable measure of the IV.

12 Internal VS External Trade-off
If internal validity is tightly controlled to avoid confounding variables, ecological validity is usually reduced, so external validity suffers. In what situations would each be more important?

13 BBC Test

14 Ethical Issues

15 Activity 1 P 548

16 Mark our work...

17 Past Paper

