1 Sampling techniques validity & reliability Lesson 8

2 Exam question: 15 minutes to answer
Research has shown that music can affect the ability to concentrate. Design an experiment that could be carried out in a classroom to test the effects of two different kinds of music on a task requiring concentration. You must use a repeated measures design. In your answer you should:
- fully operationalise the independent and dependent variables
- provide details of how you would control extraneous variables
- describe the procedure that you would use.
You should provide sufficient detail for the study to be carried out. (10 marks)

3 Some of the most common errors were as follows:
- Ignoring the requirement to use repeated measures and converting the experiment to an independent groups design
- Failing to counterbalance the order of presentation of the two types of music
- Producing two concentration tests which were not matched for difficulty
- Testing music v. no music
- Focusing on trivial controls (breakfast, temperature) and ignoring important ones (volume of music).
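Counterbalancing, which many answers missed, can be sketched in a few lines. This is a minimal illustration, not part of the mark scheme; the participant labels, group size and condition names are invented for the example.

```python
import random

# Hypothetical class of eight participants in a repeated measures design.
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]
random.shuffle(participants)  # randomise who gets which order

# Alternate the order of the two music conditions (A and B) so that any
# practice or fatigue effects are spread evenly across both conditions.
orders = {p: ("A then B" if i % 2 == 0 else "B then A")
          for i, p in enumerate(participants)}

counts = [list(orders.values()).count(o) for o in ("A then B", "B then A")]
print(counts)  # half the participants complete each order
```

Because every participant still does both conditions, the design remains repeated measures; only the order differs between the two halves of the sample.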

4 Individually answer the questions below… Sampling techniques (10 minutes)
1. Define sampling.
2. What is a target population?
3. If a sample is representative, what does this really mean?
4. What does sampling error mean?
5. What is sample bias?
6. Why are larger samples generally more representative?

5 Sampling techniques (10 minutes)

Sampling method    | Define / How would you do it? | Strength | Limitation
Opportunity sample |                               |          |
Random sample      |                               |          |
Volunteer sample   |                               |          |
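A random sample (the second row of the table) can be illustrated in code. This is a sketch under an invented scenario: a hypothetical target population of 30 named students, from which 10 are drawn.

```python
import random

# Hypothetical target population of 30 students (names are invented).
population = [f"Student {i}" for i in range(1, 31)]

# In a random sample every member of the target population has an equal
# chance of selection; random.sample draws without replacement, like
# pulling names from a hat.
sample = random.sample(population, k=10)
print(len(sample))  # 10 participants, no one selected twice
```

Drawing without replacement is what distinguishes this from simply rolling a die 10 times: the same student can never appear in the sample twice.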

6 Decide which of the 3 sampling techniques is being used in the examples.
- Students investigating the link between age and attitudes to the legalisation of drugs stop people in the street and ask them their views.
- A university lecturer requests participants for an experiment into how expectation affects perception by placing an advert on the common room notice board.
- A teacher selects a sample of Year 9 students to take part in a test of selective attention by picking every third student from the register.

7 Validity
A little boy once said to another, 'Frogs have their ears in their legs, you know. I can prove it.' 'Rubbish,' said the other. 'How could you possibly prove that?' The first boy proceeded to chop off a frog's legs and started to shout at the frog, 'Jump! Go on, jump!... See, he can't hear me!'
This experiment has several faults:
- It is ethically tasteless.
- The boy has confounded the IV (ears) with a variable essential for demonstration of the DV (jumping).
- There is no control study showing that frogs can understand English or obey commands.

8 Validity
In groups, on the piece of paper in front of you, answer these questions:
- What is validity?
- What are the types of validity?
- How do you assess validity?
- How do you improve validity?

9 Validity: Definition Validity refers to whether or not the investigation measures what it is supposed to measure.

10 What are the types of validity?
Internal validity: whether or not the study is really testing what it is supposed to test. Did the manipulation of the IV cause the observed changes in the DV?
External validity: the extent to which the findings of the study can be generalised across people (population validity), places (ecological validity) and time periods (historical validity).

11 How do you assess internal validity?
Consider the extent to which we have control over variables. Also, specifically for self-reports, observations or other tests, there are the following ways of checking validity:
- Face validity
- Concurrent validity
- Predictive validity

12 How do you improve internal validity?
Here are some ways to improve internal validity:
- Control for variables (e.g. demand characteristics, investigator effects, participant variables)
- Standardisation
- Randomisation
- Blind and double-blind procedures

13 How do you assess external validity?
Consider the three types of external validity by looking at where the study took place, when it was conducted and who the participants were. Also, you can look at replications of the study: has it been replicated with the same results in a different environment, at a different time and on a different sample? A meta-analysis can combine the results of such replications.

14 How do you improve external validity?
To improve external validity, replicate the study in the same way but in a different environment, at a different time and on a different sample.

15 Past exam question and example answer
What is meant by 'validity'? How could the psychologist have assessed the validity of the questionnaire used to measure the severity of symptoms? [4 marks]
E.g.: 'Validity refers to whether or not the questionnaire measures what it is supposed to measure (1 mark). Concurrent validity would involve getting a doctor to assess the symptoms (1 mark) and seeing how closely they match the score on the questionnaire (1 mark). If the two matched, the questionnaire would have high validity (1 mark).'

16 Reliability

17 Reliability: Definition
If a study can be replicated in the same way and it gains consistent results, it can be said to be reliable. There are three types:
- Researcher reliability
- Internal reliability
- External reliability

18 Researcher reliability
Refers to the consistency of the researchers who are gathering the data. In observations or self-reports this is called inter-rater reliability.
Assessing: compare the observations of two observers and conduct a correlation to see if their results are similar.
Improving: training, operationalisation of categories, and conducting a pilot study.

19 Internal reliability
Refers to the consistency of the measures used within the study.
Assessing: split-half method. The observation categories or questions on a questionnaire are split in half randomly, and the two sets of responses for each half compared; they should indicate the same result. This can be assessed more formally by carrying out a correlation.
Improving: look at particular questions that seem to be giving a different result and change them until the split-half method indicates an improved correlation.

20 External reliability
Refers to the extent to which a measure is consistent from one occasion to another.
Assessing: test-retest. Give the same participants the same questionnaire at two different times, with no feedback after their first answers. The time gap needs to be long enough that they cannot remember how they answered the first time, but not so long that they have changed in a way relevant to the questionnaire. The two sets of answers should be correlated.
Improving: check individual questions from the test-retest that do not positively correlate, alter them, and run another test-retest until they do.

21 On the whiteboards…

Type                      | Researcher reliability | Internal reliability | External reliability
Definition                |                        |                      |
How would you assess it?  |                        |                      |
How could you improve it? |                        |                      |

22 Complete questions on validity and reliability

