Instrumentation: Performance & Surveys 47.269: Research I Spring 2010 Dr. Leonard.


1 Instrumentation: Performance & Surveys 47.269: Research I Spring 2010 Dr. Leonard

2 Measuring performance
Can measure optimal or typical performance
Optimal - achievement, aptitude, intelligence tests
Typical - everyday behavior, personality, attitudes
Could measure performance through observation, physiological measures, or self-report tests/measures
Self-report is most common in psychology
Surveys and interviews are two different forms of self-report measures
Used in research that is descriptive, experimental, or non-experimental/correlational

3 Scales of measurement
Based on what you are measuring (e.g., behavior, attitudes, emotion, etc.), you may use a different scale of measurement
Scales differ in the degree of specificity in your data
Variables are either discrete or continuous
Discrete variables are measured as units on a scale with no values in between
Continuous variables can take any value along a scale
Scales of measurement:
Nominal - categories
Ordinal - categories that can be ranked
Interval - scores with equidistant intervals between them
Ratio - scores with equidistant intervals and an absolute zero

4 Scales of measurement: Nominal, Ordinal, Interval, Ratio
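The four scales come up again when choosing analyses, so a concrete illustration may help. Below is a minimal sketch in Python with one invented variable per scale; the variable names and values are hypothetical, not from the lecture.

```python
# Illustrative variables, one per scale of measurement (values are made up).
participant = {
    "major":       "Psychology",  # Nominal: categories with no inherent order
    "class_rank":  3,             # Ordinal: rank order, but gaps are not equal
    "iq_score":    112,           # Interval: equal intervals, no true zero
    "reaction_ms": 412.5,         # Ratio: equal intervals plus an absolute zero
}

# What you can meaningfully do depends on the scale:
# nominal  -> count category frequencies
# ordinal  -> compare order (greater/less), take medians
# interval -> add/subtract, take means; ratios ("twice as smart") are not meaningful
# ratio    -> all of the above, including meaningful ratios (825 ms is twice 412.5 ms)
print(participant)
```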

5 Self-report designs: Pros
Relatively easy way to collect large amounts of data very quickly
Surveys are cheap and can be self-administered (e.g., online)
Written surveys can be given to a large number of people at the same time and can be anonymous, which may promote honest responses
Interviews are expensive but increase response rate and allow for better understanding of questions

6 Self-report designs: Cons
Data are subject to bias, social desirability, demand characteristics, and response sets, which all affect the validity of findings
Bias - researcher’s or participant’s
Social desirability - responding in a way that would be seen as socially acceptable, especially when the topic is sensitive
Demand characteristics - fatigue, memory burden, confusion
Response sets - straight-line responding, extremes, right down the middle
Interviews may taint data if the participant is trying to impress the interviewer or if the interviewer asks questions in a biased way
Option: telephone interviews

7 Mistakes to avoid with survey questions (Mitchell & Jolley, 2007)
1. Leading questions
2. Questions that invite social desirability
3. Double-barreled questions
4. Long questions
5. Negations
6. Irrelevant questions
7. Poorly worded response options
8. Big words
9. Ambiguous words and phrases

8 Planning a survey
1. Determine all the questions you need answered; create a list
2. Choose the appropriate format for your questions
3. Edit questions for clarity
4. Sequence your questions effectively to avoid order or demand effects (it may help to counterbalance the order, as in the sketch after this list)
5. Pilot the survey and refine questions
6. Choose an appropriate sampling strategy
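As a concrete take on step 4, here is a minimal sketch of counterbalancing by randomizing the order of question blocks per participant while keeping similar-topic questions together and demographics last. The block names and questions are invented for illustration.

```python
import random

# Hypothetical question blocks; grouping and wording are illustrative only.
blocks = {
    "study_habits": ["How many hours per week do you study?", "Where do you usually study?"],
    "stress":       ["How stressed do you feel during exams?", "How well do you sleep before exams?"],
    "demographics": ["What is your age?", "What is your year in school?"],
}

def counterbalanced_order(participant_id, seed=0):
    """Shuffle the content blocks per participant, keeping similar-topic
    questions together and putting demographic questions last."""
    content = [name for name in blocks if name != "demographics"]
    rng = random.Random(seed + participant_id)   # reproducible per-participant order
    rng.shuffle(content)
    ordered = content + ["demographics"]
    return [q for name in ordered for q in blocks[name]]

print(counterbalanced_order(participant_id=1))
print(counterbalanced_order(participant_id=2))
```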

9 Formatting survey questions
Decide which format will best fit the type of data you want (N, O, I, R)
Close-ended
Dichotomous (Yes/No) - reliable, but may be less powerful
Multiple choice, categories - caution not to treat them as interval
Likert scales - offer choices of response along a spectrum; may be less reliable (ambiguity of points along the scale) but more powerful, allowing more sophisticated analyses
Open-ended
May result in numeric or quantifiable data
May provide richer, more explanatory information
May not be answered as you intended; coding difficulties

10 Scales of measurement
Nominal - discrete
Ordinal - discrete
Interval - discrete
Ratio - continuous

11 Sequencing survey questions
Overall goal is to increase response rate and accuracy of responses
Put innocuous questions first and personal questions last
Put demographic questions last
Keep similar-topic questions together
Keep questions with similar response options together
You may want to provide the response scale only once to save space
You may want to use reverse scoring (change direction/meaning) to keep participants engaged and to test whether they are responding consistently

12 Good example of reverse scoring: Rosenberg Self-Esteem Scale
1 = Strongly Disagree, 2, 3, 4 = Neutral, 5, 6, 7 = Strongly Agree
_____ 1. At times I think I am no good at all.*
_____ 2. I feel that I have a number of good qualities.
_____ 3. All in all, I am inclined to think that I am a failure.*
_____ 4. I am able to do things as well as most people.
_____ 5. I feel that I do not have much to be proud of.*
_____ 6. I take a positive attitude towards myself.
_____ 7. On the whole, I am satisfied with myself.
_____ 8. I wish I could have more respect for myself.*
_____ 9. I certainly feel useless at times.*
_____ 10. I feel that I am a person of worth, at least on an equal basis with others.
*Reverse scored (items 1, 3, 5, 8, and 9)
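A minimal sketch of how these items could be reverse scored and totaled: on the 1-7 scale above, a reverse-scored item becomes 8 minus the raw response, so higher totals consistently mean higher self-esteem. The example responses below are invented.

```python
# Items marked * above are reverse scored: on a 1-7 scale, reversed = 8 - raw.
REVERSED_ITEMS = {1, 3, 5, 8, 9}

def score_rosenberg(responses):
    """responses: dict mapping item number (1-10) to a raw rating from 1 to 7.
    Returns the total score with reverse-scored items flipped."""
    total = 0
    for item, raw in responses.items():
        if not 1 <= raw <= 7:
            raise ValueError(f"Item {item} has an out-of-range rating: {raw}")
        total += (8 - raw) if item in REVERSED_ITEMS else raw
    return total

# Hypothetical participant: agrees with positive items, disagrees with negative ones.
example = {1: 2, 2: 6, 3: 1, 4: 6, 5: 2, 6: 7, 7: 6, 8: 3, 9: 2, 10: 7}
print(score_rosenberg(example))  # higher total = higher self-esteem
```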

13 Piloting and refining questions
Check for the overall professional appearance of the survey and ease of navigating questions
Avoid skips when possible
Don’t let a question run across two pages
Build in breaks?
PROOFREAD - mistakes are distractions!
Pilot the survey
Carefully consider who would provide honest feedback about the questions
Try to create a pilot sample that represents your target sample
Practice coding responses, because that may lead to further refining that makes analysis easier

14 Minimizing social desirability
Patten recommends 1) observing instead or 2) using projective techniques - an ambiguous task or question whose meaning the participant can’t guess (e.g., ink blot)
Interpersonal techniques:
Professional conduct; increase respect for the research
Create a sense of comfort and security
Reassure participants of the confidentiality of responses
Never look at the measure in front of them
Keep multiple participants separated during measurement
Immediately separate any identifiers from the data

15 Minimizing social desirability
Measurement techniques:
Place the most personal questions mid- to late in the questionnaire
Do not put measures you are relating adjacent to each other
Include some filler or distracting questions
Include social desirability test items, e.g., “I never swear,” “I typically try to help those in need”

16 Interviews
Unstructured - interview bias is a problem; data may be simple to analyze
Semi-structured (most common) - follow-up questions allowed; probably best for pilot studies on a new topic
Structured (reading a survey) - standardized; reduces interviewer bias

17 Delivery of measures: Person to person
Pros:
Higher return rate
Participants may take questions more seriously after meeting the researcher
Participants can ask for clarification
Researcher can read body language
Cons:
Researcher may bias participants’ responses
Greater chance for social desirability
Participants may feel less anonymous, which may decrease honesty

18 Delivery of measures: Internet/Mailing
Pros:
Participants may feel more anonymous, which may increase honesty
Less chance for social desirability
Efficient; less work for the researcher
Cons:
Lower return rate (participants can just trash or delete it)
Participants may not take questions seriously
Participants cannot ask for clarification and may not understand directions

19 Caveats for measurement
Don’t be so focused on the questions that you overlook how you will analyze the responses
Think: What analyses will I be able to carry out if I use these questions?
Make sure you have the appropriate number and type of questions to truly test your hypotheses
E.g., if your hypotheses include a correlation, you have to include at least two interval-scale items (see the sketch below)
Consider including checks of your questions
Approach the same question in two ways, or include a reverse-coded version of the question
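To make the correlation caveat concrete, here is a minimal sketch assuming two interval-scale items rated by the same participants; the items and ratings are invented for illustration.

```python
from statistics import correlation  # Pearson's r, available in Python 3.10+

# Hypothetical 1-7 interval-scale ratings from the same eight participants.
item_a = [2, 4, 5, 3, 6, 7, 1, 5]  # e.g., "I feel prepared for exams"
item_b = [3, 4, 6, 3, 5, 7, 2, 6]  # e.g., "I feel confident about my grades"

r = correlation(item_a, item_b)
print(f"Pearson r = {r:.2f}")  # only meaningful with at least two interval-scale items
```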

