SURVEY CONSTRUCTION, VALIDITY, RELIABILITY What to think about when creating a survey instrument.

1 SURVEY CONSTRUCTION, VALIDITY, RELIABILITY What to think about when creating a survey instrument.

2 TOPICS
- What is a survey?
- What are the steps in a survey study?
- How do I construct questions?
- Validity
- Reliability

3 WHAT IS A SURVEY? WHAT ARE THE STEPS IN A SURVEY STUDY?
A survey is a system for collecting information from or about people to describe, compare, or explain their knowledge, attitudes, and behavior. (Fink, 2003)
Steps:
1. Define the objectives/goals*
2. Design the study: population*, sample*, sample size*, timeline*, instrument construction
3. Integrate validity and reliability into survey instrument development (including pilot testing)
   A. Review, test, revise
   B. Repeat as necessary
4. After IRB approval: administer the survey (internet, mail, mixed mode)*
5. Data cleaning and management*
6. Data analysis*
7. Reporting results*
*Not today

4 ADVANTAGES AND DISADVANTAGES OF SURVEYS
Advantages:
- Anonymous
- Inexpensive
- Easy to compare and analyze
- Lots of people, lots of data
- Can use pre-existing instruments
Disadvantages:
- Wording can bias responses
- Impersonal
- Doesn't get the full story
- Low response rates (consider the Tailored Design Method by Don Dillman)
- Self-selection bias
- May not be generalizable

5 SOCIAL EXCHANGE
Establish trust:
- Token of appreciation
- Sponsorship by a legitimate authority
- Make the task appear important
- Invoke other exchange relationships
Increase rewards
Reduce social costs:
- Avoid subordinate language, embarrassment, inconvenience
- Minimize requests for personal information

6 WRITING QUESTIONS
- Match survey questions to research objectives, goals, research questions, or hypotheses!
- Straightforward questions yield straightforward responses.
- Avoid questions about which you are merely curious.
- Include necessary demographics whenever possible (to describe the sample).
- If you can use a previously developed survey instrument, do it!
  - Validity and reliability are already established
  - Get the author's permission; some instruments cost money
  - Make sure it is accepted in the literature
  - Do NOT change it! (You may change it with the original author's permission, but then you lose the established validity and reliability.)

7 QUESTION DESIGN
Physical format/appearance:
- Qualtrics is nice
- Visual layout should be clean
Order of questions:
- Questions about the announced subject first!
- Order should be logical
- Group items that are similar
- Personal and demographic questions at the end

8 CAUTIONS
- You get what you measure: choose your questions carefully
- Keep it simple: be aware of the literacy level of your respondents
- Shorter is better: concise questions are easier to understand and easier to answer
- Ranking is hard: it is often misunderstood, leading to invalid data
- People often don't read the instructions
- Each concept gets its own question! (Beware of AND and OR)
- Questions should be concrete and behavioral, not conceptual
- Binary questions (yes/no) contain less information than ordinal questions (hierarchy of information content)
- Define terms before asking questions
- Avoid jargon and loaded words

9 QUESTION TYPES
- Open vs. closed
- Levels of measurement (information):
  - Nominal (sex, race/ethnicity)
  - Ordinal (Likert, Likert-type, unipolar ordinal)
  - Interval
  - Ratio

10 ORDINAL QUESTIONS
Unipolar:
- e.g., never, rarely, sometimes, often, always
- 4 to 5 points
Bipolar (Likert and Likert-type):
- strongly disagree, disagree, somewhat disagree, neither agree nor disagree, somewhat agree, agree, strongly agree
- Even vs. odd number of response options
- Left-side bias
Continuous as ordinal:
- Down-coding age and income
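Down-coding a continuous response into ordinal brackets can be sketched with a small helper. This is an illustrative example, not part of the slides; the age brackets and cutpoints are hypothetical.

```python
from bisect import bisect_left

def down_code(value, cutpoints, labels):
    """Collapse a continuous response into an ordinal category.

    cutpoints are inclusive upper bounds for each bracket;
    labels has one more entry than cutpoints (the open-ended top bracket).
    """
    return labels[bisect_left(cutpoints, value)]

# Hypothetical age brackets for a survey report.
age_labels = ["18-29", "30-44", "45-59", "60+"]
cuts = [29, 44, 59]
print([down_code(a, cuts, age_labels) for a in [23, 35, 59, 71]])
```

Down-coding trades information for respondent comfort: exact age or income is more sensitive to ask, and the ordinal brackets are usually all the analysis needs.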

11 VALIDITY
- Face validity
- Content validity
- Criterion-related validity
  - Concurrent (correlated with a gold standard)
  - Predictive (ability to forecast)
- Construct validity
  - Convergent
  - Divergent

12 FACE VALIDITY
- Generally assessed by untrained judges
- Does the instrument appear to measure the construct?
- Very weak evidence

13 CONTENT VALIDITY
- Definition: the extent to which an instrument adequately samples the research domain of interest when attempting to measure phenomena (Carmines and Zeller, 1979)
- Begin with a literature review to identify the entire domain of content
- Develop items associated with the identified domain of content
- Content Validity Index (proportion of experts who agree an item is relevant), Cohen's kappa
- In mixed-mode surveys, triangulate quantitative responses with open-ended responses
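The item-level Content Validity Index mentioned above is simple to compute: it is the proportion of expert raters who mark the item as relevant. A minimal sketch, with hypothetical ratings on the common 1-4 relevance scale:

```python
def content_validity_index(ratings, relevant=(3, 4)):
    """I-CVI: proportion of expert ratings marking the item relevant.

    ratings: one item's ratings, one per expert, on a 1-4 relevance
    scale where 3 and 4 are conventionally counted as "relevant".
    """
    return sum(r in relevant for r in ratings) / len(ratings)

# Six hypothetical experts rate one item (1 = not relevant .. 4 = highly relevant).
print(content_validity_index([4, 3, 4, 4, 2, 4]))  # 5 of 6 experts agree
```

Items with a low I-CVI are candidates for revision or removal before the pilot test.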

14 CRITERION-RELATED VALIDITY
Compares the new instrument against another instrument or predictor.
Concurrent validity:
- Gold standard: known/published psychometric properties, published and used by researchers in the field
- Could choose a test that measures the opposite construct
- Assessed with a correlation coefficient
Predictive validity:
- Useful in predicting future behavior/events/attitudes/outcomes
- Assessed with a correlation coefficient
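The correlation coefficient referred to here is typically Pearson's r between scores on the new instrument and scores on the criterion. A self-contained sketch with invented scores (both instruments and their values are hypothetical):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between new-instrument and criterion scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

new_scores  = [10, 14, 9, 16, 12, 18]   # hypothetical new instrument
gold_scores = [21, 28, 20, 30, 26, 31]  # hypothetical gold standard
print(round(pearson_r(new_scores, gold_scores), 2))
```

A strong positive r supports concurrent validity; if the comparison test measures the opposite construct, a strong negative r is the supportive result.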

15 CONSTRUCT VALIDITY
- Theoretical relationship of the new instrument with other constructs or behaviors
- The new instrument correlates with other similar (not identical) measures (convergent validity) and does not correlate with others (divergent validity)
- The new instrument discriminates one group from another
- Not always quantifiable
- Requires theory related to the constructs

16 RELIABILITY
Type of Reliability   | Characteristics
----------------------|---------------------------------------------------------------
Test-Retest           | Consistency across time; same group of respondents, short time lapse
Alternate Form        | Differently worded items to obtain the same information; like different forms of the SAT; requires two entire forms
Internal Consistency  | Groups of items measuring the same construct; Cronbach's alpha
Intraobserver         | Same observer measures twice
Interobserver         | More than one observer measures the same experimental unit

17 CRONBACH'S ALPHA
- Measures internal consistency
- Only appropriate when scale scores are developed
- Only use items that are thought to measure a single construct
- Prefer scales to have Cronbach's alpha greater than .7
- If Cronbach's alpha is greater than .9, ask whether some items are redundant
- Not generally appropriate for knowledge tests (unless they measure a single construct)
- Not appropriate when scale scores are not calculated
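Cronbach's alpha follows from the standard formula alpha = (k/(k-1)) * (1 - sum of item variances / variance of the total score). A minimal stdlib-only sketch with hypothetical Likert responses:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k lists, each holding one item's scores across
    the same n respondents (here, 5-point Likert responses).
    """
    k = len(items)
    # Total scale score per respondent: sum across the k items.
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three hypothetical 5-point Likert items from six respondents.
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.87: above the .7 guideline
```

With alpha around .87 this hypothetical scale clears the .7 guideline without approaching the .9 redundancy flag.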

18 SCALING AND SCORING
- Hypothesize which items form a scale
- Perform a factor analysis
- Eliminate items that cross-load, fail to discriminate among respondents, correlate too strongly with other items, or are frequently misunderstood or left blank
- Calculate Cronbach's alpha, and alpha if item deleted
- Interpretability is important! What does a big number mean? What does a small number mean?
- When you read the items, do they make sense as a single construct?
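The "alpha if item deleted" diagnostic above can be sketched by recomputing alpha with each item dropped in turn; an item whose removal raises alpha is a candidate for elimination. The items and their response values are hypothetical, constructed so that one item clearly fits the scale poorly.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from k item-score columns over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(c) for c in items) / variance(totals))

def alpha_if_deleted(items):
    """Alpha recomputed with each item dropped in turn."""
    return [round(cronbach_alpha(items[:i] + items[i + 1:]), 2)
            for i in range(len(items))]

# Three hypothetical Likert items; the second runs against the other two.
items = [
    [4, 5, 3, 4, 2, 5],
    [2, 3, 4, 2, 5, 3],   # reversed pattern relative to the others
    [5, 5, 2, 4, 3, 4],
]
print(alpha_if_deleted(items))
```

Here, dropping the second item yields by far the highest alpha, flagging it for review (it may simply need reverse-coding, or it may not belong to the construct at all).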

19 OTHER CONSIDERATIONS
Sample size requirements increase with:
- Decreasing levels of information
- Smaller effect sizes to be detected
Use software for sample size and power analysis.
Pilot testing is critical:
- Helps identify errors in the survey
- Identifies potential design/redesign issues
- Predicts potential problems you might encounter
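The effect-size point can be made concrete with the textbook normal-approximation formula for a two-sided, two-group comparison: n per group = 2 * ((z_(1-alpha/2) + z_(1-beta)) / d)^2, where d is the standardized effect size. This sketch uses only the standard library and is an approximation, not a substitute for dedicated power-analysis software:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison
    of means, via the normal approximation to the t-test."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # about 1.96 for alpha = .05
    z_b = z(power)           # about 0.84 for 80% power
    return ceil(2 * ((z_a + z_b) / effect_size) ** 2)

# Large, medium, and small effects (Cohen's d = 0.8, 0.5, 0.2):
print([n_per_group(d) for d in (0.8, 0.5, 0.2)])
```

Note how sharply the required sample grows as the detectable effect shrinks: halving d roughly quadruples n, which is why small-effect survey studies need large samples.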

20 END NOTES
Consider applying for CTR-IN pilot grants! http://www.isu.edu/healthsciences/ichr/
Teri Peterson
Best way to contact is through email: peteteri@isu.edu
Phone is less reliable (x5333 or x4861)

