
1 Things to Think About When You Want To Do a Survey What do you want to know? –How can you find out what you need to know? –Similar/same surveys: Can help in design; Can help validate your results; May obviate the need for the study –How best to reach people at JMU?

2 Things to Think About When You Want To Do a Survey Constraints –Money/time –Sample vs. population –Mailing/Bulk

3 Things NOT to do –Too Long (dining survey) –Sensitive Questions Near the Front –Double-barreled questions Measure more than one item at the same time Confounds responses and interpretation. –Ceiling and floor effects

4 Things NOT to do –Indeterminate Time (alumni survey question on volunteerism) –Response sets not matched to question or question type. –Bias Blatant Subtle

5 Things NOT to do –"Don't know," "Neutral," and "No opinion" options that let respondents "weasel out" –Lack of "Don't know," "Neutral," and "No opinion" options where they could carry important meaning –Ceiling and floor effects in responses

6 Examples and Discussion –Study of coffee flavors among visitors to a coffee shop: administered at 8 am, 272 questions long

7 Examples and Discussion –Study of potential clients for new clothing line beginning with questions about measurements and interests in “stretchable fabrics”

8 Examples and Discussion –Do you believe that HOV lanes help reduce congestion and can help reduce pollution? Yes No

9 Examples and Discussion –Do you plan on voting for the independent candidate? Yes No

10 Examples and Discussion –How often do you use a pencil? 7 times 6 times 5 times 4 times 3 times 2 times 1 time

11 Examples and Discussion –How often do you attend soccer games? Very true Partly true Not true False Entirely False

12 Examples and Discussion –Most smart people agree that the HOV lane is vital to reducing congestion. What do you think? I agree I disagree

13 Examples and Discussion –Hybrid vehicles use less gasoline, make less noise, and reduce owner costs. Please check which of the following hybrid programs you support. Tax incentives for owners Special HOV permissions during peak hours None

14 Examples and Discussion –Was your meal as good as you expected? Yes No Don’t know –Alumni survey question about diversity in workplace.

15 Examples and Discussion –How do you feel about euthanasia? Support Neutral Oppose Don’t know

16 Examples and Discussion –Was the NATO embargo of Iraq in the 1990s effective? Yes No Don’t know

17 Examples and Discussion –Please rate your dining experience. Excellent Terrible

18 Examples and Discussion Develop response categories that are mutually exclusive Problem: –From which one of these sources did you first learn about the tornado in Derby?  Radio  TV  Someone at work  While at home  While traveling to work

19 Examples and Discussion During this most recent ride, would you say that your seatbelt was fastened…  All the time, that is, every minute the car was moving  Almost all the time  Most of the time  About half the time  Less than half the time  Not at all during the time the vehicle was moving

20 Examples and Discussion Use cognitive design techniques to improve recall Problem: –We would like to ask about the most recent time you drove or rode anywhere in an automobile or other vehicle such as a pickup or van. During this most recent ride, would you say that your seatbelt was fastened…  All the time  Almost all the time  Most of the time  About half the time  Less than half the time  Not at all

21 Measurement in Marketing Research

22 Basic Question-Response Formats Open-ended Close-ended Scaled-response

23 Basic Question-Response Formats Open-Ended Unprobed Open-ended question: presents no response options to the respondent Unprobed format: seeks no additional information Advantages: Respondent frame of reference; allows respondent to use his or her own words Disadvantages: Difficult to code and interpret Respondents may not give complete answers

24 Basic Question-Response Formats Open-Ended Probed Open-ended question: presents no response options to the respondent Probed format: includes a response probe instructing the interviewer to ask for additional information or answer clarification Advantages: Elicits more-complete answers Respondent frame of reference Disadvantages: Difficult to code, analyze, and interpret

25 Basic Question-Response Formats Close-Ended Dichotomous Close-ended question: provides a set of answers from amongst which the respondent must choose Dichotomous: has only two response options, such as “yes” - “no”; “have” – “have not”; “male” – “female” Advantages: Simple to administer, code, analyze Disadvantages: May oversimplify response options May be in researcher frame of reference

26 Basic Question-Response Formats Close-Ended Multiple Category Close-ended question: provides a set of answers from amongst which the respondent must choose Multiple response: has more than two answer choices; must have “mutually exclusive” and “collectively exhaustive” answer set Advantages: Allows for broad range of possible responses Simple to administer, code, and analyze Disadvantages: May be in researcher frame of reference May not have all appropriate respondent answer options

27 Basic Question-Response Formats Scaled-Response Unlabeled Scaled-response question: uses a scale (parts: a statement, instructions, a response format) to measure respondent feeling, judgment, perception … Unlabeled: uses a scale that may be purely numerical (no words/phrases) or only the endpoints of the scale are identified Advantages: Allows for degree of intensity to be expressed without researcher options Simple to administer, code, and analyze Disadvantage: Scale may not reflect respondents’ view

28 Basic Question-Response Formats Scaled-Response Labeled Scaled-response question: uses a scale (parts: a statement, instructions, a response format) to measure respondent feeling, judgment, perception … Labeled: a scale where all choices/positions are identified with some descriptive word/phrase Advantages: Allows for degree of intensity to be more clearly expressed Simple to administer, code, and analyze More consistency in responses Disadvantage: Scale choices more limited or too detailed

29 Considerations in Choosing a Question-Response Format The nature of the property being measured Gender=dichotomous; liking for chocolate=scale Previous research studies Use format of previous study if comparing The data collection mode Cannot use some scales on the phone The ability of the respondent Kids can only relate to certain types of visual scales The level of analysis points to scale type needed

30 Basic Concepts in Measurement Measurement: determining how much of a property is possessed; numbers or labels are then assigned to reflect the measure Properties: specific features or characteristics of objects, persons, or events that can be used to distinguish them from others Objective properties are physically observable or verifiable Subjective properties are mental constructs

31 Scale Characteristics Determine the Level of Measurement (Level of Data) Nominal data: The use of a descriptor, name, or label, to stand for each “unit” on the scale: “yes”/“no”, “male”/“female”, etc. Ordinal data: Objects, persons, events are placed in rank order on some characteristic in a specific direction. Zero and distance have no meaning; rank 1, 2, 3, etc. Interval data: Units of distance have meaning. There is an arbitrary zero point. Examples are temperatures in degrees Fahrenheit or Celsius; map distance Chicago to ? Ratio data: Multiples have meaning. There is an absolute or natural zero point. Examples: the Kelvin temperature scale, sales/costs in $, market share in %
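The level of measurement determines which summary statistics are meaningful for a variable. A minimal Python sketch of that mapping (illustrative and not exhaustive; the dictionary name is my own):

```python
# Which summary statistics are meaningful at each level of measurement.
# Each level inherits everything permitted at the levels above it.
PERMISSIBLE_STATS = {
    "nominal":  ["mode", "frequency counts"],
    "ordinal":  ["mode", "frequency counts", "median", "percentiles"],
    "interval": ["mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation"],
    "ratio":    ["mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation", "geometric mean", "ratios"],
}

# Example: a mean of ordinal ranks is not meaningful, but a median is.
print("mean" in PERMISSIBLE_STATS["ordinal"])   # False
print("median" in PERMISSIBLE_STATS["ordinal"]) # True
```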

32 Primary Scales of Measurement (figure: runners crossing the finish line in first, second, and third place) Nominal – numbers assigned to runners (e.g., 7, 3, 8) Ordinal – rank order of winners Interval – performance rating on a 0 to 10 scale Ratio – time to finish, in seconds

33 Primary Scales of Measurement Table 8.1

34 A Classification of Scaling Techniques Figure 8.2 Scaling Techniques – Comparative Scales: Paired Comparison, Rank Order, Constant Sum, Q-Sort and Other Procedures – Noncomparative Scales: Continuous Rating Scales, Itemized Rating Scales (Likert, Semantic Differential, Stapel)

35 A Comparison of Scaling Techniques Comparative scales involve the direct comparison of stimulus objects. Comparative scale data must be interpreted in relative terms and have only ordinal or rank order properties. In noncomparative scales, each object is scaled independently of the others in the stimulus set. The resulting data are generally assumed to be interval or ratio scaled.

36 Relative Advantages of Comparative Scales Small differences between stimulus objects can be detected. Same known reference points for all respondents. Easily understood and applied. Involve fewer theoretical assumptions. Tend to reduce halo or carryover effects from one judgment to another.

37 Relative Disadvantages of Comparative Scales Ordinal nature of the data Inability to generalize beyond the stimulus objects scaled.

38 Comparative Scaling Techniques Paired Comparison Scaling A respondent is presented with two objects and asked to select one according to some criterion. The data obtained are ordinal in nature. Paired comparison scaling is the most widely used comparative scaling technique. With n brands, [n(n - 1) /2] paired comparisons are required. Under the assumption of transitivity, it is possible to convert paired comparison data to a rank order.
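The transitivity-based conversion to a rank order can be sketched in Python using the recording convention described on the next slide (a 1 means the column brand was preferred over the row brand, and column sums give preference counts). The function name and brand labels are hypothetical:

```python
def rank_from_paired_comparisons(brands, prefs):
    """Convert a paired-comparison matrix to a rank order.

    prefs[i][j] = 1 if the column brand j was preferred over the row
    brand i, 0 otherwise (diagonal entries are left at 0).
    Summing each column counts how often a brand was preferred; under
    the transitivity assumption, sorting by that count gives a ranking.
    """
    counts = {brand: sum(row[j] for row in prefs)
              for j, brand in enumerate(brands)}
    return sorted(brands, key=lambda b: counts[b], reverse=True)

# Three brands, so 3(3-1)/2 = 3 comparisons: A beats B, A beats C, B beats C
brands = ["A", "B", "C"]
prefs = [
    [0, 0, 0],  # row A: neither B nor C was preferred over A
    [1, 0, 0],  # row B: A was preferred over B
    [1, 1, 0],  # row C: A and B were preferred over C
]
print(rank_from_paired_comparisons(brands, prefs))  # ['A', 'B', 'C']
```

If the responses are intransitive (A beats B, B beats C, C beats A), the counts tie and the sort order is arbitrary, which is exactly why transitivity must be assumed.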

39 Obtaining Shampoo Preferences Using Paired Comparisons Instructions: We are going to present you with ten pairs of shampoo brands. For each pair, please indicate which one of the two brands of shampoo you would prefer for personal use. Recording Form: (a) A 1 in a particular box means that the brand in that column was preferred over the brand in the corresponding row. A 0 means that the row brand was preferred over the column brand. (b) The number of times a brand was preferred is obtained by summing the 1s in each column.

40 Preference for Toothpaste Brands Using Rank Order Scaling Brand Rank Order 1. Crest _________ 2. Colgate _________ 3. Aim _________ 4. Gleem _________ 5. Sensodyne _________ 6. Ultra Brite _________ 7. Close Up _________ 8. Pepsodent _________ 9. Plus White _________ 10. Stripe _________ Form

41 Importance of Bathing Soap Attributes Using a Constant Sum Scale Instructions On the next slide, there are eight attributes of bathing soaps. Please allocate 100 points among the attributes so that your allocation reflects the relative importance you attach to each attribute. The more points an attribute receives, the more important the attribute is. If an attribute is not at all important, assign it zero points. If an attribute is twice as important as some other attribute, it should receive twice as many points.

42 Importance of Bathing Soap Attributes Using a Constant Sum Scale (Fig. 8.5 cont.) Form: average responses of three segments (Segment I, Segment II, Segment III) across eight attributes – 1. Mildness 2. Lather 3. Shrinkage 4. Price 5. Fragrance 6. Packaging 7. Moisturizing 8. Cleaning Power – with each segment’s allocation summing to 100.

43 Noncomparative Scaling Techniques Respondents evaluate only one object at a time, and for this reason non-comparative scales are often referred to as monadic scales. Non-comparative techniques consist of continuous and itemized rating scales.

44 Continuous Rating Scale Respondents rate the objects by placing a mark at the appropriate position on a line that runs from one extreme of the criterion variable to the other. The form of the continuous scale may vary considerably. How would you rate Sears as a department store? Version 1: a plain line anchored “Probably the worst” … “Probably the best” Version 2: the same line with scale points marked along it Version 3: the line with scale points plus verbal anchors (“Very bad,” “Neither good nor bad,” “Very good”) in addition to the end anchors

45 Itemized Rating Scales The respondents are provided with a scale that has a number or brief description associated with each category. The categories are ordered in terms of scale position, and the respondents are required to select the specified category that best describes the object being rated. The commonly used itemized rating scales are the Likert, semantic differential, and Stapel scales.

46 Likert Scale The Likert scale requires the respondents to indicate a degree of agreement or disagreement with each of a series of statements about the stimulus objects. Response categories: Strongly disagree (1), Disagree (2), Neither agree nor disagree (3), Agree (4), Strongly agree (5). Example (X marks the response): 1. Sears sells high quality merchandise. 1 2X 3 4 5 2. Sears has poor in-store service. 1 2X 3 4 5 3. I like to shop at Sears. 1 2 3X 4 5 The analysis can be conducted on an item-by-item basis (profile analysis), or a total (summated) score can be calculated. When arriving at a total score, the categories assigned to the negative statements by the respondents should be scored by reversing the scale.
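Summated scoring with reverse-scoring of negatively worded statements can be sketched in Python. The item names and scores below are illustrative, not taken from an actual survey:

```python
def summated_likert_score(responses, negative_items, scale_max=5):
    """Total a Likert battery, reverse-scoring negatively worded items.

    responses: dict mapping item name -> raw score (1..scale_max).
    negative_items: set of items whose scale must be reversed before
    summing, so that a higher total always means a more favorable view.
    """
    total = 0
    for item, score in responses.items():
        if item in negative_items:
            score = (scale_max + 1) - score  # e.g. 2 -> 4 on a 1-5 scale
        total += score
    return total

# Hypothetical responses: "poor_service" is negatively worded, so its
# raw score of 2 (disagree) is reversed to 4 before summing.
responses = {"high_quality": 2, "poor_service": 2, "like_to_shop": 3}
print(summated_likert_score(responses, {"poor_service"}))  # 9
```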

47 Semantic Differential Scale The semantic differential is a seven-point rating scale with end points associated with bipolar labels that have semantic meaning. SEARS IS: Powerful --:--:--:--:-X-:--:--: Weak Unreliable --:--:--:--:--:-X-:--: Reliable Modern --:--:--:--:--:--:-X-: Old-fashioned The negative adjective or phrase sometimes appears at the left side of the scale and sometimes at the right. This controls the tendency of some respondents, particularly those with very positive or very negative attitudes, to mark the right- or left-hand sides without reading the labels. Individual items on a semantic differential scale may be scored on either a -3 to +3 or a 1 to 7 scale.
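Because the favorable adjective alternates sides, items whose favorable pole is on the left must be reverse-scored so that a higher value always means a more favorable rating. A sketch of -3 to +3 scoring under that convention (the function name is my own):

```python
def score_semantic_item(position, positive_pole="right", low=-3):
    """Score one 7-point semantic differential item on a -3..+3 scale.

    position: 1 (leftmost box) .. 7 (rightmost box).
    positive_pole: which side carries the favorable adjective; items
    with the favorable adjective on the left are reverse-scored.
    """
    value = low + (position - 1)          # maps 1..7 onto -3..+3
    return value if positive_pole == "right" else -value

# "Unreliable --:--:--:--:--:-X-:--: Reliable" -> mark in box 6,
# favorable pole on the right: scores +2 (fairly reliable)
print(score_semantic_item(6, positive_pole="right"))  # 2
# "Powerful --:--:--:--:-X-:--:--: Weak" -> box 5, favorable pole on
# the left, so the raw +1 is reversed to -1 (slightly weak)
print(score_semantic_item(5, positive_pole="left"))   # -1
```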

48 A Semantic Differential Scale for Measuring Self-Concepts, Person Concepts, and Product Concepts 1) Rugged :---:---:---:---:---:---:---: Delicate 2) Excitable :---:---:---:---:---:---:---: Calm 3) Uncomfortable :---:---:---:---:---:---:---: Comfortable 4) Dominating :---:---:---:---:---:---:---: Submissive 5) Thrifty :---:---:---:---:---:---:---: Indulgent 6) Pleasant :---:---:---:---:---:---:---: Unpleasant 7) Contemporary :---:---:---:---:---:---:---: Obsolete 8) Organized :---:---:---:---:---:---:---: Unorganized 9) Rational :---:---:---:---:---:---:---: Emotional 10) Youthful :---:---:---:---:---:---:---: Mature 11) Formal :---:---:---:---:---:---:---: Informal 12) Orthodox :---:---:---:---:---:---:---: Liberal 13) Complex :---:---:---:---:---:---:---: Simple 14) Colorless :---:---:---:---:---:---:---: Colorful 15) Modest :---:---:---:---:---:---:---: Vain

49 Stapel Scale The Stapel scale is a unipolar rating scale with ten categories numbered from -5 to +5, without a neutral point (zero). This scale is usually presented vertically, with the attribute placed between the two halves of the scale: SEARS +5 +4 +3 +2 +1 HIGH QUALITY -1 -2 -3 -4 -5 (a second column runs +5 to -5 around POOR SERVICE; the respondent marks an X at the number indicating how accurately each term describes the store). The data obtained by using a Stapel scale can be analyzed in the same way as semantic differential data.

50 Basic Noncomparative Scales

51 Summary of Itemized Scale Decisions Table 9.2 1) Number of categories – Although there is no single, optimal number, traditional guidelines suggest that there should be between five and nine categories 2) Balanced vs. unbalanced – In general, the scale should be balanced to obtain objective data 3) Odd/even number of categories – If a neutral or indifferent scale response is possible for at least some respondents, an odd number of categories should be used 4) Forced vs. non-forced – In situations where the respondents are expected to have no opinion, the accuracy of the data may be improved by a non-forced scale 5) Verbal description – An argument can be made for labeling all or many scale categories. The category descriptions should be located as close to the response categories as possible 6) Physical form – A number of options should be tried and the best selected

52 Balanced and Unbalanced Scales Jovan Musk for Men is: Balanced scale: Extremely good / Very good / Good / Bad / Very bad / Extremely bad Unbalanced scale: Extremely good / Very good / Good / Somewhat good / Bad / Very bad

53 Rating Scale Configurations Fig. 9.2 Cheer detergent is: (five alternative layouts of the same harsh–gentle scale) 1) Boxes anchored only at the ends: Very harsh … Very gentle 2) Numbered categories anchored Very harsh … Very gentle 3) A vertical column anchored Very harsh / Neither harsh nor gentle / Very gentle 4) Seven labeled blanks: Very harsh / Harsh / Somewhat harsh / Neither harsh nor gentle / Somewhat gentle / Gentle / Very gentle 5) A line anchored Very harsh / Neither harsh nor gentle / Very gentle

54 Some Unique Rating Scale Configurations Fig. 9.3 Thermometer Scale Instructions: Please indicate how much you like McDonald’s hamburgers by coloring in the thermometer. Start at the bottom and color up to the temperature level that best indicates how strong your preference is. (Thermometer anchors: “Like very much” at the top, “Dislike very much” at the bottom.) Smiling Face Scale Instructions: Please point to the face that shows how much you like the Barbie Doll. If you do not like the Barbie Doll at all, you would point to Face 1. If you liked it very much, you would point to Face 5.

55 Some Commonly Used Scales in Marketing Table 9.3 Attitude: Very bad / Bad / Neither bad nor good / Good / Very good Importance: Not at all important / Not important / Neutral / Important / Very important Satisfaction: Very dissatisfied / Dissatisfied / Neither dissatisfied nor satisfied / Satisfied / Very satisfied Purchase intent: Definitely will not buy / Probably will not buy / Might or might not buy / Probably will buy / Definitely will buy Purchase frequency: Never / Rarely / Sometimes / Often / Very often

56 Scale Evaluation Fig. 9.5 Scale evaluation comprises: Reliability (test/retest, alternative forms, internal consistency), Validity (content, criterion, construct – the last including convergent, discriminant, and nomological), and Generalizability.

57 Potential Sources of Error in Measurement 1) Other relatively stable characteristics of the individual that influence the test score, such as intelligence, social desirability, and education. 2) Short-term or transient personal factors, such as health, emotions, and fatigue. 3) Situational factors, such as the presence of other people, noise, and distractions. 4) Sampling of items included in the scale: addition, deletion, or changes in the scale items. 5) Lack of clarity of the scale, including the instructions or the items themselves. 6) Mechanical factors, such as poor printing, overcrowding items in the questionnaire, and poor design. 7) Administration of the scale, such as differences among interviewers. 8) Analysis factors, such as differences in scoring and statistical analysis.

58 Reliability Reliability can be defined as the extent to which measures are free from random error, X_R. If X_R = 0, the measure is perfectly reliable. In test-retest reliability, respondents are administered identical sets of scale items at two different times and the degree of similarity between the two measurements is determined. In alternative-forms reliability, two equivalent forms of the scale are constructed and the same respondents are measured at two different times, with a different form being used each time.

59 Reliability Internal consistency reliability determines the extent to which different parts of a summated scale are consistent in what they indicate about the characteristic being measured. In split-half reliability, the items on the scale are divided into two halves and the resulting half scores are correlated. The coefficient alpha, or Cronbach's alpha, is the average of all possible split-half coefficients resulting from different ways of splitting the scale items. This coefficient varies from 0 to 1, and a value of 0.6 or less generally indicates unsatisfactory internal consistency reliability.

60 Validity The validity of a scale may be defined as the extent to which differences in observed scale scores reflect true differences among objects on the characteristic being measured, rather than systematic or random error. Perfect validity requires that there be no measurement error (X_O = X_T, X_R = 0, X_S = 0). Content validity is a subjective but systematic evaluation of how well the content of a scale represents the measurement task at hand. Criterion validity reflects whether a scale performs as expected in relation to other variables selected (criterion variables) as meaningful criteria.

61 Validity Construct validity addresses the question of what construct or characteristic the scale is, in fact, measuring. Construct validity includes convergent, discriminant, and nomological validity. Convergent validity is the extent to which the scale correlates positively with other measures of the same construct. Discriminant validity is the extent to which a measure does not correlate with other constructs from which it is supposed to differ. Nomological validity is the extent to which the scale correlates in theoretically predicted ways with measures of different but related constructs.


63 Data Collection in the Field, Response Error, and Questionnaire Screening

64 Nonsampling Error in Marketing Research Nonsampling (administrative) error includes All types of nonresponse error Data gathering errors Data handling errors Data analysis errors Interpretation errors

65 Possible Errors in Field Data Collection Field worker error: errors committed by the persons who administer the questionnaires Respondent error: errors committed on the part of the respondent

66 Nonsampling Errors Associated With Fieldwork

67 Possible Errors in Field Data Collection Field-Worker Errors Intentional Intentional field worker error: errors committed when a fieldworker willfully violates the data collection requirements set forth by the researcher Interviewer cheating: occurs when the interviewer intentionally misrepresents respondents. May be caused by unrealistic workload and/or poor questionnaire Leading respondents: occurs when interviewer influences respondent’s answers through wording, voice inflection, or body language

68 Possible Errors in Field Data Collection Field-Worker Errors Unintentional Unintentional field worker error: errors committed when an interviewer believes he or she is performing correctly Interviewer personal characteristics: occurs because of the interviewer’s personal characteristics such as accent, sex, and demeanor Interviewer misunderstanding: occurs when the interviewer believes he or she knows how to administer a survey but instead does it incorrectly Fatigue-related mistakes: occur when interviewer becomes tired

69 Possible Errors in Field Data Collection Respondent Errors Intentional Intentional respondent error: errors committed when there are respondents that willfully misrepresent themselves in surveys Falsehoods: occur when respondents fail to tell the truth in surveys Nonresponse: occurs when the prospective respondent fails 1) to take part in a survey or 2) to answer specific survey questions Refusals (respondent does not answer any questions) vs. Termination (respondent answers at least one question then stops)

70 Possible Errors in Field Data Collection Respondent Errors Intentional Refusals typically result from the topic of the study or potential respondent lack of time, energy or desire to participate Terminations result from a poorly designed questionnaire, questionnaire length, lack of time or energy, and/or external interruption

71 Possible Errors in Field Data Collection Respondent Errors Unintentional Unintentional respondent error: errors committed when a respondent gives a response that is not valid but that he or she believes is the truth

72 Possible Errors in Field Data Collection Respondent Errors Unintentional…cont. Respondent misunderstanding: occurs when a respondent gives an answer without comprehending the question and/or the accompanying instructions Guessing: occurs when a respondent gives an answer when he or she is uncertain of its accuracy Attention loss: occurs when a respondent’s interest in the survey wanes Distractions: (such as interruptions) may occur while questionnaire administration takes place Fatigue: occurs when a respondent becomes tired of participating in a survey

73 How to Control Data Collection Errors Types of Errors – Control Mechanisms Intentional Field Worker Errors: Cheating – good questionnaire, reasonable work expectation, supervision, random checks; Leading respondent – validation Unintentional Field Worker Errors: Interviewer characteristics – selection and training of interviewers; Misunderstandings – orientation sessions and role playing; Fatigue – require breaks and alternate surveys

74 How to Control Data Collection Errors…cont. Types of Errors – Control Mechanisms Intentional Respondent Errors: Falsehoods – assuring anonymity and confidentiality, incentives, validation checks, third-person technique; Nonresponse – assuring anonymity and confidentiality, incentives, third-person technique

75 How to Control Data Collection Errors…cont. Types of Errors – Control Mechanisms Unintentional Respondent Errors: Misunderstandings – well-drafted questionnaire, direct questions (“Do you understand?”); Guessing – well-drafted questionnaire, response options (e.g., “unsure”); Attention loss – reversal of scale endpoints; Distractions and Fatigue – prompters

76 Data Collection Errors with Online Surveys Multiple submissions by the same respondent (not able to identify such situations) Bogus respondents and/or responses (“fictitious person,” disguises or misrepresents self) Misrepresentation of the population (over- representing or under-representing segments with/without online access and use)

77 Nonresponse Error Nonresponse: failure on the part of a prospective respondent to take part in a survey or to answer specific questions on the survey Refusals to participate in survey Break-offs (terminations) during the interview Refusals to answer certain questions (item omissions) Completed interview must be defined (acceptable levels of non-answered questions and types).

78 Nonresponse Error…cont. Response rate: the percentage of the total sample with which interviews were completed. It is lowered by: Refusals to participate in the survey Break-offs (terminations) during the interview Refusals to answer certain questions (item omissions)

79 Nonresponse Error…cont. CASRO response rate formula (not mathematically correct):

80 Reducing Nonresponse Error Mail surveys: Advance notification Monetary incentives Follow-up mailings Telephone surveys: Callback attempts

81 Preliminary Questionnaire Screening Unsystematic checks (flip through the questionnaire stack and look at some) and systematic checks (use a random or systematic sampling procedure to select) of completed questionnaires What to look for in questionnaire inspection: Incomplete questionnaires? Nonresponses to specific questions? Yea- or nay-saying patterns (use of scale extremes only)? Middle-of-the-road patterns (neutrals on all)?

82 Unreliable Responses Unreliable responses are found when conducting questionnaire screening, and an inconsistent or unreliable respondent may need to be eliminated from the sample.

