Chapter Nine: Measurement and Scaling: Noncomparative Scaling Techniques
© 2007 Prentice Hall
Chapter Outline
1) Overview
2) Noncomparative Scaling Techniques
3) Continuous Rating Scale
4) Itemized Rating Scale
   i. Likert Scale
   ii. Semantic Differential Scale
   iii. Stapel Scale
5) Noncomparative Itemized Rating Scale Decisions
   i. Number of Scale Categories
   ii. Balanced vs. Unbalanced Scales
   iii. Odd or Even Number of Categories
   iv. Forced vs. Non-forced Scales
   v. Nature and Degree of Verbal Description
   vi. Physical Form or Configuration
6) Multi-item Scales
7) Scale Evaluation
   i. Measurement Accuracy
   ii. Reliability
   iii. Validity
   iv. Relationship between Reliability and Validity
   v. Generalizability
8) Choosing a Scaling Technique
9) Mathematically Derived Scales
10) International Marketing Research
11) Ethics in Marketing Research
12) Summary
Noncomparative Scaling Techniques
Respondents evaluate only one object at a time; for this reason, noncomparative scales are often referred to as monadic scales. Noncomparative techniques consist of continuous and itemized rating scales.
Continuous Rating Scale
Respondents rate the objects by placing a mark at the appropriate position on a line that runs from one extreme of the criterion variable to the other. The form of the continuous scale may vary considerably.
How would you rate Sears as a department store?
Version 1: Probably the worst ------I------ Probably the best (the respondent's mark is shown as I)
Version 2: the same line, presented with scale points added
Version 3: the same line, with the additional verbal anchors "Very bad," "Neither good nor bad," and "Very good"
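Scoring a continuous rating scale amounts to measuring the position of the respondent's mark along the line and converting it to a score. A minimal sketch; the millimetre measurement and the 0-100 score range are illustrative assumptions, not specified on the slide:

```python
def continuous_score(mark_position_mm, line_length_mm=100.0):
    """Convert the position of a respondent's mark into a 0-100 score.

    mark_position_mm: distance of the mark from the 'worst' end of the line.
    line_length_mm: printed length of the line (assumed 100 mm here).
    """
    if not 0 <= mark_position_mm <= line_length_mm:
        raise ValueError("mark must lie on the line")
    return 100.0 * mark_position_mm / line_length_mm

# A mark three-quarters of the way toward "Probably the best":
score = continuous_score(75.0)  # -> 75.0
```

The same function works for any physical line length, since the score is a proportion of the line.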
RATE: Rapid Analysis and Testing Environment
A relatively new research tool, the perception analyzer, provides continuous measurement of "gut reaction." A group of up to 400 respondents is presented with TV or radio spots or advertising copy. The measuring device consists of a dial that contains a 100-point range. Each participant is given a dial and instructed to continuously record his or her reaction to the material being tested. As the respondents turn the dials, the information is fed to a computer, which tabulates second-by-second response profiles. As the results are recorded by the computer, they are superimposed on a video screen, enabling the researcher to view the respondents' scores immediately. The responses are also stored in a permanent data file for use in further analysis. The response scores can be broken down by categories, such as age, income, sex, or product usage.
Itemized Rating Scales
The respondents are provided with a scale that has a number or brief description associated with each category. The categories are ordered in terms of scale position, and the respondents are required to select the specified category that best describes the object being rated. The commonly used itemized rating scales are the Likert, semantic differential, and Stapel scales.
Likert Scale
The Likert scale requires the respondents to indicate a degree of agreement or disagreement with each of a series of statements about the stimulus objects. Each statement is rated on a five-category scale: Strongly disagree (1), Disagree (2), Neither agree nor disagree (3), Agree (4), Strongly agree (5).
1. Sears sells high quality merchandise.
2. Sears has poor in-store service.
3. I like to shop at Sears.
The analysis can be conducted on an item-by-item basis (profile analysis), or a total (summated) score can be calculated. When arriving at a total score, the categories assigned to the negative statements by the respondents should be scored by reversing the scale.
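The summated-scoring rule described above can be sketched in a few lines. The item names and responses are hypothetical; the reversal formula (scale_max + 1 - score) is the standard one for a 1-to-5 Likert scale:

```python
def summated_likert_score(responses, negative_items, scale_max=5):
    """Sum Likert item scores, reverse-scoring negatively worded statements."""
    total = 0
    for item, score in responses.items():
        if item in negative_items:
            score = scale_max + 1 - score  # 1<->5, 2<->4, 3 stays 3
        total += score
    return total

# Hypothetical responses to the three Sears statements
# (1 = strongly disagree, 5 = strongly agree)
responses = {"high_quality": 4, "poor_service": 2, "like_to_shop": 5}
total = summated_likert_score(responses, negative_items={"poor_service"})
# "poor_service" is a negative statement, so 2 is reversed to 4:
# total = 4 + 4 + 5 = 13
```

Without the reversal, agreeing with "Sears has poor in-store service" would wrongly raise the favorability total, which is exactly why the slide prescribes reverse scoring.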
Semantic Differential Scale
The semantic differential is a seven-point rating scale with end points associated with bipolar labels that have semantic meaning.
SEARS IS:
Powerful    --:--:--:--:-X-:--:--:  Weak
Unreliable  --:--:--:--:--:-X-:--:  Reliable
Modern      --:--:--:--:--:--:-X-:  Old-fashioned
The negative adjective or phrase sometimes appears at the left side of the scale and sometimes at the right. This controls the tendency of some respondents, particularly those with very positive or very negative attitudes, to mark the right- or left-hand sides without reading the labels. Individual items on a semantic differential scale may be scored on either a -3 to +3 or a 1 to 7 scale.
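Both scoring conventions mentioned above, and the reversal of items whose favorable adjective is printed on the left, can be sketched as follows. The coding convention (1 = leftmost position, 7 = rightmost) is an assumption for illustration:

```python
def align(raw_1_to_7, favorable_on_left):
    """Reverse-score items whose favorable adjective is printed on the left,
    so that higher always means more favorable."""
    return 8 - raw_1_to_7 if favorable_on_left else raw_1_to_7

def to_bipolar(score_1_to_7):
    """Convert a 1-to-7 score to the equivalent -3 to +3 score."""
    return score_1_to_7 - 4

# "Powerful ... Weak" marked in the 5th position; Powerful (favorable) is on the left:
p = to_bipolar(align(5, favorable_on_left=True))   # -> -1 (slightly toward Weak)
# "Unreliable ... Reliable" marked in the 6th position; favorable pole on the right:
r = to_bipolar(align(6, favorable_on_left=False))  # -> +2 (toward Reliable)
```

Aligning item direction before summing is the semantic-differential analogue of the reverse scoring used for negative Likert statements.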
A Semantic Differential Scale for Measuring Self-Concepts, Person Concepts, and Product Concepts
1) Rugged :---:---:---:---:---:---:---: Delicate
2) Excitable :---:---:---:---:---:---:---: Calm
3) Uncomfortable :---:---:---:---:---:---:---: Comfortable
4) Dominating :---:---:---:---:---:---:---: Submissive
5) Thrifty :---:---:---:---:---:---:---: Indulgent
6) Pleasant :---:---:---:---:---:---:---: Unpleasant
7) Contemporary :---:---:---:---:---:---:---: Obsolete
8) Organized :---:---:---:---:---:---:---: Unorganized
9) Rational :---:---:---:---:---:---:---: Emotional
10) Youthful :---:---:---:---:---:---:---: Mature
11) Formal :---:---:---:---:---:---:---: Informal
12) Orthodox :---:---:---:---:---:---:---: Liberal
13) Complex :---:---:---:---:---:---:---: Simple
14) Colorless :---:---:---:---:---:---:---: Colorful
15) Modest :---:---:---:---:---:---:---: Vain
Stapel Scale
The Stapel scale is a unipolar rating scale with ten categories numbered from -5 to +5, without a neutral point (zero). This scale is usually presented vertically.
SEARS: for each descriptor (e.g., HIGH QUALITY, POOR SERVICE), the respondent marks a number from +5 to -5 indicating how accurately the term describes the store.
The data obtained by using a Stapel scale can be analyzed in the same way as semantic differential data.
Basic Noncomparative Scales
Summary of Itemized Scale Decisions (Table 9.2)
1) Number of categories — Although there is no single, optimal number, traditional guidelines suggest between five and nine categories.
2) Balanced vs. unbalanced — In general, the scale should be balanced to obtain objective data.
3) Odd/even number of categories — If a neutral or indifferent scale response is possible for at least some respondents, an odd number of categories should be used.
4) Forced vs. non-forced — In situations where the respondents are expected to have no opinion, the accuracy of the data may be improved by a non-forced scale.
5) Verbal description — An argument can be made for labeling all or many scale categories. The category descriptions should be located as close to the response categories as possible.
6) Physical form — A number of options should be tried and the best selected.
Balanced and Unbalanced Scales (Fig. 9.1)
Balanced scale — Jovan Musk for Men is: Extremely good / Very good / Good / Bad / Very bad / Extremely bad
Unbalanced scale — Jovan Musk for Men is: Extremely good / Very good / Good / Somewhat good / Bad / Very bad
Rating Scale Configurations (Fig. 9.2)
Five alternative layouts of the same seven-point question, "Cheer detergent is:"
1) End anchors only: Very harsh ... Very gentle
2) End anchors only, in an alternative layout: Very harsh ... Very gentle
3) End anchors plus midpoint: Very harsh ... Neither harsh nor gentle ... Very gentle
4) Seven labeled blanks: Very harsh / Harsh / Somewhat harsh / Neither harsh nor gentle / Somewhat gentle / Gentle / Very gentle
5) End anchors plus midpoint, in an alternative layout: Very harsh ... Neither harsh nor gentle ... Very gentle
Some Unique Rating Scale Configurations (Fig. 9.3)
Thermometer scale — Instructions: Please indicate how much you like McDonald's hamburgers by coloring in the thermometer. Start at the bottom and color up to the temperature level that best indicates how strong your preference is. Form: a thermometer graphic anchored "Dislike very much" at the bottom and "Like very much" at the top.
Smiling face scale — Instructions: Please point to the face that shows how much you like the Barbie Doll. If you do not like the Barbie Doll at all, you would point to Face 1. If you liked it very much, you would point to Face 5. Form: five faces ranging from a frown (Face 1) to a broad smile (Face 5).
Some Commonly Used Scales in Marketing (Table 9.3)
Attitude: Very bad / Bad / Neither bad nor good / Good / Very good
Importance: Not at all important / Not important / Neutral / Important / Very important
Satisfaction: Very dissatisfied / Dissatisfied / Neither dissatisfied nor satisfied / Satisfied / Very satisfied
Purchase intent: Definitely will not buy / Probably will not buy / Might or might not buy / Probably will buy / Definitely will buy
Purchase frequency: Never / Rarely / Sometimes / Often / Very often
Development of a Multi-item Scale (Fig. 9.4)
1. Develop theory.
2. Generate an initial pool of items: theory, secondary data, and qualitative research.
3. Select a reduced set of items based on qualitative judgment.
4. Collect data from a large pretest sample.
5. Statistical analysis.
6. Develop purified scale.
7. Collect more data from a different sample.
8. Evaluate scale reliability, validity, and generalizability.
9. Final scale.
Scale Evaluation (Fig. 9.5)
- Reliability: test/retest, alternative forms, internal consistency
- Validity: content, criterion, construct (convergent, discriminant, nomological)
- Generalizability
Measurement Accuracy
The true score model provides a framework for understanding the accuracy of measurement:
X_O = X_T + X_S + X_R
where:
X_O = the observed score or measurement
X_T = the true score of the characteristic
X_S = systematic error
X_R = random error
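The distinct roles of X_S and X_R can be illustrated with a small simulation (the numbers are invented): systematic error shifts every observed score by the same amount, while zero-mean random error averages out over many measurements.

```python
import random

random.seed(42)  # reproducible illustration

true_score = 70.0        # X_T: hypothetical true attitude score
systematic_error = 5.0   # X_S: e.g., a constant leniency bias in the instrument

# X_O = X_T + X_S + X_R, with X_R drawn from a zero-mean normal distribution
observed = [true_score + systematic_error + random.gauss(0.0, 2.0)
            for _ in range(10_000)]

mean_observed = sum(observed) / len(observed)
# The random component averages out; the mean sits near 75.0 (X_T + X_S),
# not the true score of 70.0 -- random error hurts reliability,
# systematic error hurts validity.
```

This is the intuition behind the later slides: removing X_R alone makes a measure reliable, but only removing X_S as well makes it valid.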
Potential Sources of Error in Measurement (Fig. 9.6)
1) Other relatively stable characteristics of the individual that influence the test score, such as intelligence, social desirability, and education.
2) Short-term or transient personal factors, such as health, emotions, and fatigue.
3) Situational factors, such as the presence of other people, noise, and distractions.
4) Sampling of items included in the scale: addition, deletion, or changes in the scale items.
5) Lack of clarity of the scale, including the instructions or the items themselves.
6) Mechanical factors, such as poor printing, overcrowding of items in the questionnaire, and poor design.
7) Administration of the scale, such as differences among interviewers.
8) Analysis factors, such as differences in scoring and statistical analysis.
Reliability
Reliability can be defined as the extent to which measures are free from random error, X_R. If X_R = 0, the measure is perfectly reliable.
In test-retest reliability, respondents are administered identical sets of scale items at two different times, and the degree of similarity between the two measurements is determined.
In alternative-forms reliability, two equivalent forms of the scale are constructed and the same respondents are measured at two different times, with a different form being used each time.
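Test-retest reliability is typically quantified as the correlation between the two administrations. A self-contained sketch with invented 1-to-5 ratings from eight respondents:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical ratings from the same eight respondents, two administrations
time_1 = [4, 5, 3, 4, 2, 5, 3, 4]
time_2 = [4, 4, 3, 5, 2, 5, 3, 4]
r = pearson_r(time_1, time_2)  # -> about 0.87
```

A correlation near 1 indicates that respondents' relative positions were stable across the two measurements; the same computation applies to alternative-forms reliability, with the two forms in place of the two time points.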
Reliability
Internal consistency reliability determines the extent to which different parts of a summated scale are consistent in what they indicate about the characteristic being measured.
In split-half reliability, the items on the scale are divided into two halves and the resulting half scores are correlated.
The coefficient alpha, or Cronbach's alpha, is the average of all possible split-half coefficients resulting from different ways of splitting the scale items. This coefficient varies from 0 to 1, and a value of 0.6 or less generally indicates unsatisfactory internal consistency reliability.
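Coefficient alpha can be computed directly from the variance of each item and the variance of the summated totals, using the standard formula alpha = k/(k-1) x (1 - (sum of item variances)/(variance of totals)). A minimal sketch with invented data (three items, five respondents):

```python
def variance(values):
    """Population variance of a list of scores."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(items):
    """items: one list of respondent scores per scale item (k lists)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # summated score per respondent
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# Hypothetical 1-5 ratings: three items answered by the same five respondents
items = [
    [5, 4, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 3, 4, 1],
]
alpha = cronbach_alpha(items)  # about 0.93, well above the 0.6 threshold
```

The high alpha here reflects that the three invented items rank the respondents almost identically, i.e., the parts of the summated scale are consistent with one another.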
Validity
The validity of a scale may be defined as the extent to which differences in observed scale scores reflect true differences among objects on the characteristic being measured, rather than systematic or random error. Perfect validity requires that there be no measurement error (X_O = X_T, X_R = 0, X_S = 0).
Content validity is a subjective but systematic evaluation of how well the content of a scale represents the measurement task at hand.
Criterion validity reflects whether a scale performs as expected in relation to other variables selected (criterion variables) as meaningful criteria.
Validity
Construct validity addresses the question of what construct or characteristic the scale is, in fact, measuring. Construct validity includes convergent, discriminant, and nomological validity.
Convergent validity is the extent to which the scale correlates positively with other measures of the same construct.
Discriminant validity is the extent to which a measure does not correlate with other constructs from which it is supposed to differ.
Nomological validity is the extent to which the scale correlates in theoretically predicted ways with measures of different but related constructs.
Relationship Between Reliability and Validity
If a measure is perfectly valid, it is also perfectly reliable. In this case X_O = X_T, X_R = 0, and X_S = 0.
If a measure is unreliable, it cannot be perfectly valid, since at a minimum X_O = X_T + X_R. Furthermore, systematic error may also be present, i.e., X_S ≠ 0. Thus, unreliability implies invalidity.
If a measure is perfectly reliable, it may or may not be perfectly valid, because systematic error may still be present (X_O = X_T + X_S).
Reliability is a necessary, but not sufficient, condition for validity.