1 CHAPTER 13: MEASUREMENT
Professor Gary Merlo, Westfield State College
2 WHAT IS TO BE MEASURED?
This is not as simple a question as it may at first seem. A precise definition of the concept may require a description of how it will be measured, and frequently there is more than one way of measuring a concept. Further, true measurement of concepts requires a process of precisely assigning scores or numbers to the attributes of people or objects. Precise measurement in marketing research therefore requires a careful conceptual definition, an operational definition, and a system of consistent rules for assigning numbers or scales.
3 Concept
A generalized idea about a class of objects, attributes, occurrences, or processes
4 Concepts
Before the measurement process can occur, the researcher has to identify and define the concepts relevant to the problem. A concept (or construct) is a generalized idea about a class of objects, attributes, occurrences, or processes. Concepts such as brand loyalty and personality present great problems in terms of definition and measurement.
5 Operational Definition
Specifies what the researcher must do to measure the concept under investigation
6 Operational Definitions
Concepts must be made operational in order to be measured. An operational definition gives meaning to a concept by specifying the activities or operations necessary to measure it. It specifies what the investigator must do to measure the concept under investigation. An operational definition tells the investigator to "do such-and-such in so-and-so manner."
8 RULES OF MEASUREMENT
A rule is a guide instructing us what to do. An example of a measurement rule might be: "Assign the numerals 1 through 7 to individuals according to how brand loyal they are. If the individual is extremely brand loyal, assign a 1. If the individual is a total brand switcher with no brand loyalty, assign a 7." Operational definitions help the researcher specify the rules for assigning numbers.
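A rule like the one above can be sketched in code. A minimal Python illustration; the switch-rate input and the linear cut-offs are hypothetical, not from the slide:

```python
# Hypothetical sketch of the slide's measurement rule:
# 1 = extremely brand loyal, 7 = total brand switcher.
def assign_loyalty_score(switch_rate):
    """Map an observed brand-switch rate (0.0 to 1.0) to a 1-7 score.

    The linear mapping is illustrative only; a real study would fix
    the rule in its operational definition.
    """
    return 1 + round(switch_rate * 6)

print(assign_loyalty_score(0.0))  # extremely loyal -> 1
print(assign_loyalty_score(1.0))  # total switcher -> 7
```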
9 Scale
A series of items arranged according to value for the purpose of quantification; a continuous spectrum
10 SCALES
A scale may be defined as any series of items arranged progressively according to value or magnitude, into which an item can be placed according to its quantification. A scale is a continuous spectrum or series of categories. The purpose of scaling is to represent, usually quantitatively, an item's, a person's, or an event's place in the spectrum.
11 Four Types of Scales
Nominal Scale: The simplest type of scale. Numbers or letters are assigned to objects and serve only as labels for identification or classification.
12 Four Types of Scales (Continued)
Ordinal Scale: This scale arranges objects or alternatives in rank order according to their magnitude.
13 Four Types of Scales (Continued)
Interval Scale: In a horse race, for example, not only is the order of finish known but so is the distance between the horses. Interval scales not only indicate order; they measure distance in units of equal intervals. The location of the zero point is arbitrary. The classic example of an interval scale is the Fahrenheit temperature scale: if the temperature is 80 degrees, it cannot be said to be twice as hot as a temperature of 40 degrees.
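The Fahrenheit point can be checked numerically. Because the zero point is arbitrary, the ratio of two temperatures changes when the scale (and its zero) changes; a short Python sketch:

```python
# The same two temperatures give different ratios on the Fahrenheit
# and Celsius scales, so ratios are meaningless on an interval scale.
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

print(80 / 40)                             # ratio in Fahrenheit: 2.0
print(round(f_to_c(80) / f_to_c(40), 2))   # same temperatures in Celsius: 6.0
```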
14 Four Types of Scales (Continued)
Ratio Scale: Ratio scales have absolute rather than relative quantities. For example, both money and weight are ratio scales because they possess an absolute zero and interval properties. The absolute zero represents a point on the scale where there is an absence of the given attribute. However, for most behavioral marketing research, interval scales are typically the best measurements.
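By contrast, ratios on a ratio scale survive a change of units, because the zero point is fixed. A quick Python check with weight (the kilogram-to-pound factor is approximate):

```python
# Weight has an absolute zero, so "twice as heavy" holds in any unit.
KG_TO_LB = 2.20462  # approximate conversion factor

ratio_kg = 20 / 10
ratio_lb = (20 * KG_TO_LB) / (10 * KG_TO_LB)
print(ratio_kg, round(ratio_lb, 10))  # both ratios come out to 2.0
```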
16 Index Measures
This chapter has thus far focused on measuring a concept with a single question or a single observation. However, measuring more complex concepts may require more than one question, because the concept has several attributes. An attribute is a single characteristic or fundamental feature pertaining to an object, person, situation, or issue. Multi-item instruments for measuring a single concept with several attributes are called index measures, or composite measures. For example, an index of social class may be based on weighted averages of attributes such as residence and occupation. Asking different questions in order to measure the same concept provides a more accurate cumulative measure than a single-item measure.
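A composite measure like this reduces to a weighted average of attribute scores. A minimal sketch; the attribute names, scores, and weights are all hypothetical:

```python
# Minimal composite-index sketch; attributes, 1-7 scores, and
# weights are invented for illustration.
def composite_index(scores, weights):
    """Weighted average of single-attribute scores."""
    total = sum(weights.values())
    return sum(scores[a] * w for a, w in weights.items()) / total

respondent = {"residence": 4, "occupation": 6}
weights = {"residence": 0.4, "occupation": 0.6}
print(round(composite_index(respondent, weights), 2))  # -> 5.2
```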
17 THREE CRITERIA FOR GOOD MEASUREMENT
Reliability: Applies to a measure when similar results are obtained over time and across situations. It is the degree to which measures are free from random error and therefore yield consistent results. There are two dimensions of reliability: repeatability and internal consistency.
18 Assessing the Repeatability of a Measure
Repeatability is the first aspect of reliability. The test-retest method involves administering the same scale or measurement to the same respondents at two separate points in time to test for stability. If the measure is stable over time, the test administered under similar conditions should obtain similar results. A high stability correlation, or consistency between the two measures at time 1 and time 2, indicates a high degree of reliability. There are two problems with measures of test-retest reliability. First, the first measure may sensitize the respondents to their participation in a research project and subsequently influence the results of the second measure. Second, if the time period between measures is long, there may be attitude change, or some other form of maturation of the subjects, that will affect the responses.
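Test-retest stability is typically summarized as the correlation between the two administrations. A self-contained Python sketch with made-up respondent scores:

```python
# Correlate the same respondents' scores at time 1 and time 2;
# a high r suggests a stable measure. Data are invented.
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [5, 3, 4, 2, 5]
time2 = [5, 2, 4, 3, 5]
print(round(pearson(time1, time2), 2))  # -> 0.85, a high stability correlation
```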
19 Assessing the Repeatability of a Measure (Continued)
The second dimension of reliability, internal consistency, concerns the homogeneity of the measure. An attempt to measure an attitude may require asking several questions or a battery of scale items. To measure the internal consistency of a multiple-item measure, scores on subsets of items within the scale must be correlated. The split-half method, in which a researcher checks the results of one half of the scale items against the other half, is the most basic way to test internal consistency. The equivalent-form method is used when two alternative instruments are designed to be as equivalent as possible. If there is a high correlation between the two scales, the researcher can conclude that the scales are reliable. However, if there is low correspondence, the researcher will be unsure whether the measure has intrinsically low reliability or whether the particular equivalent form has failed to be similar to the other form.
20 Validity
The purpose of measurement is to measure what we intend to measure. For example, in measuring intention to buy, there could be a systematic bias toward identifying brands "I wish I could afford" rather than the brand usually purchased. Validity addresses the problem of whether a measure does indeed measure what it purports to measure; if it does not, there will be problems.
21 Attempts to Provide Evidence of a Measure's Degree of Validity
Face or content validity: This refers to the subjective agreement of professionals that a scale logically appears to accurately reflect what it purports to measure.
Criterion validity: Criterion validity is an attempt by researchers to answer the question "Does my measure correlate with other measures of the same construct?" Consider the physical concept of length: if a new measure of length were developed, finding that it correlated with other measures of length would provide some assurance that the measure was valid. Criterion validity may be classified as either concurrent validity (when the measure is taken at the same time as the criterion measure) or predictive validity (when the measure predicts a future event).
Construct validity: Construct validity is established by the degree to which the measure confirms a network of related hypotheses generated from a theory based on the concept. In its simplest form, if the measure behaves the way it is supposed to in a pattern of intercorrelation with a variety of other variables, there is evidence for construct validity. This is a complex method of establishing validity and is of less concern to the applied researcher than to the basic researcher.
22 Reliability and Validity on Target
[Figure: three rifle targets. Target A (old rifle): low reliability. Target B (new rifle): high reliability. Target C (new rifle, sun glare): reliable but not valid.]
25 Sensitivity
The sensitivity of a scale is important, particularly when changes in attitudes or other hypothetical constructs are under investigation. Sensitivity refers to the ability of an instrument to accurately measure variability in stimuli or responses. The sensitivity of a scale based on a single question or item can be increased by adding questions or items. In other words, because index measures allow for a greater range of possible scores, they are more sensitive than single-item scales.
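The range argument can be made concrete: a single 5-point item permits only 5 distinct scores, while a summated index of several such items permits many more. A tiny Python check; the item counts are illustrative:

```python
# Distinct scores available to a single k-point item versus a
# summated index of n such items (sums run from n*1 up to n*k).
n_items, points = 4, 5
single_item = points
index = n_items * points - n_items + 1
print(single_item, index)  # 5 distinct scores vs. 17
```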