
1 Measurement and Scaling Concepts
Business Research Methods, 9e. Zikmund, Babin, Carr, Griffin. Chapter 13.
©2013 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

2 LEARNING OUTCOMES
1. Determine what needs to be measured to address a research question or hypothesis.
2. Distinguish levels of scale measurement.
3. Know how to form an index or composite measure.
4. List the three criteria for good measurement.
5. Perform a basic assessment of scale reliability and validity.

3 What Do I Measure?
Measurement
The process of describing some property of a phenomenon, usually by assigning numbers in a reliable and valid way.
◗ E.g., measurement can be illustrated by the way instructors assign students' grades.
Concept
A generalized idea about a class of objects, attributes, occurrences, or processes.
◗ E.g., age, education, customer satisfaction.
A researcher has to know what to measure before knowing how to measure it.

4 EXHIBIT 13.1 Are There Any Validity Issues with This Measurement?
All measurement, particularly in the social sciences, contains error.
◗ E.g., instructors may use a percentage scale all semester long and then be required to assign a letter grade for a student's overall performance. Does this conversion produce any measurement problem?

5 Operational Definitions
Operationalization
The process of identifying scales that correspond (match) to variance in a concept; researchers measure concepts through this process.
Scales
A device providing a range of values that correspond to different characteristics, or amounts of a characteristic, exhibited in observing a concept.
◗ In other words, scales provide correspondence rules indicating that a certain value on a scale corresponds to some true value of the concept.

6 Operational Definitions (cont'd)
Correspondence Rules
Indicate the way a certain value on a scale corresponds to some true value of a concept.
◗ E.g., "Assign the numbers 1 through 7 according to how much trust you have in your sales representative: if the sales rep is perceived as completely untrustworthy, assign a 1; if completely trustworthy, assign a 7."
Constructs
Concepts measured with multiple variables. Sometimes a single variable cannot capture a concept alone; using multiple variables to measure one concept often provides a more complete account than any single variable could.
◗ E.g., loyalty has been measured by a combination of customer share and commitment.

7 Levels of Scale Measurement
There are four levels, or types, of scale measurement:
Nominal
Assigns a value to an object for identification or classification purposes; the most elementary level of measurement.
Ordinal
Ranking scales allowing things to be arranged based on how much of some concept they possess; ordinal scales also have nominal properties.
◗ E.g., research participants often are asked to rank things based on preference.
◗ E.g., an ordinal scale does not tell how far apart the horses finished, but it is good enough to let someone know the result of the race.

8 Levels of Scale Measurement (cont'd)
Interval
Captures information about differences in quantities of a concept; has both nominal and ordinal properties.
◗ E.g., if a professor assigns grades to term papers using a numbering system ranging from 1 to 20, the scale not only represents the fact that a student with a 16 outperformed a student with a 12, but also shows by how much (4).

9 Levels of Scale Measurement (cont'd)
Ratio Scales
Represent the highest form of measurement: they have all the properties of interval scales, with the additional attribute of representing absolute quantities; characterized by a meaningful absolute zero. Ratio scales represent the absolute meaning of the numbers on the scale.
◗ E.g., if we know that horse 7 took 1 minute 59 2/5 seconds to finish the race, and we know the times for all the other horses, we can determine the time between horses 7, 6, and 5.
◗ In other words, if we know the ratio information regarding the performance of each horse (the time to complete the race), we can derive the interval-level and the ordinal-level information.
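The horse-race point above can be sketched in a few lines of code. This is an illustrative sketch only; the horse names and finish times are invented, not taken from the chapter. It shows how ratio-level data (finish times, which have a true zero) let us derive both interval-level information (time gaps) and ordinal-level information (finish order):

```python
# Ratio-level data: finish times in seconds (true zero exists).
# Horse names and times are hypothetical illustrations.
times = {"horse_5": 121.2, "horse_6": 120.1, "horse_7": 119.4}

# Interval-level information derived from ratio data: the gap between finishers.
gap_7_to_6 = times["horse_6"] - times["horse_7"]  # roughly 0.7 seconds

# Ordinal-level information derived from ratio data: the finish order.
order = sorted(times, key=times.get)  # fastest first

print(order)  # ['horse_7', 'horse_6', 'horse_5']
print(gap_7_to_6)
```

The reverse does not work: knowing only the finish order (ordinal) would not let us recover the time gaps.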

10 EXHIBIT 13.4 Nominal, Ordinal, Interval, and Ratio Scales Provide Different Information

11 Mathematical and Statistical Analysis of Scales
Although you can put numbers into formulas and perform calculations with almost any numbers, the researcher has to know the meaning behind the numbers before meaningful conclusions can be drawn.
Discrete Measures
Measures that can take on only one of a finite number of values.
◗ E.g., common discrete scales include yes-or-no responses, color choices, or any scale that involves selecting from among a small number of categories.
◗ Nominal and ordinal scales are discrete measures.
◗ The central tendency of discrete measures is best captured by the mode (the value that occurs most often).
Continuous Measures
Measures that reflect the intensity of a concept by assigning values that can take on any value along some scale range.
◗ E.g., measuring sales for each salesperson by the dollar amount sold is an example of a continuous measure.
◗ Because the values are continuous, the differences between any two selected values are meaningful.
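The discrete-versus-continuous distinction above maps directly onto which statistic is appropriate. A minimal sketch, with made-up survey responses and sales figures: the mode summarizes a discrete (nominal) measure, while the mean suits a continuous measure.

```python
# Hypothetical data for illustration only.
from statistics import mode, mean

color_choice = ["red", "blue", "blue", "green", "blue"]   # discrete (nominal) measure
sales_dollars = [1200.50, 980.00, 1450.25, 1100.75]       # continuous measure

# Mode: the value that occurs most often -- appropriate for discrete measures.
print(mode(color_choice))

# Mean: appropriate for continuous measures like dollar sales.
print(mean(sales_dollars))
```

Computing a mean of the color codes would be meaningless, which is the chapter's point: know the meaning behind the numbers before calculating.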

12 Index Measures
Constructs are concepts that require multiple variables to measure them sufficiently.
Attributes
Single characteristics or fundamental features that pertain to an object, person, or issue.
◗ E.g., a consumer's attitude toward some product is usually a function of multiple attributes.
Multi-item instruments for measuring a construct are called index measures, or composite measures.
Index Measures
An index assigns a value based on how much of the concept being measured is associated with an observation; indexes often are formed by putting several variables together.
◗ E.g., a social class index might be based on three weighted variables: occupation, education, and area of residence.
Composite Measures
Assign a value to an observation based on a mathematical derivation of multiple variables.
◗ E.g., salesperson satisfaction may be measured by combining questions such as "How satisfied are you with your job?", "How satisfied are you with your location?", and "How satisfied are you with the opportunities your job offers?"

13 Computing Scale Values
Summated Scale
A scale created by simply summing (adding together) the responses to each item making up the composite measure.
Reverse Coding
Means that the value assigned for a response is treated oppositely from the other items.

14 Computing Scale Values (cont'd)
Example of Computing a Composite Scale
The exhibit on how much the consumer trusts a website illustrates a composite that represents a summated scale: the responses to each item are added together, with reverse-coded items scored oppositely from the others.
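The summation-with-reverse-coding procedure above can be sketched as follows. The item responses, the 5-point format, and the negatively worded item are assumptions for illustration, not the book's exhibit. On a 1-to-k scale, a reverse-coded response r becomes (k + 1) - r, so a high score always means "more" of the concept:

```python
# Hypothetical 5-point items (1 = strongly disagree ... 5 = strongly agree).
SCALE_MAX = 5

def reverse_code(response, scale_max=SCALE_MAX):
    """Flip a response on a 1..scale_max scale (e.g., 2 becomes 4 on a 5-point scale)."""
    return scale_max + 1 - response

# One respondent's raw answers; assume the third item is negatively
# worded ("I do NOT trust this website"), so it must be reverse coded.
responses = [4, 5, 2]
reverse_items = {2}  # zero-based index of the negatively worded item

# Summated scale: add the (reverse-coded where needed) item responses.
summated = sum(
    reverse_code(r) if i in reverse_items else r
    for i, r in enumerate(responses)
)
print(summated)  # 4 + 5 + (6 - 2) = 13
```

Without the reverse coding, the negatively worded item would pull the total in the wrong direction and understate the respondent's trust.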

15 Three Criteria for Good Measurement
◗ Sensitivity
◗ Reliability
◗ Validity

16 Reliability
The degree to which measures are free from random error and therefore yield consistent results; an indicator of a measure's internal consistency.
◗ E.g., a student who scores 80% on the first test should score close to 80% on all subsequent tests (otherwise there is a lack of reliability in the tests, or the student is not performing the same each time).
Internal Consistency
Represents a measure's homogeneity, or the extent to which each indicator of a concept converges on some common meaning; measured by correlating scores on subsets of the items making up a scale.
◗ E.g., an attempt to measure trustworthiness may require asking several similar but not identical questions, as shown in the exhibit on how much the consumer trusts a website.

17 Internal Consistency
Split-half Method
A method for assessing internal consistency by checking the results of one half of a set of scaled items against the results from the other half.
Coefficient Alpha (α)
The most commonly applied estimate of a multiple-item scale's reliability; represents the average of all possible split-half reliabilities for a construct.
◗ Scales with α between 0.80 and 0.95 are considered to have very good reliability.
◗ Scales with α between 0.70 and 0.80 are considered to have good reliability.
◗ Scales with α between 0.60 and 0.70 indicate fair reliability.
◗ If α is below 0.60, the scale has poor reliability.
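Coefficient alpha can be computed directly from item scores. A minimal sketch, with an invented three-item, five-respondent response matrix, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
# Hypothetical responses; not data from the chapter.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of respondent scores per scale item (all the same length)."""
    k = len(items)
    sum_item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's summated score
    return k / (k - 1) * (1 - sum_item_vars / pvariance(totals))

# Five respondents answering a three-item trust scale (1-5 points each).
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 2, 4, 3]

alpha = cronbach_alpha([item1, item2, item3])
print(round(alpha, 2))  # about 0.89: "very good" by the thresholds above
```

Intuitively, when items move together across respondents, the total-score variance dwarfs the summed item variances and alpha approaches 1.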

18 Test-Retest Reliability
Test-retest Method
Administering the same scale or measure to the same respondents at two separate points in time to test for stability; represents a measure's repeatability.
Measures of test-retest reliability pose two problems:
◗ The premeasure, or first measure, may sensitize or alert the respondents and subsequently influence the results of the second measure.
◗ If the time between measures is long, there may be an attitude change or other maturation of the subjects.
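Test-retest reliability is commonly assessed by correlating the same respondents' scores across the two administrations. A small sketch with invented scores, using a hand-rolled Pearson correlation to keep it self-contained:

```python
# Hypothetical scale scores for five respondents at two points in time.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test1 = [12, 15, 9, 14, 11]   # summated scale scores at time 1
test2 = [13, 14, 10, 15, 10]  # same respondents, some weeks later

r = pearson_r(test1, test2)
print(round(r, 2))  # a high r suggests a stable, repeatable measure
```

Note the caveats from the slide still apply: a high correlation could partly reflect respondents remembering their earlier answers, and a low one could reflect genuine attitude change rather than an unreliable scale.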

19 Validity
The accuracy of a measure, or the extent to which a score truthfully represents a concept.
◗ Does a scale measure what it was intended to measure?
 E.g., effort may well lead to performance, but effort probably does not equal performance.
Establishing Validity
To assess validity, the researcher attempts to provide some evidence of a measure's degree of validity by answering a variety of questions:
◗ Is there agreement or consensus that the scale measures what it is supposed to measure?
◗ Does the measure correlate with other measures of the same concept?
◗ Does the behavior expected from the measure predict actual observed behavior?

20 Validity (cont'd)
Four basic approaches to establishing validity:
Face Validity
A scale's content logically appears to reflect what was intended to be measured.
◗ When the test items match the definition, the scale has face validity (it refers to subjective agreement).
Content Validity
The degree to which a measure covers the breadth of the domain of interest.
◗ E.g., if an exam is supposed to cover chapters 1-5, it is fair for students to expect that questions should come from all five chapters rather than just one or two. It is also fair to assume that the questions will not come from chapter 6.
Criterion Validity
The ability of a measure to correlate with other standard measures of similar constructs or established criteria.
◗ It addresses the question "How well does my measure work in practice?"
◗ Criterion validity is sometimes referred to as pragmatic validity; in other words, "Is my measure practical?"
Construct Validity
Exists when a measure reliably measures and truthfully represents a unique concept.

21 Validity (cont'd)
Construct validity consists of several components, including:
1. Face validity
2. Content validity
3. Criterion validity
4. Convergent validity
5. Discriminant validity
Convergent Validity
Another way of expressing internal consistency; highly reliable scales contain convergent validity.
◗ E.g., in business we believe customer satisfaction and customer loyalty are related. If we have measures of both, we would expect them to be positively correlated. If we found no significant correlation between the two measures, there would be no evidence of convergent validity.
Discriminant Validity
Represents how unique or distinct a measure is; a scale should not correlate too highly with a measure of a different construct.
◗ E.g., a customer satisfaction measure should not correlate too highly with a loyalty measure if the two concepts are truly different. If the correlations are too high, we have to ask whether they really are different concepts, or whether satisfaction and loyalty are actually one concept.
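The satisfaction/loyalty reasoning above reduces to inspecting a correlation from both directions. A sketch with invented scores; the 0.90 cutoff used for "too high" is an assumption for illustration, since the chapter gives no numeric threshold:

```python
# Hypothetical per-customer scale scores.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

satisfaction = [3.2, 4.1, 2.8, 4.5, 3.9]
loyalty      = [3.4, 3.9, 3.0, 3.8, 4.2]

r = pearson_r(satisfaction, loyalty)

# Convergent validity: related constructs should correlate positively.
print(r > 0)
# Discriminant validity: distinct constructs should not correlate too
# highly (0.90 is an assumed cutoff, not one the chapter specifies).
print(r < 0.90)
```

Here both checks pass: the measures converge (positive correlation) while remaining distinguishable, which is the pattern the slide describes for two related but separate concepts.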

22 Reliability versus Validity
Reliability is a necessary but not sufficient condition for validity: a reliable scale may not be valid.
◗ E.g., a purchase intention measurement technique may consistently indicate that 20% of those sampled are willing to purchase a new product. Whether the measurement is valid depends on whether 20% of the population actually purchases the product.

23 Sensitivity
A measurement instrument's ability to accurately measure variability in stimuli or responses; generally increased by adding more response points or adding scale items.
◗ For example, expanding the response options to "very satisfied," "satisfied," "neither satisfied nor dissatisfied," "not satisfied," and "not satisfied at all" will increase the scale's sensitivity.

