Presentation on theme: "Forrest V. Morgeson III, Ph.D. Barbara Everitt Bryant, Ph.D."— Presentation transcript:
1 Does Interviewing Method Matter? Comparing Consumer Satisfaction Results across Internet and RDD Telephone Samples
Forrest V. Morgeson III, Ph.D., Director of Research, American Customer Satisfaction Index
Barbara Everitt Bryant, Ph.D., Research Scientist-Emerita, University of Michigan
Reg Baker, President, Market Strategies International
Presented at the 66th Annual American Association for Public Opinion Research Conference
2 Discussion Agenda
Overview: Research Questions and Findings
The American Customer Satisfaction Index (ACSI)
Extant Research on Interviewing Method Differences
Data and Analysis Methods
Results and Findings
Conclusions and Implications
3 Research Questions and Findings
Research Questions: Does interview method matter? Do the results produced in a multi-industry consumer satisfaction study differ significantly across a sample collected through RDD/probability sampling and telephone interviewing, and one collected via online panel/nonprobability sampling and Internet interviewing?
Research Design: We utilize a multi-method sample of consumer satisfaction data, structural equation modeling techniques, and two tests of difference to investigate the significance of differences in survey responses across samples drawn and interviewed using these two methods.
Findings: While some differences are observed, interview method only marginally impacts the means of the survey responses or the parameter estimates from the structural models. Overall, the findings suggest that mixed-method interviewing is feasible and reliable for consumer-oriented survey research projects.
4 Discussion Agenda
Overview: Research Questions and Findings
The American Customer Satisfaction Index (ACSI)
Extant Research on Interviewing Method Differences
Data and Analysis Methods
Results and Findings
Conclusions and Implications
5 Overview of the ACSI
Established in 1994, the ACSI is the only standardized measure of customer satisfaction in the U.S. economy, covering approximately 225 companies in 45 industries and 10 economic sectors; the companies measured account for roughly one-third of U.S. GDP.
More than 100 departments and agencies of the U.S. federal government are also measured on an annual basis, along with local and state government measures.
Results from all surveys are published monthly in various media and on the ACSI website.
6 Structure of the ACSI
[Diagram: the national ACSI aggregates sector, industry, and company-level scores. Sectors and the industries measured within each:]
Utilities: Energy Utilities
Information: Newspapers; Motion Pictures; Broadcasting TV News; Software; Fixed Line Telephone Service; Wireless Telephone Service; Cable & Satellite TV
Accommodation & Food Services: Hotels; Limited-Service Restaurants; Full-Service Restaurants
E-Business: News & Information; Portals/Search Engines; Social Networking
Public Administration/Government: Local Government; Federal Government
Finance & Insurance: Banks; Life Insurance; Health Insurance; Property & Casualty
Transportation & Warehousing: Airlines; U.S. Postal Service; Express Delivery
Health Care & Social Assistance: Hospitals
Manufacturing/Durable Goods: Personal Computers; Electronics (TV/VCR/DVD); Major Appliances; Automobiles & Light Vehicles; Cellular Telephones
Manufacturing/Nondurable Goods: Food Manufacturing; Pet Food; Soft Drinks; Breweries; Cigarettes; Apparel; Athletic Shoes; Personal Care & Cleaning Products
Retail Trade: Supermarkets; Gasoline Stations; Department & Discount Stores; Specialty Retail Stores; Health & Personal Care Stores
E-Commerce: Retail; Brokerage; Travel
7 The ACSI Model and Methodology
In the ACSI methodology, customer satisfaction is embedded in a system of relationships and analyzed as part of a structural equation model. The model produces two critical pieces of data useful to researchers and firms/agencies:
The model provides mean scores (on a 0-100 scale) for each measured composite or latent variable
The model provides parameter estimates (or path coefficients) indicating what most strongly influences satisfaction, and in turn how satisfaction influences future consumer behaviors
[Model diagram: paths run Customer Expectations → Perceived Quality, Expectations → Perceived Value, Expectations → Satisfaction, Quality → Value, Quality → Satisfaction, Value → Satisfaction, Satisfaction → Customer Complaints, Satisfaction → Customer Loyalty, and Complaints → Loyalty. Indicators: Expectations and Quality (overall, customization, reliability); Value (price given quality, quality given price); Satisfaction (overall satisfaction, confirm/disconfirm expectations, comparison with ideal); Complaints (complaint behavior); Loyalty (repurchase likelihood, price tolerance/reservation price).]
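The production ACSI model is estimated with a proprietary structural approach, so the following is only a simplified sketch of the general idea using simulated data and invented coefficients (not the actual ACSI estimation): path coefficients among standardized composites can be illustrated with ordinary least squares regressions of each downstream composite on its predecessors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated composite scores for one hypothetical industry sample.
# True generating paths: Expectations -> Quality -> Value, Expectations -> Value.
expectations = rng.standard_normal(n)
quality = 0.8 * expectations + 0.6 * rng.standard_normal(n)
value = 0.5 * quality + 0.2 * expectations + 0.7 * rng.standard_normal(n)

def standardize(x):
    return (x - x.mean()) / x.std()

def path_coefficients(y, *predictors):
    """OLS coefficients of a standardized composite on its predecessor composites."""
    X = np.column_stack([standardize(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, standardize(y), rcond=None)
    return beta

# Paths into Value: Quality -> Value and Expectations -> Value.
b_quality_value, b_expect_value = path_coefficients(value, quality, expectations)
```

With enough simulated respondents, the recovered coefficients land near the generating values, which is the sense in which the path coefficients in the slides summarize "what most strongly influences" a downstream variable.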
8 ACSI Data Collection
Each year, including all private sector, public sector, and custom research projects, ACSI collects approximately 125,000 interviews of consumers.
From 1994 through 2009, nearly all of these data (with a few exceptions for e-commerce companies) were collected over the telephone using random-digit-dial (RDD) probability sampling and CATI.
Beginning in 2010, and following pilot testing that produced promising results, ACSI moved to a multi-method interviewing approach, with roughly half the data for any measured company/government agency collected using RDD probability sampling and CATI, and the other half collected from a nonprobability panel of double opt-in respondents interviewed online.
9 Discussion Agenda
Overview: Research Questions and Findings
The American Customer Satisfaction Index (ACSI)
Extant Research on Interviewing Method Differences
Data and Analysis Methods
Results and Findings
Conclusions and Implications
10 Extant Research
While a handful of studies comparing results for samples interviewed online to samples interviewed over the telephone exist,* these studies have focused almost exclusively on political opinions, voter preferences, and similar topics.
There remains very little research into what differences (if any) are likely to be observed across these two interviewing methods for consumer-oriented data, where a significant portion of data collection and survey research is focused.
*Chang, L. and J.A. Krosnick (2009). "National Surveys via RDD Telephone Interviewing Versus the Internet: Comparing Sample Representativeness and Response Quality," Public Opinion Quarterly, 73(4), 641-678.
Fricker, S., M. Galesic, R. Tourangeau and T. Yan (2005). "An Experimental Comparison of Web and Telephone Surveys," Public Opinion Quarterly, 69(3).
Vannieuwenhuyze, J., G. Loosveldt and G. Molenberghs (2010). "A Method for Evaluating Mode Effects in Mixed-Mode Surveys," Public Opinion Quarterly, 74(5).
11 Findings from the AAPOR Online Task Force
Findings from the AAPOR Online Task Force* suggest that there is no theoretical basis for assuming that samples drawn from nonprobability online panels are representative of a larger population, and that results may therefore differ when compared to an RDD probability sample interviewed over the telephone.
However, this research also concludes there may be instances in which online panels are useful and reliable, and we conduct a series of empirical tests to see whether customer satisfaction data (ACSI) is such a case.
*Baker, R. et al. (2010). "Research Synthesis: AAPOR Report on Online Panels," Public Opinion Quarterly, 74(4), 711-781.
12 Discussion Agenda
Overview: Research Questions and Findings
The American Customer Satisfaction Index (ACSI)
Extant Research on Interviewing Method Differences
Data and Analysis Methods
Results and Findings
Conclusions and Implications
13 Research Questions
From the perspective of the ACSI project and its methodology, two questions regarding multi-method interviewing are most relevant and important:
Do mean scores exhibit significant differences between a sample interviewed online and a sample interviewed using RDD/CATI?
Do model parameter estimates exhibit significant differences between a sample interviewed online and a sample interviewed using RDD/CATI?
14 Data
To answer our research questions, we utilize a sample of data consisting of approximately 9,000 interviews.
Roughly half of these cases were collected via Internet interviewing, from a sample balanced to Census demographics drawn from a large online panel (the Research Now panel); the other half were collected using RDD and CATI, allowing us to test the similarities/differences produced by these two interviewing methods.
The ACSI model (shown earlier) was estimated independently for each industry and each interviewing method, producing distinct mean scores and estimates (path coefficients) facilitating these comparisons.
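Balancing an online panel sample to Census demographics is typically done with weighting. As a minimal, hypothetical sketch (the categories and target shares below are invented, not the study's actual balancing scheme), each respondent can be weighted by the ratio of the population share to the sample share of their demographic cell:

```python
import numpy as np

# Hypothetical Census targets for one demographic dimension (age group).
census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Toy sample of 100 panel respondents, skewed toward the middle group.
sample = ["18-34"] * 20 + ["35-54"] * 50 + ["55+"] * 30

n = len(sample)
sample_share = {g: sample.count(g) / n for g in census_share}

# Post-stratification weight: population share divided by sample share.
weights = np.array([census_share[g] / sample_share[g] for g in sample])
```

After weighting, the weighted age-group mix matches the targets and the weights average to 1, so weighted means remain on the same scale as unweighted ones. Real balancing usually rakes over several dimensions at once rather than one.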
15 Data
The data represent consumer responses to questions measuring satisfaction (and the other modeled variables) with companies and industries in six NAICS sectors (for more information on the companies included in the sample, see Appendix A):
Apparel manufacturing (Manufacturing/nondurable goods)
Personal computers (Manufacturing/durable goods)
Fast food restaurants (Food services)
Insurance (Finance and insurance)
Supermarkets (Retail)
Wireless phone service (Information)
16 Tests of Difference
To test for significant differences in mean scores across the two interviewing methods for each ACSI variable in each industry included in the sample, independent-samples t-tests were utilized.
To test for significant differences in parameter estimates for the structural model for each industry included in the sample, chi-square difference tests were utilized, with parameters constrained to equality across samples; a significant chi-square statistic indicates a significant difference in parameter estimates.
17 Discussion Agenda
Overview: Research Questions and Findings
The American Customer Satisfaction Index (ACSI)
Extant Research on Interviewing Method Differences
Data and Analysis Methods
Results and Findings
Conclusions and Implications
18 Results and Findings
Across all of the tests, which included comparisons of 36 sets of mean scores across the two interviewing methods and 54 sets of model parameter estimates, some significant differences were observed.
In total, 36% of the mean scores (13 of 36) compared across the two modes exhibited significant differences. Scores skewed higher on the Internet, with 9 of the 13 significant differences reflecting "better" ratings among Internet respondents (i.e., higher ratings, fewer complaints).
Moreover, 39% of the model parameter estimates (21 of 54) from the structural models compared across the two methods exhibited significant differences.
(Two industry examples follow. All test results are provided in Appendix A.)
19 Example 1: Supermarket Industry Results
Supermarket Industry: Mean Scores
Variable | Tele. N | Tele. Mean | Int. N | Int. Mean | Sig. Diff.
Expectations | 784 | 79.08 | 790 | 80.24 |
Quality | | 80.43 | | 79.43 |
Value | 783 | 76.54 | | 77.34 |
Satisfaction | | 76.38 | | 75.59 |
Comp. (%) | 782 | 10.87 | 788 | 10.53 |
Loyalty | | 76.37 | 786 | 82.60 | ***
Supermarket Industry: Path Coefficients
Path | Tele. | Internet | Sig. Diff.
Expect. → Quality | 0.776 | 0.833 |
Quality → Value | 0.528 | 0.629 |
Expect. → Value | 0.196 | 0.111 |
Value → Sat. | 0.444 | 0.481 |
Quality → Sat. | 0.372 | 0.505 | **
Expect. → Sat. | 0.195 | 0.051 |
Sat. → Comp. | -0.286 | -0.308 |
Comp. → Loyalty | 0.045 | -0.016 |
Sat. → Loyalty | 0.616 | 0.638 |
For the tests for this industry, one variable mean score of the six tested was significantly different across the two samples, while two of nine parameter estimates were significantly different.
*All variables scaled 0-100, worse to better rating; "Sig. Diff." column reports significant differences between the Telephone and Internet interview samples; * = p<.05; ** = p<.01; *** = p<.001.
20 Example 2: Wireless Industry Results
Wireless Industry: Mean Scores
Variable | Tele. N | Tele. Mean | Int. N | Int. Mean | Sig. Diff.
Expectations | 475 | 75.69 | 490 | 80.94 | ***
Quality | 478 | 75.88 | 493 | 78.80 | *
Value | 470 | 72.70 | 488 | 71.54 |
Satisfaction | | 71.17 | 492 | 71.20 |
Comp. (%) | | 30.95 | 485 | 21.65 | **
Loyalty | 473 | 69.31 | 462 | 74.14 |
Wireless Industry: Path Coefficients
Path | Tele. | Internet | Sig. Diff.
Expect. → Quality | 0.775 | 0.56 | **
Quality → Value | 0.85 | 0.998 |
Expect. → Value | 0.042 | -0.058 |
Value → Sat. | 0.457 | 0.529 |
Quality → Sat. | 0.48 | 0.476 |
Expect. → Sat. | 0.053 | 0.005 |
Sat. → Comp. | -0.621 | -0.601 |
Comp. → Loyalty | -0.033 | -0.037 |
Sat. → Loyalty | 0.942 | 0.96 |
For the tests for this industry, four of the variable mean scores exhibited significant differences, with scores skewing higher (and the complaint rate lower) on the Internet, and two of the parameter estimates exhibited significant differences.
*All variables scaled 0-100, worse to better rating; "Sig. Diff." column reports significant differences between the Telephone and Internet interview samples; * = p<.05; ** = p<.01; *** = p<.001.
21 Results and Findings
The above are "hard tests" of multi-method interviewing. As many projects (including ACSI) have not traded telephone-only for Internet-only interviewing, a "fairer" test is to compare the telephone interview results to the mixed-method, mixed-sample results.
For these tests, the results are more promising. Looking only at differences in mean scores, of the 36 sets of means compared, only 11% (4 of 36) exhibited significant differences.
(Two industry examples follow. Full results for these tests are included in Appendix A.)
22 Example 3: Mixed-Sample vs. Telephone-Only
Mean Scores (PC Industry row labels inferred from the variable order used throughout)
Variable | Mixed N | Mixed Mean | Tele. N | Tele. Mean | Sig. Diff.
Apparel Industry
Expectations | 957 | 83.99 | 475 | 84.14 |
Quality | | 85.12 | | 86.33 |
Value | 958 | 82.28 | | 84.05 |
Satisfaction | | 81.26 | | 83.16 | *
Comp. (%) | 955 | 1.47 | | 0.63 |
Loyalty | 950 | 79.78 | 473 | 79.52 |
PC Industry
Expectations | 1156 | 83.51 | 556 | 82.94 |
Quality | 1157 | 82.40 | | 81.44 |
Value | 1153 | 82.49 | 553 | 82.35 |
Satisfaction | | 78.81 | | 77.78 |
Comp. (%) | 1147 | 12.64 | | 15.91 |
Loyalty | 1158 | 74.05 | 557 | 71.76 |
*All variables scaled 0-100, worse to better rating; "Sig. Diff." column reports significant differences between the Mixed-Sample and Telephone interview samples; * = p<.05; ** = p<.01; *** = p<.001.
23 Discussion Agenda
Overview: Research Questions and Findings
The American Customer Satisfaction Index (ACSI)
Extant Research on Interviewing Method Differences
Data and Analysis Methods
Results and Findings
Conclusions and Implications
24 Conclusions
While some differences in both mean scores and model parameter estimates are exhibited when comparing telephone-only interviewing to Internet-only interviewing, the differences account for a minority of the comparisons in both cases.
The results are even more promising when comparing mean scores for telephone-only and mixed-method interviewing; only a small fraction of the comparisons are significantly different in this case.
25 Implications and Future Research
These tests provide evidence for the feasibility and reliability of mixed-method sampling for consumer-oriented survey research projects.
For projects working with this kind of data, both mean scores and model estimates appear to be relatively stable across interviewing methods.
However, because we examine only consumer-oriented data, those working with dissimilar types of data should perform tests similar to ours to examine the reliability of mixed-method interviewing, as results may vary.
Research expanding the types of data tested should help market researchers determine the feasibility of multi-method interviewing for particular client engagements.
26 Appendix A: Supplemental Results and Information
27 Interview Data by Industry/Company
Apparel: Liz Claiborne; VF Corporation; Levi Strauss; Jones Apparel Group; Hanesbrands
Personal Computers: Compaq; Apple; Hewlett Packard; Dell; Acer
Fast Food: Wendy's; KFC; Little Caesar Enterprises; Domino's; Taco Bell; Pizza Hut; Burger King; McDonald's; Papa John's; Starbucks
Insurance: Farmer's Group; Allstate; State Farm; Geico; Progressive; MetLife; Prudential; New York Life; Northwestern Mutual Life
Supermarkets: Publix; Winn-Dixie; Supervalu; Safeway; Wal-Mart; Kroger; Whole Foods
Wireless Service: Verizon; AT&T; Sprint Nextel; T-Mobile
28 Apparel and PC Industries Results
Mean Scores (PC Industry row labels inferred from the variable order used throughout)
Variable | Tele. N | Tele. Mean | Int. N | Int. Mean | Sig. Diff.
Apparel Industry
Expectations | 475 | 84.14 | 482 | 83.83 |
Quality | | 86.33 | | 83.93 | *
Value | | 84.05 | 483 | 80.54 | **
Satisfaction | | 83.16 | | 79.39 |
Comp. (%) | | 0.63 | 480 | 2.29 |
Loyalty | 473 | 79.52 | 477 | 80.05 |
PC Industry
Expectations | 556 | 82.94 | 600 | 84.03 |
Quality | | 81.44 | 601 | 83.28 |
Value | 553 | 82.35 | | 82.63 |
Satisfaction | | 77.78 | | 79.76 |
Comp. (%) | | 15.91 | 594 | 9.60 |
Loyalty | 557 | 71.76 | | 76.17 |
Path Coefficients (one PC Industry Internet value was lost in extraction and is left blank)
Path | Tele. | Internet | Sig. Diff.
Apparel Industry
Expect. → Quality | 0.625 | 0.778 | **
Quality → Value | 0.721 | 0.847 |
Expect. → Value | -0.031 | -0.020 |
Value → Sat. | 0.449 | 0.350 | *
Quality → Sat. | 0.415 | 0.553 |
Expect. → Sat. | 0.069 | 0.051 |
Sat. → Comp. | -0.034 | -0.057 |
Comp. → Loyalty | -0.216 | 0.014 |
Sat. → Loyalty | 0.772 | 0.908 |
PC Industry
Expect. → Quality | 0.690 | 0.636 |
Quality → Value | 0.924 | |
Expect. → Value | 0.027 | -0.104 |
Value → Sat. | 0.397 | 0.419 |
Quality → Sat. | 0.551 | 0.630 |
Expect. → Sat. | 0.074 | -0.041 |
Sat. → Comp. | -0.547 | -0.475 |
Comp. → Loyalty | -0.016 | -0.002 |
Sat. → Loyalty | 0.971 | 1.155 |
*All variables scaled 0-100, worse to better rating; "Sig. Diff." column reports significant differences between the Telephone and Internet interview samples; * = p<.05; ** = p<.01; *** = p<.001.