Tell Me More: Data Quality, Burden, and Designing for Information-Rich Web Surveys
Lilian Yahng, Derek Wietelman, and Lauren Dula, Indiana University

BACKGROUND
The 2016 Survey of Interprofessional Education (IPE) Activities was a census (N = 5,941) of current faculty and volunteer practitioners in the health sciences schools at Indiana University, designed to gauge involvement in programmatic "interprofessional education" (IPE) over the previous three years. RR2 = 12.3% (15.2% for faculty; 9.2% for volunteer practitioners). Median duration = 10.7 minutes. 71-item Web instrument, consisting chiefly of open-ended questions (with a shorter 10-item path for respondents screened out of IPE activities).

BURDENSOME TASKS
Difficulty of tasks: detailed recall and description of hours, session frequency, participant numbers by rank by school, funding, location, objectives, and outcomes. The battery repeated for up to 3 IPE activities, making the instrument less a "classic survey" than a reporting tool. It used a default design skin "optimized" for mobile, but the tasks were essentially suited to desktop/laptop use; 14% of respondents used a smartphone and 5% a tablet. Limited time and resources meant qualitative interviews were not an option. These questionnaire challenges offered an opportunity to test the efficacy of "motivating" language, following prior research suggesting that instructions can help response quality (e.g., Holland & Christian, 2009; Smyth et al., 2009).

METHODS
Research questions. Does motivating (or explicitly grateful) language on a burdensome survey:
- reduce breakoffs?
- reduce missing data?
- increase response length (character count)?
- increase the number of unique themes and elaborations? (in progress)
Experimental treatment. Additional language was inserted before the most text-intensive item (IPE objectives) for each respondent-reported IPE, up to 3:
- "Thank you very much for providing this valuable information." (first IPE)
- "Thank you again for taking the time to provide such rich data. It is most appreciated!" (second IPE)
- "This information will be most helpful in understanding the breadth of IPE activity at IU." (third IPE transition screen)
- "Once again, thank you. You are almost done!" (third IPE)
Character count. Paired t-tests on responses (n = 187), with and without classification into early (pre-median) or late (post-median) responders. Paired t-tests were also applied to each IPE individually, with and without the time classification (see the first sketch below).
Breakoffs and missing data. Spot-check for breakoffs before the treatment. Item nonresponse could be identified only on numeric items (not open ends) due to a programming limitation. An unequal-variances test was also performed on the missing data (see the second sketch below).
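A minimal sketch of the character-count analysis described under METHODS, written in Python. It assumes a pandas DataFrame of the usable responses with hypothetical columns ("submitted", "treated", "chars_before", "chars_after"); these names are illustrative and not taken from the survey dataset.

import pandas as pd
from scipy import stats

# Hypothetical export of the usable responses (column names are assumed).
df = pd.read_csv("ipe_responses.csv")

# Classify responders as early (pre-median) or late (post-median) by submission time.
df["submitted"] = pd.to_datetime(df["submitted"])
df["timing"] = (df["submitted"] > df["submitted"].median()).map({False: "early", True: "late"})

# Paired t-test on character counts before vs. after the treatment point, run for
# each treatment-by-timing group; dropping "timing" from the groupby reproduces
# the analysis without the early/late classification.
for (treated, timing), grp in df.groupby(["treated", "timing"]):
    t, p_two_sided = stats.ttest_rel(grp["chars_before"], grp["chars_after"])
    # One-sided p-value for the hypothesis that counts rise after the treatment
    # point (the t statistic is negative when the "after" mean is larger).
    p_one_sided = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2
    print(f"treated={treated} timing={timing} n={len(grp)} t={t:.3f} p(one-sided)={p_one_sided:.4f}")

Repeating the same call with per-set character-count columns (first, second, or third reported IPE) would mirror the per-IPE breakdown shown in the results table.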
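The unequal-variances comparison of numeric missing data could look like the following sketch, again with assumed column names ("treated" as the condition flag, "missing_numeric" as a per-respondent count of blank numeric items). Passing equal_var=False to scipy's two-sample t-test requests Welch's version, which does not assume equal group variances.

from scipy import stats

treat = df.loc[df["treated"] == 1, "missing_numeric"]
control = df.loc[df["treated"] == 0, "missing_numeric"]
t, p = stats.ttest_ind(treat, control, equal_var=False)  # Welch's t-test
print(f"Welch t = {t:.3f}, p = {p:.4f}")

Subsetting df by the number of IPE activities reported (one, two, or three) before running the test would mirror the breakdown reported under RESULTS.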
RESULTS
Breakoffs before the treatment (n = 289) generally occurred directly after screening out of IPE (n = 92); these are suspected to be largely "ineligible" cases, given that the census included non-faculty volunteers not engaged in IPE programs. The treatment had no significant effect on numeric missing data, regardless of whether the respondent reported one IPE activity (t(177) = .15, p = .87), two IPEs (t(47) = -.62, p = .54), or all three IPEs (t(32) = -.49, p = .63). No tests proved significant for the character-count analyses, except marginally for late responders at the second IPE set (see table below).

Mean total characters before and after the treatment point, by IPE set, condition, and responder timing (paired t-tests, one-sided p-values):

Group                       Before    After    Diff.       t    p (one-sided)    n
First Set: Treat Early      209.51   274.56    65.04   -1.149       .1284       45
First Set: Treat Late       160.12   170.82    10.70   -0.273       .3928       57
First Set: Control Early    183.90   325.66   141.76   -1.168       .1242       50
First Set: Control Late     157.43   164.91     7.49   -0.192       .4245       35
Second Set: Treat Early      56.27    73.56    17.29   -1.306       .0992
Second Set: Treat Late       49.02   105.09    56.07   -1.304       .0987
Second Set: Control Early    57.98    63.16     5.18   -0.294       .3851
Second Set: Control Late     49.80    76.54    26.74   -1.817       .0391
Third Set: Treat Early       32.27    41.33     9.07   -1.120       .1344
Third Set: Treat Late        22.00    24.74     2.74   -0.588       .2796
Third Set: Control Early     31.96    22.90    -9.06    1.257       .8926
Third Set: Control Late      38.31    27.09   -11.23    1.663       .9473
(Before/After = mean total characters before/after treatment; Diff. = difference in means; n reported where available.)

CONCLUSIONS
Some evidence that the encouraging language affected "late" responders, slightly increasing written response length, but only on the second reported IPE. There was no effect on the first reported IPE, possibly because respondents had not yet caught on to the battery pattern or otherwise did not feel overburdened, and none on the third, perhaps because respondents were too fatigued by then for the encouragement to make a difference. Thematic coding is in progress: an overall increase in the number of themes on the second IPE but a decrease on the third, even as character counts grew progressively shorter; the treatment effect is still TBD. An intriguing class of "bypasser" respondents emerged: in the bottom 10% of character counts yet agreeing to a possible follow-up. This is not the typical cognitively lazy or learned underreporting ("Too much detail in this survey").

REFERENCES
Dykema, J., Jones, N., & Stevenson, J. (2013). Surveying clinicians by Web. Evaluation and the Health Professions, 36(6).
Holland, J., & Christian, L. (2009). The influence of topic interest and interactive probing on responses to open-ended questions in web surveys. Social Science Computer Review, 27(2).
Smyth, J., Dillman, D., Christian, L., & McBride, M. (2009). Can increasing the size of answer boxes and providing extra verbal instructions improve response quality? Public Opinion Quarterly, 1-13.