1
13. Survey Research Part 1 This is a PowerPoint Show
Open it as a show by going to “slide show”. Click through it by pressing any key. Focus & think about each point; do not just passively click.
© Dr. David J. McKirnan, 2014, The University of Illinois Chicago. Do not use or reproduce without permission.
Center for Epidemiologic Studies, National Institute of Mental Health
2
13. Survey Research Part 1
We will address five topics in the Survey modules; Part 1 will address the first four:
Topic areas & formats
General issues in Survey research
Sources of bias (or fraud…) in survey research
Examples of surveys
Testing hypotheses with surveys
3
Survey Research
✓ Topic areas & formats
General issues in Survey research
Sources of bias (or fraud…) in survey research
Examples of surveys
4
What do surveys measure?
Knowledge
Information re: current events, political or consumer choices
Awareness: e.g., of public health resources, government decisions...
Attitudes and Beliefs
Preferences or evaluations: e.g., attitudes toward racial or ethnic groups, consumer preferences...
Beliefs about political or social events: “which party provides the strongest security for the U.S.…?”
Feelings or moods: quality of life, depression / anxiety, marital satisfaction...
Behavior
Behavioral intentions: intent to vote, financial plans, exercise goals.
Self-reports of previous or ongoing behavior: voting in the last election, alcohol and drug use, exercise patterns.
5
Survey research; General uses of surveys
Survey methods have a wide range of applications, from single-item consumer satisfaction questions (“How useful did you find this web site?”)…
…to full-fledged, theory-driven behavioral research. For convenience we will consider 5 categories:
Descriptive research
Testing hypotheses
Testing the generalizability of experimental results
Predicting an event or outcome
Pragmatic / applied questions
6
Uses of surveys; descriptive research
Epidemiology is the study of how behaviors, disease states, or similar issues are distributed across the population. Epidemiology uses many methods, such as standard crime or disease reporting. Even Google search data can be used to track topics such as interest in healthier foods. Number of Google searches for “gluten” and “probiotic” over time. (Click image for article)
7
Uses of surveys; descriptive research
Epidemiology is the study of how behaviors, disease states, or similar issues are distributed across the population. Epidemiology uses many methods, such as standard crime or disease reporting. Google Trends and other sites are valuable adjuncts to health monitoring for tracking, e.g., heroin overdose. Search data can also be used for less serious reasons…
8
Counting Google searches to assess social interest
Does the Chicago Cubs’ World Series quest inspire national interest? Google Search data during the MLB Division playoffs, 2016:
Interest in the LA Dodgers is limited to California & Nevada;
Interest in Cleveland is limited to Ohio;
Interest in the Toronto Blue Jays is limited to Canada;
The Cubs truly arouse national attention.
9
Uses of surveys; descriptive research
Epidemiology is the study of how behaviors, disease states, or similar issues are distributed across the population. Epidemiology uses many methods, such as standard crime or disease reporting. Google Trends and other sites are valuable adjuncts to health monitoring for tracking, e.g., heroin overdose.
Epidemiological studies often use direct survey methods, such as phone or face-to-face survey interviews, to assess:
Knowledge of, e.g., how to access health care…
Feelings or moods, such as the rate and distribution of depression…
Behavioral patterns, such as alcohol or drug use or gun ownership…
They may also use archival data, such as disease reporting.
10
The origins of epidemiology
Dr. John Snow’s Cholera map & the closing of the Broad St. pump. In 1854 a cholera outbreak raged through several poor neighborhoods of London. Sewage and other effluvia that ran through gutters created a dreadful smell (a “miasma”) that was blamed for the outbreak. The concept of infectious disease transmission through water supplies was not well understood. Click for Wikipedia article.
Dr. John Snow, one of the physicians charged with stopping the epidemic, noted a particularly fetid cesspool in front of 40 Broad St., proximal to a water pump used by the neighborhood. He decided to empirically map the cholera cases in the area. He proposed that water from the pump, not the miasma, was the cause of the outbreak. He was generally disbelieved, but convinced the town governors with his evidence. Only much later would the tracking of disease outbreaks be labeled ‘epidemiology’.
11
The origins of epidemiology
Dr. John Snow’s Cholera map & the closing of the Broad St. pump. Snow’s map showed the bulk of cases to be near the pump at 40 Broad St…. …and to radiate out from there. As he noted in his 1855 book: Click for TED talk by Steven Johnson.
"I had an interview with the Board of Guardians of St. James's parish, on the evening of Thursday, 7th September, and represented the above circumstances to them. In consequence of what I said, the handle of the pump was removed on the following day.” S. Johnson, The Ghost Map (2007), Riverhead Books.
By carefully describing the distribution of cases and the circumstances around the pump, Snow was able to empirically demonstrate a likely cause. His hypothesis was supported when the epidemic quickly subsided once the pump handle was removed.
12
Uses of surveys; descriptive research
Epidemiology
Political / social description is what we often think of as surveys:
Opinion polls about society, the government, or current events, e.g. Gallup Polls, or systematic studies by the Pew Charitable Trusts.
The Consumer Confidence Index is a highly standardized poll that is used for basic economic decision making.
The Census, of course, is our national information source.
13
Uses of surveys; descriptive research
Epidemiology
Political / social description.
Testing hypotheses
Assessing blocking variables: we often assess blocking variables to test how a given attitude or behavior varies across important social groups, e.g., gender, age group, ethnicity, geographic location…
14
Uses of surveys; descriptive research
Testing hypotheses
Assessing blocking variables
Here is a 15-year trend in trust in political leaders, blocked by self-reported political affiliation. Click for poll.
15
Uses of surveys; descriptive research
Epidemiology
Political / social description.
Testing hypotheses
Assessing blocking variables: we often assess blocking variables to test how a given attitude or behavior varies across important social groups.
Correlational studies: a key form of analysis is examining the association among different variables, e.g., what are the correlates of dieting…
16
Uses of surveys; descriptive research
Testing hypotheses
Assessing blocking variables
Correlational studies: a key form of analysis is examining the association among different variables.
The theoretical model we have been using is a good example of a correlational study framework. In a survey we may develop measures of each construct, and test the model through correlation analyses.
17
Uses of surveys; descriptive research
Epidemiology
Political / social description.
Testing hypotheses
Assessing blocking variables
Correlational studies
Examining the generalizability of experimental results, e.g., the Consumer Reports survey on therapy we discussed in quasi-experiments.
Predicting an event or outcome, e.g., election polling.
Pragmatic / applied questions, e.g., marketing or consumer surveys.
18
Who do we want to generalize to?
Surveys: populations. Who do we want to generalize to?
Our sampling frame is based on our hypothesis or empirical question.
Sampling involves a breadth vs. internal validity tradeoff.
Key dimensions:
Demographic: ethnic / age / gender groups, “all Americans”…
Behavioral: “likely voters”, alcohol users, home buyers...
Self-identification: Republicans / Democrats, “students”…
See Design and sampling overview. See also: diminishing validity of political polling.
19
How do we ask questions in a survey?
Survey questions (“items”) are the operational definition of your phenomenon: they cast a topic (stress, “morality”, an attitude…) as a specific, concrete statement or question, similar to the Dependent Variable in an experiment.
“Closed-ended” items: rating scales, agree / disagree, checklists…
Highly structured, specific questions; “top-down”.
Clearest operational definition.
“Open-ended” items: fill in the blank, open writing, listing…
More general & person-based; “bottom-up”.
More difficult to capture a specific phenomenon.
20
Question Formats: Closed-ended items
Closed-ended items use a specific rating scale or a highly structured prompt. These are most reliable for concrete, specific behaviors or attitudes. An attitude can be assessed in several ways:
Direct (face valid) assessment:
Research methods is a wonderful course… Does not agree at all … Strongly agree
Behavioral (content valid) indicators:
How many times this semester have you skipped class? ______
How many hours per day do you spend reading the material? ______
Researchers typically use the Mean (…average) of several related items to create an attitude scale. Scales are more reliable measures than are single items.
21
“Closed-ended” items, cont.
Example: the Center for Epidemiological Studies Depression inventory (CES-D). The Mean score of these 9 items is often used as a depression scale.
Moods & Feelings
Below is a list of different feelings. Circle the number that shows how many days you felt each of these over the PAST WEEK:
0 = Rarely or none of the time (less than 1 day)
1 = A little of the time (1 or 2 days)
2 = A moderate amount of the time (3–4 days)
3 = Most or all of the time (5–7 days)
I was bothered by things that usually do not bother me.
I felt I could not shake off the blues even with help from my friends or family.
I had trouble keeping my mind on what I was doing.
I felt depressed.
I felt that everything I did was an effort.
My sleep was restless.
I was happy. (reversed in the final score)
I enjoyed life. (reversed in the final score)
I felt sad.
Score = sum of item ratings / 9.
22
“Closed-ended” items, cont. Or we may count the number of symptoms. A scale like this typically has a cut point: moderate depression is defined as 4+ symptoms.
(Same Moods & Feelings items as above; # of symptoms = items rated 2 or 3.)
Center for Epidemiologic Studies, National Institute of Mental Health
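The scoring rules on these slides (a mean of the 9 item ratings with the positive items reversed, and a symptom count with a 4+ cut point) can be sketched in code. This is a minimal, hypothetical sketch, assuming items are rated 0–3 and that the two positively worded items (“I was happy”, “I enjoyed life”) are the reversed ones:

```python
# Hypothetical CES-D-style scoring sketch. Assumptions (not spelled out on
# the slides): items are rated 0-3, and the two positively worded items
# (indices 6 and 7: "I was happy", "I enjoyed life") are reverse-scored.
REVERSED = {6, 7}

def _adjusted(ratings):
    """Reverse-score the positive items so higher always means more depressed."""
    return [3 - r if i in REVERSED else r for i, r in enumerate(ratings)]

def mean_score(ratings):
    """Continuous measure: mean of the 9 (adjusted) item ratings."""
    adj = _adjusted(ratings)
    return sum(adj) / len(adj)

def symptom_count(ratings):
    """Count a symptom for each item rated 2 or 3 ('3-4 days' or more)."""
    return sum(1 for r in _adjusted(ratings) if r >= 2)

def is_depressed(ratings, cut_point=4):
    """Categorical measure: moderate depression = 4+ symptoms (the slide's cut point)."""
    return symptom_count(ratings) >= cut_point
```

The same set of ratings thus yields either a continuous score (useful for correlational analyses) or a categorical classification (useful for group comparisons).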
23
“Closed-ended” items: standard scales
Standard scales like the CES-D are used in many studies & diverse populations, so their properties are well understood. Scores on the CES-D have been shown to predict scores on longer and more systematic depression inventories. This allows a brief, survey-sized measure to be used as an indicator of depression. As we have seen, the scale can yield:
Continuous measures of depression: Mean (average) ratings, or symptom counts ranging from 0 to 9. These are good for correlational analyses (how strongly does depression correlate with drug use…).
Or a categorical measure (depressed – non-depressed), based on the scale cut-point. This is appropriate for quasi-experiments that, e.g., compare the depressed to the non-depressed group on a variable such as risk taking.
24
Evaluating our measures: Reliability and Validity.
Reliability
If we are assessing a stable characteristic (IQ, personality, temperament, core values…), participants should show similar scores over time, or even across the life span. If they do not, our measure may be unreliable. For a more transient characteristic (mild depression) we expect scores to change over time.
Validity
Our survey or scale must actually measure what we designed it to measure. There are several ways we think about validity, each getting at a different element…
25
Test - retest; Are responses consistent over time?
Tests for Reliability
Test-retest: Are responses consistent over time?
Assume a stable attribute, e.g., “temperament”. If the measure is reliable, participants should show similar scores across time, e.g., at baseline and after a year. If scores do change over time, there are two possibilities:
Our measure is not reliable (there is a lot of “error” variance in scores), so we should reconsider how we are assessing the construct.
The very (hypothetical) construct we are studying may not be valid (e.g., “temperament” may not be a single, stable construct…), and we should reconsider our theory.
Split-half: Do different sets of items give us the same scores?
Multi-item scales are designed with overlapping or converging items. If a 10-item scale such as the depression inventory is reliable, the first 5 items should yield scores similar to the last 5.
26
Descriptive research: Validity
Face validity
A scale or item appears “on its face” to measure what it is designed to: “How dependent are you on heroin?”, “How much did you enjoy…”. The item is “intuitively valid”; it clearly addresses the topic. Note that a strongly face valid item (heroin use…) may push for socially desirable responses (“No no, not me…”).
Content validity
The measure assesses all key components of a topic or construct:
A simple skill index: assess computer skills by having a job applicant write a simple program.
“Depression” may be best assessed by measuring Knowledge, Attitudes & Moods, and Behavior; a content valid scale would address all of these.
Exams: for research methods we should test all the core skills for research design…
Predictive validity
The measure validly predicts a hypothesized outcome: e.g., I.Q. is a moderately good predictor of college & job success, criminality, etc. The “Big Five” personality inventory is a moderate to good predictor of responses to different challenges or social contexts.
27
Do people who use predictive tests in business know or care about their scientific validity?
Some widely used predictive measures – that themselves have become big business – predict little or nothing. The Myers-Briggs Type Indicator (MBTI) is used to predict job success in 89 of the Fortune 100 companies. Knowing which of the 16 MBTI “types” you are may feel good to you and your prospective boss, but has no reliability or validity. Click for a good overview of reliability & validity, and the MBTI.
28
Descriptive research: Validity (2)
Construct validity
Tests whether the hypothetical construct itself is valid (differs from other constructs, corresponds to the measures or outcomes it should…). E.g., “anxiety”, “depression”, and “anger” may not be separate constructs, but may all be part of “negative affectivity”.
Also tests whether the measure addresses the construct it was designed for: e.g., measures of social support (“do you have people who care for you?”) are often strongly influenced by depression, a separate construct…
“Ecological” validity
The measure corresponds to how the construct “works” in the real world; this is the external validity of an assessment device. See Focus Module 10 for External / Ecological Validity.
29
“Closed-ended” items: standard scales
Because the CES-D has multiple items developed over many studies, the scale has high Test-Retest Reliability: scores taken at one time generally are consistent with scores from another time.
The scale also has high Split-Half and Inter-item reliability: different items correlate highly with other (sets of) items.
The scale has high Face Validity: items clearly “map onto” or directly assess the concept of depression.
…Content Validity: different aspects of depression (self-perception, behavior, sleep) are all assessed.
…and Predictive Validity: scores correlate with (predict...) both larger, clinic-based assessments of depression and depression-related outcomes such as job performance.
30
“Closed-ended” items: standard scales
Because the CES-D has multiple items developed over many studies, the scale has high Test-Retest Reliability, high Split-Half and Inter-item reliability, high Face Validity, Content Validity, and Predictive Validity…
…but mixed Construct Validity: depression scores tend to correlate with scales of anxiety, anger, and social isolation. This may be a measurement problem (survey measures of any negative mood correlate with other negative mood scales), or the hypothetical construct of depression may genuinely overlap with other constructs such as anxiety and anger.
31
Closed-ended items, summary
Chief virtue: a clear operational definition of our study variable(s).
Items are specific & concrete; we know exactly what the participant is responding to.
Face or content valid items can be written that correspond exactly to our empirical questions or hypotheses.
Items or scales can be easily tested for internal reliability.
Quantitative scales can be used directly for statistical analyses.
Chief liability: potential insensitivity.
Items are typically brief & simply worded; potentially superficial.
Quantitative surveys are “top-down”; the participant often cannot indicate what issues are of personal interest.
Unlike in qualitative research, attitude or mood scales may not be sensitive to participants’ personal perspectives.
Surveys that mix quantitative with qualitative items can enhance sensitivity, although qualitative items can be difficult to analyze.
32
Survey formats; Open-ended items
Textual / qualitative responses are more sensitive to the participant:
What have you enjoyed most about your methods class so far?
Please list the three things that first come to mind when you think of Research Methods in Psychology...
Highly sensitive measures of, e.g., social support may have the participant write the initials of the 5 people closest to him/her, then describe the forms of support each provides. This approach is easily combined with quantitative measures, via rating scales for each person…
By analyzing themes that emerge in participants’ open-ended descriptions (e.g., of their voting decisions, or of what they consider socially supportive...), researchers can discover how important social processes work “on the ground”, rather than imposing a hypothesis or expectation on participants’ answers. As we saw in the module on descriptive research, these open-ended data can be used to develop quantitative measures or analyses.
33
Textual / qualitative responses;
Survey formats; Open-ended items
Textual / qualitative responses are more sensitive to the participant, but more difficult to collect and interpret:
People simply do not like to write. “Fill in the blank” or “Please write a brief paragraph…” items often get minimal responses.
Social groups differ in their writing ability or comfort. Data may thus be biased toward those with higher education or those more willing to write.
34
Survey formats; Open-ended items
Textual / qualitative responses; More sensitive to the participant More difficult to collect and interpret: People simply do not like to write. Analysis often involves subjective judgments on the part of the researcher. Researchers use computer programs and their own judgment to specify the major themes in a body of textual data, by developing and applying thematic codes. Thematic codes are then used to show associations among different issues or topics, e.g., in specifying the personal characteristics that participants most commonly associate with social support. Since coding must involve researchers’ judgments, it can run the risk of researchers “discovering” in the data the themes they expected to see in the first place. There are, however, a variety of methods researchers use to improve and test the reliability of coding.
35
Mixed survey formats combine a closed-ended attitude scale, an open-ended qualitative description, and a simple behavioral index. Here is an example of a mixed question format from a survey of women’s sexual practices.
Personal Safer Sex Guidelines
How strict are your personal guidelines or rules for safer sex (e.g., condom use, “safe relationships,” etc.)?
Not at all Strict / Somewhat Strict / Very Strict / Extremely Strict
What are your rules for safer sex?
Have you ever refused to have sex with someone to stay safe?
Never / Once or twice / A few times / Many times
36
Mixed survey formats allow researchers to “triangulate in” on a topic or empirical question. Here the researcher is assessing different components of women’s decision making.
The general attitude scale assesses participants’ subjective sense of the importance of personal safety.
The open-ended item allows women to make a personal statement that the researchers may not have expected or assessed. For example, many participants cited “a single close relationship” or a “non-sexual relationship” as their primary safety strategy. A proportion of participants wrote in “no rules”, an unanticipated and important response. Of course, leaving the item blank cannot be interpreted: does it mean “no rules” or “did not want to write an answer…”?
37
Mixed survey formats allow researchers to “triangulate in” on a topic or empirical question. Here the researcher is assessing different components of women’s decision making.
The general attitude scale assesses participants’ subjective sense of the importance of personal safety.
The open-ended item allows women to make a personal statement that the researchers may not have expected or assessed.
The behavioral item provides a concrete index of whether maintaining safety has been problematic enough to influence an important behavior.
Responses to these items do not necessarily cohere perfectly, but the combination allows the researcher to portray participants’ behavior more sensitively than a single item or type of item could.
38
Marketers have used multi-format surveys to rap with customers for years!
Click to Dig!
39
Note: Comics were beginning to reflect larger social issues in the late 1960s. Marketers were genuinely uncertain about emerging topics of interest among their young readers. (“Black people”?)
40
Survey topics & item types
SUMMARY
Surveys assess:
Knowledge
Attitudes or preferences
Ongoing or intended behavior
Closed-ended formats: highly structured, easy to analyze; potentially insensitive. (“Psychology 242 is a wonderful course…”)
Open-ended formats: more sensitive to the participant; potentially ambiguous or difficult to analyze. (“…list the three things that first come to mind…”)
Surveys typically use multiple items and employ several formats.
41
Survey Research
Topic areas & formats
✓ General issues in Survey research
Sources of bias (or fraud…) in survey research
Examples of surveys
42
Forms of survey administration
Self-report questionnaire
“Paper and pencil” or internet-based.
Primarily closed-ended, structured questions; limited open-ended items.
Assumes at least a moderate reading level.
Cheap & easy to administer.
Internet: representativeness very dubious.
Face-to-face interview
“Door step”, formal research center, or telephone (the telephone version is becoming obsolete).
Allows in-depth qualitative questions.
Many studies combine questionnaire & interview formats.
All data collection is increasingly computer-based.
43
General issues in surveys
Cost / population access
Different methods are more or less likely to reach certain populations, e.g.:
Disenfranchised / poor populations are often not reached by internet or telephone.
Cell phones & avoidance of telemarketers mean less availability for telephone surveys.
Stigmatized populations are less available for face-to-face interviews, more available via internet.
Participant sophistication
Participants may not be able to accurately report on certain topics:
Attitudes toward stem cell research from readings…
What factors are most important to your choice of political candidate…
Describe the amounts and types of proteins you eat during a typical week…
“Rationality bias”: many questions (incorrectly?) assume a rational reason for behavior:
Why do you have unsafe sex…
What is your chief reason for using alcohol each night…
44
Social Desirability Responding
Clear, face-valid items addressing embarrassing topics yield less valid responses:
How often are you dishonest with your friends?
Have you ever cheated on an exam…?
High social-desirability wording elicits inaccurate responses:
Do you support protecting our Nation’s forests for future generations? (Does “yes” mean you are an “environmentalist”?)
Do you feel there are ways your husband could be closer…? (Does “yes” mean you are unhappy in your marriage?)
Populations differ in social desirability responding; that difference may be a confound in studying group differences. Women report more suicidal thoughts, but may simply be more willing to disclose them, creating a possible confound…
Desirability can be minimized by:
Anonymous surveys
Assurances of confidentiality
Computer administration (no personal interaction)
Careful wording / pilot testing of items
45
Social desirability responding
Click image for NY Times article. Do people lie on surveys? Men routinely report more sex partners than do women. If the sample is unbiased by gender, the number of partners reported should balance for men & women.
Social desirability hypothesis: women underestimate partners; men overstate partners.
Much of the difference is due to a high proportion of women who report 1 partner, and a few men who report many partners.
Possible sample bias (confound?) in who responds to such surveys? Click for article from phys.org.
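The “should balance” claim is an accounting identity, and a toy simulation (hypothetical numbers, not survey data) shows why: in a closed population, every heterosexual partnership adds exactly one partner to a man’s count and one to a woman’s, so with equal group sizes the true means must be equal. Any gap in the survey means therefore reflects reporting bias or sampling bias, not reality.

```python
import random

def simulate(n_men=1000, n_women=1000, n_partnerships=5000, seed=1):
    """Randomly form partnerships; return the mean partner count per group.
    All numbers are illustrative, not real survey data."""
    men, women = [0] * n_men, [0] * n_women
    rng = random.Random(seed)
    for _ in range(n_partnerships):
        men[rng.randrange(n_men)] += 1      # one partner added to a man...
        women[rng.randrange(n_women)] += 1  # ...and one added to a woman

    return sum(men) / n_men, sum(women) / n_women

# With equal group sizes the means are forced to be equal (5000/1000 each),
# no matter how unevenly the partnerships are distributed across people.
```

The distribution can still be very skewed (a few people with many partners, many with one), which is exactly the pattern the slide describes in the reported data.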
46
General issues in surveys: Time Frames
Rare(r) events require a long time frame to assess:
When was your last doctor’s visit…
These questions assess the last time you left a romantic relationship…
Longer-term recall can be surprisingly unreliable: recall of the last doctor visit is highly unreliable when checked against medical records.
A shorter time frame yields more reliable responding: memory is better for more recent events. “Exit interviews” from medical visits are far more reliable than even 2-week retrospective measures.
Current, concrete behaviors are more accurately reported than are behavioral trends: “In general, how often do you miss a dose of your medication?” is less reliable than “Let’s go over each of the past 7 days and tell me if you took or missed your medication dose.”
47
General issues in surveys: Question Order
Questions trigger participants’ memory or attention, and can bias questions that follow, e.g.:
“Do you think Social Security & Medicare payments have kept up with inflation…?” followed by
“Do you favor or oppose Democratic efforts to expand Medicare payments…?”
Bias can be limited by counterbalancing questions: using different question orders in different versions of the survey.
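The counterbalancing step described above can be sketched as a small helper: generate several versions of the survey with different question orders, then rotate participants through them. This is a hypothetical sketch; the function names are illustrative, not from the slides.

```python
import random

def counterbalanced_forms(questions, n_forms, seed=0):
    """Build n_forms versions of the survey, each with a different
    (randomly shuffled) question order."""
    rng = random.Random(seed)
    forms = []
    for _ in range(n_forms):
        order = list(questions)  # copy, so the original order is untouched
        rng.shuffle(order)
        forms.append(order)
    return forms

def assign_form(participant_id, forms):
    """Rotate participants through the forms so each order is used about
    equally often, letting order effects average out across the sample."""
    return forms[participant_id % len(forms)]
```

For a small number of key questions, full counterbalancing with `itertools.permutations` (one form per possible order) is an alternative to random shuffles.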
48
General Issues Summary
Survey administration
The internet is increasingly important as a self-report method.
Face-to-face interviews are more common in clinical research.
Time frames & question order can influence responses.
Population access & sophistication
Some groups are difficult to reach, creating a threat to External Validity.
The assumption that participants understand survey materials is often questionable.
Social desirability responding
Inhibited responding threatens Internal Validity.
May represent a confound if groups differ in desirability set.
49
Survey Research
Topic areas & formats
General issues in Survey research
✓ Sources of bias (or fraud…) in survey research
Examples of surveys
50
Bias / Fraud in survey research
Social research is increasingly important to political & cultural debates:
Effects of gay marriage…
Political “approval” ratings…
Scientific consensus on global warming…
Research on working mothers…
Pressure for confirmatory results encourages bias or outright fraud (see this week’s article on Opinion Polls):
In the study structure: the items used, the sample.
In the interpretation of results: “cherry picking”, simple distortion.
51
Opposition to gay marriage
Example of a fraudulent survey: does having a gay/lesbian parent harm children?
Opposition to gay marriage
In early court cases gay marriage opponents cited religious doctrine as the basis for continued discrimination against GLBTQ marriage. In several cases judges struck down religious doctrine as a basis for disallowing gay marriage, citing the separation of church and state.
Opponents thus sought evidence that gay marriage causes civil harm to justify continued discrimination. The prospect that gay marriage may harm children emerged as a key issue. Multiple well-conducted studies (here) and a policy statement by the American Psychological Association (here) report no harm to children whose parent(s) are gay.
April DeBoer and Jayne Rowse … after closing arguments in their challenge to Michigan’s marriage restriction. (Click image.) Mandi Wright/Detroit Free Press, via Associated Press
In marked contrast, a 2012 survey by Mark Regnerus appeared to show that children in households with a gay/lesbian parent(s) fare much worse than children of heterosexual parents.
52
Example of fraudulent survey use
The Regnerus survey was cited extensively in a nationally recognized case in Detroit (as well as other cases, up to today…). Gay marriage opponents lost, in part due to the debunking of the Regnerus study.
April DeBoer and Jayne Rowse … after closing arguments in their challenge to Michigan’s marriage restriction. (Click image.) Mandi Wright/Detroit Free Press, via Associated Press
Click image for a more sympathetic view of the Regnerus study.
The study was funded & cited widely by gay marriage opponents.
The study sample and the interpretation of results were wildly biased: many children classified as being raised in gay-led households did not, in fact, reside with a gay parent for at least a year (click).
The study has been disavowed by Regnerus’ own academic department and the American Sociological Association.
53
Bias in survey research: Leading or biased items
Sources of survey fraud: question wording can elicit a response desired by the researcher:
How much do you support the administration’s actions to protect you and your children from terrorists…
Wording can “normalize” a response, e.g.:
When do you feel that it is O.K. to cheat on an exam?
…when I really do not know the material
…when others are doing it
…when I think the exam is unfair
Vague wording can be interpreted in a biased fashion:
Is there anything your husband could do to be more intimate with you?
“Push” polls: a survey can be used to actually create an attitude.
54
Push Polls: push polls attempt to induce or change attitudes in the guise of a poll, rather than simply assess attitudes. They change attitudes in two ways:
While taking the poll, participants are given “information” (typically distorted or even fraudulent ‘facts’) designed to shift opinion.
The “results” of the poll are publicized to influence opinion more broadly.
Push Poll disinformation example: the 2000 Republican primary in S. Carolina is one of the most extreme examples of using a wildly dishonest (and racist) push poll to poison opinion. As part of a smear program, operatives aligned with the George W. Bush campaign blanketed the state with ostensible polls regarding John McCain (click the image for an interesting article on those politics).
55
Push Poll disinformation example.
Push Polls: Push Poll disinformation example. McCain, a former prisoner of war in Vietnam, and his wife had adopted a child from Mother Teresa's orphanage in Bangladesh. The push poll took this information and twisted it into these and similar smear questions:
1. "Studies have shown that John McCain is mentally unstable because of his time spent in prison camps during the Vietnam War. Would this make you more likely or less likely to vote for him?"
2. "John McCain has a mixed-race daughter that he had with a black prostitute. Would this make you more likely or less likely to vote for him?"
There were no "data": people were simply hired to ask the questions of any voter who reported a preference for McCain. The McCain campaign, leading by a wide margin at the beginning of the South Carolina race, never recovered from these attacks, and went on to lose the nomination.
56
They change attitudes in two ways:
Push Polls: Push polls attempt to induce or change attitudes in the guise of a poll, rather than simply assess attitudes. They change attitudes in two ways: While taking the poll, participants are given "information" (typically distorted or even fraudulent 'facts') designed to shift opinion; the "results" of the poll are then publicized to influence opinion more broadly. Using polls to disseminate misleading information: an effective way to change public opinion is to publish "data" about what most people think. Like it or not, our own views are strongly influenced by the majority. Push polls designed to produce one pre-ordained outcome can influence us by giving a gloss of science to highly manipulative statements of "the majority opinion" or "what most people think".
57
Using polls to disseminate misleading information.
Push Polls: Using polls to disseminate misleading information. Items are developed that "trap" the participant into endorsing a specific view. When the "data" are released, the biased wording is ignored. Publicity about the "findings" (e.g., by politically biased news organizations) is used to further create or change attitudes. Many political & social organizations use this strategy to…
Ostensibly measure attitudes objectively
Use the "results" to influence popular opinion.
Example of a highly biased survey: the Republican National Committee "Future of American Health Care" survey. The survey was distributed in several counties as part of a fund-raising letter. It is clearly a "push poll" designed to create fear of health care reform. It got limited distribution, but is a great example of a push poll.
58
“GOP health survey” push poll (2009 – 2010)
Some of these items are simple lies – or manipulative statements – designed to induce anti-health care attitudes. Others are powerful (and dishonest) fear manipulations.
59
Forms of survey bias: Provide leading or emotionally manipulative information to induce an attitude rather than simply measure it, to provide politically useful "data". Questions that, if you accept their assumptions, can only reasonably be answered one way…
60
Biased surveys: Democratic example
An example from the Democrats that is also used for fund-raising.
61
Democratic biased survey (2007)
Manipulative presentation of questionable information
Simple emotional manipulation
Distorted description that may be changed in presentation of findings
"Who could disagree" item
62
Summary: Manipulating attitudes by surveys
1. Ask manipulative or highly leading questions.
2. Find high levels of agreement (and potentially change participants' attitudes).
3. Publicize – and often distort or overstate – the "findings" via highly biased news sources.
4. News reports themselves lead to attitude change among people who are uncertain or uninformed.
63
Survey Research
✓ Topic areas & formats
✓ General issues in Survey research
✓ Sources of bias (or fraud…)
✓ Examples of surveys
64
Examples of surveys & data, 1
Consumer Reports survey of mental health care
Question: Satisfaction with therapy; differences between types of therapy.
Population: Self-identified group: U.S. mental health care users.
Sample: Self-selected convenience sample: readers who got therapy & returned the survey, n = 4,000.
Data: Attitudes & behavior; self-report questionnaire, cross-sectional.
Findings: Descriptive & hypothesis tests; high satisfaction for most treatments.
65
Examples of surveys & data, 2
"Monitoring the Future" youth studies
Question: Social behavior, academics, alcohol & drug use, health.
Population: Demographic group: all U.S. youth, 15–21 years old.
Sample: Random sample of high school health classes, n = 3,000–5,000.
Data: Knowledge, attitudes, and behavior; face-to-face interviews & questionnaires, longitudinal (bi-yearly).
Findings: Mostly descriptive; assess yearly trends/shifts in drugs, grades, emotional well-being.
66
Examples of surveys & data, 3
Gallup, Time/CNN, other polls
Question: Political opinions, lifestyle information, social attitudes (e.g., managed care).
Populations: Demographic: eligible voters, target age groups. Self-identified: "Democrats"… Behavioral: voters, ACA users… General: U.S. adult population.
Samples: National, random-digit-dial telephone, n = 150 to >500.
Data: Knowledge, attitudes, behavior; brief interview, cross-sectional.
Findings: Descriptive; ratings of politicians, consumer preferences, attitudes toward the Affordable Care Act.
67
Examples of surveys & data, 4
Exit polls
Question: Election outcome, possibly stratified by state/region.
Population: U.S. electorate; national and/or local electoral district.
Sample: Probability: stratified random sample of electoral districts.
Data: Self-reported behavior; self-report interview, cross-sectional.
Findings: Descriptive/predictive; increasingly inaccurate predictions. See reading on shifts in use of polling data in U.S. politics.
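The stratified random sampling used in exit polls can be sketched in a few lines of Python. Everything here is invented for illustration (the district names, stratum sizes, and the `stratified_sample` helper are hypothetical, not an actual polling procedure): each stratum contributes draws in proportion to its share of the population.

```python
import random

# Hypothetical illustration of stratified random sampling of electoral
# districts, as in exit polls. District names and stratum sizes are invented.
districts = {
    "urban":    [f"urban_{i}" for i in range(50)],
    "suburban": [f"suburban_{i}" for i in range(30)],
    "rural":    [f"rural_{i}" for i in range(20)],
}

def stratified_sample(strata, total_n, seed=0):
    """Draw districts from each stratum in proportion to stratum size."""
    rng = random.Random(seed)
    pop_size = sum(len(units) for units in strata.values())
    sample = []
    for name, units in strata.items():
        # Proportional allocation; rounding may shift totals slightly
        # for awkward stratum sizes.
        k = round(total_n * len(units) / pop_size)
        sample.extend(rng.sample(units, k))
    return sample

chosen = stratified_sample(districts, total_n=10)
print(len(chosen))  # 5 urban + 3 suburban + 2 rural = 10 districts
```

Proportional allocation is what keeps the sample representative: an all-urban convenience sample would misstate the electorate in exactly the way the "increasingly inaccurate predictions" line warns about.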
68
Examples of surveys & data, 5
"Social Issues Survey" of Chicago gay/lesbian community
Question: Stress & coping; alcohol & drug use; responses to HIV/AIDS.
Population: Self-identified gay, lesbian, & bisexual adults in Chicago.
Sample: Targeted multi-frame: community newspapers, organizations, & mailing lists, n = 3,500.
Data: Attitudes & behavior; self-report questionnaire, cross-sectional.
Findings: Descriptive & hypotheses: high experience of discrimination; less stress & alcohol-drug use than expected.
69
Examples of surveys & data, 6
National Institute on Drug Abuse Household Survey of Alcohol and Drug Use
Question: Alcohol-drug use and problems, treatment use, health effects.
Population: National: all U.S. adults.
Sample: Random, multi-stage: 1. census tract, 2. household, 3. any adult member; n > 4,000.
Data: Knowledge, attitudes & behavior; face-to-face interview, successive cross-sectional (every 5 years).
Findings: Typically descriptive: age & regional differences in substance use, trends over time in use & problems. Data often used for hypothesis-oriented secondary analyses (i.e., as archival data).
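The three-stage household design described above (census tract → household → adult member) can be sketched as nested random draws. All data here are invented, and the `multistage_sample` helper is a hypothetical illustration of the idea, not the survey's actual procedure:

```python
import random

rng = random.Random(42)

# Invented population: 100 tracts, 20 households each, 1-4 adults per household.
tracts = {
    f"tract_{t}": {
        f"hh_{t}_{h}": [f"adult_{t}_{h}_{a}" for a in range(rng.randint(1, 4))]
        for h in range(20)
    }
    for t in range(100)
}

def multistage_sample(tracts, n_tracts, n_households, rng):
    """Stage 1: sample tracts; stage 2: sample households within each
    sampled tract; stage 3: sample one adult per sampled household."""
    respondents = []
    for tract in rng.sample(sorted(tracts), n_tracts):            # stage 1
        households = tracts[tract]
        for hh in rng.sample(sorted(households), n_households):   # stage 2
            respondents.append(rng.choice(households[hh]))        # stage 3
    return respondents

sample = multistage_sample(tracts, n_tracts=10, n_households=5, rng=rng)
print(len(sample))  # 10 tracts x 5 households x 1 adult = 50 respondents
```

Multi-stage designs like this make a national random sample feasible: interviewers only travel to the sampled tracts, rather than to households scattered randomly across the whole country.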
70
Summary: Testing Hypotheses
Surveys typically use multiple items to measure each hypothetical construct.
Correlations among items tell us if they reliably measure the same construct.
We use mediating analyses to test hypotheses about correlations between constructs and to build or test theory.
Cross-sectional analyses are difficult to interpret: causal direction? third-variable problem.
Longitudinal analyses help us determine causal direction.
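The first two points above, using multiple items per construct and checking that they hang together, are usually quantified with an internal-consistency statistic such as Cronbach's alpha. A minimal sketch, using invented responses from five hypothetical respondents on a four-item scale (the data and the `cronbach_alpha` helper are illustrative only):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance
    of total scores). `items` is a list of per-item score lists, one score
    per respondent in the same order."""
    k = len(items)
    item_vars = [statistics.variance(item) for item in items]
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

# Four invented items scored 1-5 by five respondents; the items roughly
# track a single construct, so alpha should be high.
items = [
    [3, 4, 2, 5, 1],
    [3, 5, 2, 4, 1],
    [2, 4, 3, 5, 1],
    [3, 4, 2, 5, 2],
]
print(round(cronbach_alpha(items), 2))  # 0.96
```

High alpha (conventionally above about 0.7) supports treating the item set as one reliable measure of the construct; a low alpha suggests the items are not measuring the same thing.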