Good Science vs Bad Science in Speech-Language Pathology: Understanding the Difference and the Implications. With assistance from C. Sagan, M. Shermer, S.J. Gould, Eric Hoffer, J.P. Baker, J. Damico, and others.
Case Study: Autism. CLAIMS: There is an epidemic of autism; mercury poisoning causes autism; vaccines cause autism (due to mercury poisoning); the Measles, Mumps, Rubella (MMR) immunization causes autism (Wakefield's "leaky gut").
Assertions: Over 20 years, a dramatic increase in the reported incidence of a severe neurodevelopmental disorder, AUTISM, from 1:2,500 to 1:88. Designated "a mysterious upsurge" and "a baffling outbreak" in prevalence, resulting in AN AUTISM EPIDEMIC. From a medical or a psychological perspective there was some surprise and various attempts at explanation: colitis/"leaky gut"/gastrointestinal dysfunction (Wakefield); environmental poisonings (lead, methylmercury); vaccinations (thimerosal); earlier diagnosis; better tools and diagnostic procedures.
Autism Epidemic: Public outcry in the mass media. The California DDS caseload increased by 273% from 1987 to 1998. M.I.N.D. Institute: "No evidence a loosening in the diagnostic criteria contributed to the increased number of persons with autism served by DDS." Autism Society of America: the number of students with autism jumped 1,354% from 1991-1992 to 2000-2001. This gave impetus and urgency to a more curative approach; epidemics connote danger.
HOWEVER, with regard to the Autism Epidemic: it is not true. Nothing in the change in reported incidence was unpredicted by the scientific community, due to the following variables: a broader concept of autism (co-diagnosis; more and milder cases); recognition of autism among normally intelligent subjects; changes in diagnostic criteria (DSM-III 1980 to DSM-IV 1994); inclusion in IDEA in 1991; improved and earlier identification; more professional and public awareness; incorrect understanding and reporting of referral statistics (the height example from the M.I.N.D. Institute); methodological limitations of the available data.
Changes in the Diagnostic and Statistical Manual of Mental Disorders: Autism was placed into the DSM-III only in 1980, then modified in the DSM-IV (1994). From six mandatory criteria to 16 optional criteria (only half needed). Severe phrasing changed to more inclusive phrasing. Two diagnostic categories became five diagnostic categories, including two sub-threshold categories (PDD-NOS and Asperger's, requiring only 5/8 symptoms). These two account for nearly 75% of current diagnoses.
IDEA Inclusion: Autism was included in IDEA for the first time in 1991; prior to that, there was no child count for autism. 1991-1992 was an optional year (states could count or not). When new categories are viable, increases in usage occur: 1,354% for autism in an eight-year period; 5,059% for Traumatic Brain Injury; 663% for Developmental Delay in three years (1998-2001).
Mercury and Autism: Not demonstrated. At least eight safety review panels of the Institute of Medicine (part of the National Academy of Sciences) and at least four meta-analyses determined that the available data do not support the contention. Over 24 studies across the world have failed to show a link between mercury in vaccines and autism. Confusion arises from the timeline and an illusory correlation (a cognitive error): MMR is given at 12-15 months, and autism is typically identified at 18-19 months. There is also confusion between the effects of methylmercury and ethylmercury. Some countries removed thimerosal from vaccines over 15 years ago but still see comparable levels of autism; no US childhood vaccine (except some flu vaccines) has contained thimerosal since 2001.
POINT of the CASE STUDY: This case study is an example, in our own discipline, of bad scientific reasoning and beliefs (even pseudoscience) that have created false impressions, problematic service delivery, and inappropriate social policy.
Bad Science can even verge on Pseudoscience: a body of knowledge, belief, methodology, or practice that claims to be scientific but does not adhere to the tenets of science. It does not adhere to the scientific method; lacks supportive evidence; focuses on resemblances rather than cause-effect relations; lacks scientific plausibility; and lacks scientific rigor or status.
As a result it often employs: vague, exaggerated, or untestable claims; over-reliance on confirmation rather than refutation; a lack of openness to testing by other experts; and a lack of progress in theory development.
A Few Points about Bad Science: It persists due to the idea that "wishing makes it so." Bad science is embraced in exact proportion as good science is misunderstood. Knowledge lies on a continuum, not a binary choice, and scientific competence forms another continuum; one must strike a balance between the poles of these continua. At the heart of bad science is an absence of critical thinking.
Differences Between Good Science and Bad Science: In good science, conclusions are tentative and open to revision; hypotheses are framed so they are capable of being disproved; alternative ideas are confronted by systematic observation and analysis (experimental or qualitative); and skeptical scrutiny is not opposed. The opposite is true in bad science or pseudoscience.
Differences Between Good Science and Bad Science: In good science, conspiracies are not invoked to explain lack of advancement; there is an attitude of appreciation for human imperfection and fallibility; and the purveyor works in concert with his or her colleagues (not in almost complete isolation). The opposite is true in bad science or even pseudoscience.
Remember that in Science: When possible there must be independent confirmation of the "facts." You encourage substantive debate on the evidence by knowledgeable proponents of all points of view. Arguments from authority carry little weight. You should seek to spin out multiple hypotheses and explanations. If there is a chain of argument, every link in the chain must work (even the premise). One applies Occam's Razor: when faced with two alternatives, the simpler explanation is usually the best. Ideas are arranged so that they can be falsified.
Boundary Detection Kit: How reliable is the source of the claim? Does this source often make similar claims (a habit of going well beyond the facts; more than just an iconoclast)? Have the claims been verified by another reliable source, not just someone within the claimant's own belief circle? Who is checking the claims, and who is checking the checkers? Outside verification is crucial to good science.
Boundary Detection Kit: Has anyone, including (especially) the claimant, gone out of the way to disprove the claim, or has only confirmatory evidence been sought? Beware confirmation bias (the tendency to seek only confirmatory evidence); science proceeds by honest attempts at falsification, so look for and at disconfirmatory evidence. How does the claim fit with what we know about the world and how it works? An extraordinary claim must be placed into a larger context to see how and where it fits.
Boundary Detection Kit: In the absence of clearly defined proof, does the preponderance of evidence converge on the claimant's conclusion, or on a different one? Is the claimant employing the accepted rules of reason and the tools of research, or have these been abandoned in favor of others that lead to the desired conclusion? This is where honest scholarship comes in. Has the claimant provided a different explanation for the observed phenomena, or is it strictly a process of denying the existing explanation?
Boundary Detection Kit: If the claimant has proffered a new explanation, does it account for as many phenomena as the old explanation? Do the claimant's personal beliefs and biases drive the conclusions, or vice versa? The peer review system guards against this.
A Tool Kit to Detect the Difference. Ask the following questions: Does the claim lack the necessary theoretical characteristics? Theories arise from and are supported by systematic observation; theories result in claims specific enough for falsification; theories enable one to create specific testable conditions; theories are couched in tentative language. Does the claim lack acceptable support? Defense by authority is not an acceptable support mechanism. Does it sound far-fetched or too good to be true? Does it make sense given your knowledge of conventional science? Extraordinary claims need extraordinary evidence.
A Tool Kit to Detect the Difference: Does the claim come from a source dedicated to supporting this claim? Science starts with a null hypothesis and searches for evidence; it does not start with a positive hypothesis that is supported with questionable evidence and anecdotal reasoning. Are the claimants guarded about evidence and procedures? Science seeks to be open with methods and data so that refutation can occur. Are the supportive data of suspect or questionable quality?
A Tool Kit to Detect the Difference: Are claims typically announced through the mass media rather than through supportive professional mechanisms? Is there a concern by purveyors about suppression by authority? Is the claim oriented to, or intended to show, that there is something wrong with the norm? Any "yes" answers suggest that you are moving toward the bad-science pole of the continuum, or even toward pseudoscience.
A FURTHER DISCUSSION OF THIS ISSUE FROM A PRACTICAL PERSPECTIVE….. WHY?
A question for professionals in a scientific discipline. Simply put: Why do people continually cling to the principles and dictates of poor science and of pseudoscience?
"When ideas go unexamined and unchallenged for a long time, certain things happen. They become mythological, and they become very, very powerful." (E.L. Doctorow)
Why do SLPs or Parents Cling to Poor or Pseudoscientific Concepts, Knowledge Systems, or Practices? Some possible reasons: I. They don't know there are problems. II. It comes down to what they are comfortable with. III. They fool themselves. IV. They become too personally involved and too defensive. V. They don't care if there are problems.
I. They don't know there are problems with the ideas or practices: passive learners; too much faith in what others say; reactive, not proactive ("People contagion not calculation" — S. Johnston); a poor theoretical foundation; they don't keep up with the research; not as motivated to keep current.
II. It comes down to what they are comfortable with: what they learned in school (albeit a long time ago); a consumer approach to engagement; a flawed concept/theory/foundation (behaviorism, reduced complexity, modularity); being afraid to step into the breach (my personal example from an Istanbul incident); "what I know and use."
III. They Fool Themselves. It is easier to understand why people do this if you regard this "phenomenon" not merely as a set of practices but as a sociological belief system. Wood, Nezworski, Lilienfeld, & Garb (2003). What's Wrong with the Rorschach? (pp. 285-300). New York: J. Wiley & Sons.
Traps of Logic and Experience: Clinicians tend to think that clinical experience is more valuable than the consistency of theoretical arguments and research findings. This produces cognitive errors.
Cognitive Errors: Faith in Authority. "My teachers in school taught me…" Intellectual subservience to authority. Science being used to overcome such authority (the authority of the church and of ancient Greek philosophy) is what spurred the Enlightenment. The genetic fallacy: evaluating a claim's truth in terms of its origins.
Cognitive Errors: Social Proof. "Generations of SLPs can't be wrong…" Determining truth by what other people say and do. The ad populum fallacy (it must be right because it is so widely held); the ad antiquitem fallacy (it must be right because it has been held for so long); the tendency toward acquiescence (J.S. Damico, 1988).
Cognitive Errors: Clinical Validation & Testimonials. "In my clinical experience…" Clinicians easily fall prey to cognitive illusions: a propensity toward confirmation bias, illusory correlations, and misuse of the law of large numbers. They forget to ask, when validating, "As compared with what?" This can become fodder for "social proof" that leads to acquiescence.
Cognitive Errors: Anecdotal Evidence. "Let me tell you about one of my cases…" Vivid stories are more compelling and memorable, but they are illustrations, not evidence.* Anecdotes may not be typical; they lack comparisons or controls; they are suitably vague. (*In qualitative research, when cases are employed they are systematic and designed for potential refutation; anecdotes are not so designed.)
Cognitive Errors: Vivid Personal Experience. "I once had this incredible case…" The availability heuristic: the tendency for information highly accessible in memory to exert an impact on our perceptions. Hindsight bias: "Prediction is difficult, especially about the future" (Niels Bohr). We are more likely to remember the "hits" than the "misses."
Cognitive Errors: Cognitive Illusions. "I see it at work every day…" A propensity toward confirmation bias; illusory correlations; the tendency to believe something is repeatedly seen when it is not; the overpathologizing illusion (something that appears to identify many patients is seen as highly sensitive rather than invalid).
Cognitive Errors: Reinforcement. "I've been highly successful with it…" Behavior that is rewarded is likely to recur. In graduate school, supervisors say "… nice administrations." The practice relieves the pressure of decision making, makes documentation easier, and is quick and efficient; there is also social reinforcement from the like-minded.
Cognitive Errors: Self-Consistency. "It's impossible I've been wrong all these years…" Once we make a choice or take a stand, it becomes personal.
Cognitive Errors: Lack of Alternatives. "Who cares what the research says…" If clinicians believe there is no alternative, they continue as before. (My experience at an Istanbul conference.)
IV. Too Personally Involved and Too Defensive: They will not admit they may be wrong; will not listen to arguments or data; are more willing to save face than to change; and find the ad hominem argument too easy.
V. They don't care if there are problems: They have a different agenda for advancing the idea or practice, other than efficacy or truth; typically self-aggrandizement. Hoffer's work on the true believer is relevant here.
The True Believer (Eric Hoffer): Discussed how psychological and other states give rise to mass movements (large and small). Stated that a passionate obsession with external events or movements, or with the private lives of others, is an attempt to compensate for a lack of meaning in one's life. For these personalities (and most people) the mass movements are interchangeable; the key is not the movement itself but being part of a mass movement. It appeals to the poor, the disenfranchised, the megalomaniacal, and outliers.
The True Believer (Eric Hoffer): True believers strive to feel a part of something by "falling in" with a movement or trend. They invoke conspiracies to explain lack of advancement; tend toward paranoia; consider themselves geniuses and others incapable; feel unjustly persecuted and discriminated against; and feel a strong compulsion to focus attacks on the greatest thinkers or scientists. Science adopts an attitude of appreciation for human imperfection and fallibility; the true believer does not. The real purveyor of science and advancement works in concert with his or her colleagues (not in almost complete isolation).