1 Jeffery W. Rodamar, Workshop on Revisions to the Common Rule In Relation to Behavioral and Social Sciences, National Research Council, March 21-22, 2013

2 Disclaimer This presentation is intended to promote the exchange of ideas among researchers and policy makers. The views expressed in it are those of the author as part of ongoing research and analysis and do not necessarily reflect the position of the US Department of Education or any other Common Rule department or agency.

3 Disclaimer (continued) My perspectives:
- Regulator: “That C.Rule guy”—working every day with regulators, researchers, IRBs, program staff and study subjects.
- Interagency: Chair, Social and Behavioral Research Working Group; member of HSRIG, CDAC, Committee on Science’s Social Behavioral and Economic Subcommittee; SACHRP Ex Officio; ED liaison with the Federal Demonstration Partnership (FDP) and the President’s Commission for the Study of Bioethical Issues. Planning and Evaluation Service, US Department of Education.
- Legislation: Senior legislative staffer for a member of Congress, Education and Labor Committee.
- Researcher: College faculty member and social and behavioral researcher.
- Research consumer: Nonprofit policy advocacy group use of research data.
- Advocate of evidence-based practice: JERHRE board, Campbell Collaboration participant, member of AERA, American Statistical Association and PRIM&R, etc. Long-time critic of IRBs and regulations that needlessly hinder rigorous research.

4 1. IRB operations. 2. Researchers’ views. 3. Why it matters. 4. Empirically-informed regulation.

5 Common Complaints about IRBs
- Takes too long—the delays harm studies.
- Costs too much.
- Flawed review: IRBs don’t understand the research (e.g. qualitative research).
- Undermines research with requirements for informed consent, etc. “Don’t talk to the humans.”
- The “Nanny State”, “Ethical Imperialism”, and “Censorship” that make it impossible to do vital research.
- Moral hazard: making research impossible, biasing what research is conducted, driving research abroad.
- “And besides, there is no significant risk in social and behavioral research.”

6 Some problems with today’s Common Rule system
1. Science has changed—the Common Rule hasn’t. Multisite, multidisciplinary studies, big data, etc. Modernize.
2. Operational issues: reviews that too often:
– Fail to reflect real risks.
– Are inconsistent (unequal protections, slowing research, …).
– Lack coordination among IRBs in multisite studies.
– Lack transparency and accountability. When IRBs get it wrong there is often no appeal.
3. Is it working? Largely an “evidence-free zone”.

7 From the dawn of the modern scientific method: “Nothing is so firmly believed as that which is least known.” --Michel de Montaigne (1533-1592), Les Essais (bk. 1, ch. 32)

8 “Perhaps the greatest irony of the entire IRB debate is how little of it is informed by actual, reliable research about the facts of the problem… examination reveals that virtually no scientific evidence is brought to bear on any aspect of the debate about how IRBs function.” --Illinois White Paper, 2005, p. 18

9 Critic 1: “IRB review creates big burdens for researchers”? Principal investigators on federal grants report that they spend about 42% of their time on “administrative burden”. IRB-related burdens rank highest of 24 measured burdens. (Federal Demonstration Partnership (FDP) “Burden Survey”) Caveats: While the online survey had 6,295 respondents, note that: 1. Plenty of room for selection bias: 26% response rate. 2. Precision of measurement: IRB burden measures 3.5 on a 5-point scale (3 is “some burden”, 4 is “quite a bit”). In survey research it is common for respondents with no opinion or who are uncertain to select the middle point on a scale. 3. “IRB burden” has a measured burden similar to those for many other grant-related tasks, such as applying for grants, hiring and training staff, and providing performance reports to funders. In sum, the “IRB burden” finding is important, but means less than it appears to at first blush.

10 Critic 2: “IRB review is too costly to researchers”? Really? Data on the cost of IRB review as part of total research costs are scarce. (Most cost analyses focus on costs to institutions.) As a first approximation consider Emanuel et al (2003): in a scenario for a high-risk study of an experimental cancer treatment, all activities prior to initiation of human subjects research account for between 1.4% and 2.9% of total research time (and 2.3%-4.7% of study costs). Study recruitment is often far more costly, time consuming—and threatening to the ability to conduct research—than IRB review. (Chart: time required for IRB review—1.4% of study time for NIH studies; 2.9% for industry studies.)

11 Critic 3: “IRB review takes too long.” How long does it take? (Chart: mean number of calendar days to complete IRB approval, with reference lines at two weeks and one month; values shown: 14.8, 23.3, 27.6 and 48.1 days.) Most IRBs meet monthly, some more frequently, others less frequently. Source: Average of 2010 data for AAHRPP metrics and IRBNet Benchmark report.

12 Critic 3 continued: “Tail-risks”? Some studies encounter extreme delays—but why? Journal articles and blogs tell of studies that took much longer to clear—142 days, 252 days, 303 days, 966 days…. Data on review time—from both published articles and performance metrics—suggest that these are “black swans”: rare but important events. Averages (48 days from submission to approval by full IRB) can obscure big problems in the functioning of the Common Rule system. Extreme delays are important, but not what usually happens—raising the question “Why?” – Little is currently known about these “black swans”. Slow IRBs? Inept researchers? Risky studies? … – In extreme event analysis, the forces at work often differ from those behind the central tendency—in reworking regulations it is important to know what is happening here.
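The point that averages can obscure tail behavior is easy to demonstrate with a small simulation. The numbers below are invented for illustration—they are not the AAHRPP/IRBNet data cited above—but they show how, when a small share of reviews sits in a long right tail, the mean and the upper percentiles tell very different stories.

```python
import random

random.seed(1)

# Illustrative only: most reviews cluster around the reported ~48-day
# full-board average, but a few "black swan" studies take many months.
times = [random.gauss(48, 10) for _ in range(980)]       # typical reviews
times += [random.uniform(300, 1000) for _ in range(20)]  # rare extreme delays
times = [max(t, 1) for t in times]
times.sort()

mean_days = sum(times) / len(times)
median_days = times[len(times) // 2]
p99_days = times[int(len(times) * 0.99)]

print(f"mean:   {mean_days:.0f} days")    # pulled upward by the tail
print(f"median: {median_days:.0f} days")  # what the typical study sees
print(f"99th percentile: {p99_days:.0f} days")
```

With 2% of reviews in the tail, the median stays near 48 days while the 99th percentile lands in the hundreds; a regulator looking only at the mean sees neither story clearly.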

13 Not approved? Few studies are not eventually approved by an IRB. AAHRPP metrics report that 60% of those IRBs had not disapproved a single study in 2011 and about another 20% had disapproved just one. Fewer than one-in-ten of those IRBs had disapproved more than two studies that year. The average AAHRPP-affiliated IRB had 297 active protocols in 2011. – Disapproved studies are routinely resubmitted. – Abbott and Grady’s systematic review of empirical work on IRBs cites a publication reporting that over a 12-year period one IRB rejected no study proposals at all.

14 Big time-savings from modest changes in IRB operations. Yale University: at the research facilitation office, the time from intake through development to IRB submission and approval averaged 80 days in 2007. Now the same process takes 45 days. The IRB and research office cut 35 days from that timeline through efficiencies in both office and IRB processes. University of Maryland School of Medicine: in March of 2010 it took 35 days to obtain full IRB approval for a clinical trial from the school, which reviews more than 1,000 applications annually. After reorganizing IRB operations, that had dropped to 21 days (and was 13 days in February). University of Nebraska: “Q: How long does it take to review an application for IRB approval? A: It varies. The length of time depends both on the IRB and the investigator. In 2007, the IRB reviewed 539 protocols. At certain times of the year, a large volume of protocols appear in the reviewing queue. More complex protocols often take more time. So far in 2008, the mean length of time from submission to approval for simpler protocols was 18-24 days, and for more complex protocols, 63 days. Sometimes, the investigator needs to make modifications to the research and to the protocol as a result of review. The faster you make these changes, the less time it will take for approval.”

15 Delay? – At IRBNet institutions, a third of the time required for IRB approval is due to researchers’ omissions and errors in providing information needed for review and in responding to IRB comments. When delays occur, they are often caused by investigators failing to provide necessary information or responding tardily to IRB comments. – A major determinant of how long IRB approval takes is how often IRBs meet. (The mode is monthly, sometimes less frequently during holiday season, summer, etc.)

16 In sum: 1. Researchers rate IRB review about as burdensome as other basic grant-related activities, such as writing an annual performance report for the grant. 2. IRB review takes roughly 1.4% to 2.9% of total time devoted to the study. This estimate lumps IRB review with all other activities prior to study initiation. 3. The vast majority of studies are approved—commonly on first submission. 4. Most reviews are timely. From protocol submission to approval, expedited review is completed on average in less than 28 days; full-IRB review in about 48 days. (Most IRBs meet monthly.) 5. Extreme delays are statistically uncommon—and the causes are largely unstudied (e.g. risky studies, lagging researchers, “IRB ping-pong”, ….).

17 Critic 4: “Social and behavioral researchers hate IRBs.” Really?

18 What do researchers think? (continued) What did you say? Researchers and IRB members are notorious in the research ethics community for their low response rates in surveys on research ethics.* * (In 15 studies of researchers’ attitudes toward and/or experiences with IRBs there was an average response rate of 35%.)

19 What do researchers think? The following findings are based on surveys of researchers’ experiences with and attitudes toward research ethics review from: – 25 surveys of human subjects researchers (19 of US researchers, and 6 that include researchers outside the US). – Together these surveys draw on the responses of over 16,500 researchers in widely varied research fields and span more than three and a half decades.

20 Some surveys of researchers’ attitudes and experiences with IRBs
- FDP “Faculty Workload Survey” (2012): web survey, 6,295 participants (26% response rate).
- Keith-Speigel et al (2006) “What scientists want…”: 886 experienced biomed and SBR researchers (39% response rate).
- Reeser et al (2008): medical center; 115 respondents (33% response rate).
- Wisner et al (2011): psychiatric researchers; 125 respondents (35% response rate).
- Liddle & Brazelton (1996): psychology faculty at 10 public research institutions (25% response rate).
- DeVries, Anderson & Martinson (2006): 1,768 mid-career scientists who received NIH funding (response rates: 52% for midcareer; 43% for early career).
- Dyrbye et al (2008): medical education researchers; 83 respondents (76% response rate).
- Irvine (2011): online survey of 155 sexuality researchers (40% response rate).
- Pennell & Lepkowski (2009): 542 U Mich researchers (34% response rate).
- Ferraro et al (1993): views of the IRB at U ND; 112 faculty.
- “The Bell Study” (1998): over 2,000 respondents (including 491 IRBs).
- And …

21 While it is often alleged that social and behavioral researchers view IRB review as unnecessary, burdensome and harmful to research…. “Don’t talk to the people” “Nanny state” “Ethical imperialism” And …

22 What do researchers really think? The data do not support the conventional wisdom. While researchers see considerable room for improvement, they report strong support for the protection of human subjects principles embodied in the Common Rule, report that the review process tends to protect study subjects and improve studies, and report that IRB review is generally timely and appropriate. – In 89% of the 18 studies that ask researchers about their experiences with and attitudes toward IRB review, the majority of researchers—including in the largest and best designed studies—report positive experiences with their IRBs and positive attitudes toward the broader protection of human subjects system. Biomedical and social and behavioral researchers largely agree on this. – None of the studies report large differences between social and behavioral and biomedical researchers in their experiences with and attitudes toward IRBs and the protection of human subjects in research. Social and behavioral researchers have often been highly visible critics of the Common Rule system, including arguments that the regulation imposes a biomedical model of science on social and behavioral studies where, arguably, that model does not fit. However, study after study reports little or no difference in how biomedical and social and behavioral researchers view the IRB system, including in levels of satisfaction.

23 Researchers’ perspectives: Ideal IRBs—and Our IRB
Importance of elements in IRB performance (on a 7-point scale): “Ideal IRB” / “Our IRB” / Gap
- Timely review: 6.52 / 6.43 / 0.10
- Conscientious review of submitted protocols: 6.53 / 5.86 / 0.67
- Competency in distinguishing exempt from nonexempt research: 6.29 / 5.48 / 0.81
- IRB members don’t allow personal biases to affect their evaluations of the protocol: 6.44 / 6.17 / 0.27
- No preconceived biases against particular research topics: 6.14 / 5.45 / 0.69
- No preconceived biases against particular research techniques: 6.17 / 5.43 / 0.74
- Open to innovative approaches to conducting research: 6.18 / 5.28 / 0.90
- Does not use its power to suppress research that is otherwise methodologically sound and in compliance with federal policy whenever it perceives criticism outside the scientific community: 6.33 / 6.08 / 0.25
Cited from Keith-Speigel et al. in Reeser et al. (The Ideal/Our IRB gap doesn’t exceed 1 point, averaging about half a point.)

24 Medical and Social and Behavioral Researchers’ Attitudes toward IRB review at the University of Michigan, 2009 (Med / SBR):
- IRB serves a useful purpose: 88% / 83%
- IRB review adds protection: 90 / 88
- IRB is an ally in my research: 71 / 64
- The IRB-required changes were reasonable: 72
- IRB interprets regulations too strictly: 63 / 59
- IRB-required changes made it harder to achieve goals: 33 / 36
- IRB-required changes improved protections of human subjects: 44
- The IRB understands its mission: 88 / 84
Average Med/SBR gap of 3%.

25 In sum: Researchers generally approve of the IRB system—albeit with a desire for faster review, more responsive IRB staff, etc. Biomedical and social and behavioral researchers have very similar views of the IRB system—and both are interested in process improvements. There is no bright line between Med and SBR research, as noted by R. Levine for the National Commission.

26 The blurring border of Biomedical and Social-and-Behavioral Research A biomed/social-behavioral regulatory dichotomy is not practicable conceptually or empirically: It has always been blurred (e.g., Dr. Robert Levine, National Commission, studies of stress, etc.). Studies commonly include both biomed and SBR elements—both as a study topic and as information collection. Risks are changing with the increased scope of data, etc.

27 So… “Why must we closely regulate activities that have minimal probability of causing minimal harm, especially when the current regulations add so little protection at significant cost and burden?” --Scott Kim, (2009) “IRB Oversight of Minimal Risk Research: Are we seeing the Big Picture?” presentation

28 Say what? “Closely regulate”? Today most SBR does not require any IRB review under the Common Rule because: 1. It is “not human subjects research” as defined by the regulation, 2. It is not funded by a covered source, and/or 3. It falls under one or more of the exemptions. For example, surveys are exempt unless the private information collected is both sensitive (affecting employment, etc.) AND identifiable. Many sensitive studies, such as the Belfast Project study of Northern Ireland’s “Time of Troubles”, are not reviewed by an IRB. At the University of Michigan only about 4% of SBR studies go to full IRB. Of the SBR studies that go to the UM IRB, 35% are deemed exempt and another 61% are expedited. --Scott Kim, 2009

29 Coast-on-by at Colorado’s Coast IRB? Little data on whether IRB review matters.

30 OHRP Compliance Oversight: New Cases Initiated, 1990-2011. (Chart: new compliance cases by year. Source webinar: February 23, 2012.)

31 “When the Feds come knocking”: OHRP compliance reviews and enforcement 1990-2011
- New cases initiated from 2008-11 never exceeded 8 in any one year.
- New cases peaked at 91 in 2001 (about the time OPRR was eliminated). Since 2001 the annual number of new cases never exceeded 42.
- The annual number of investigational visits never exceeded 6 from 1990-2001; the annual average was 3.3. Most cases are handled via phone calls, emails and letters.
- OHRP reviews most commonly focus on informed consent forms, IRB meeting minutes and other procedural issues.
- From 1990-2011, a total of 12 Assurances were suspended (an average of about one every other year) and 36 restricted (an average of fewer than two a year).
- Little is known about the effectiveness of IRBs in protecting research subjects.

32 “The Iceberg”: No problems—or no reporting? Is the answer “It’s not my job”? “I can’t tell you whether our two successful undercover tests are isolated cases or the tip of the iceberg. What I can tell you is that given the history of human subjects testing, it’s hard to believe that anybody could be comfortable with the integrity of the current system.” --Gregory Kutz, GAO Managing Director, Forensic Audits and Special Investigations, testimony to the House Subcommittee on Oversight and Investigations, 2009. “What in the devil is going on in your agency that allows you to think you can ignore the law and regulations governing the adequacy of IRBs and simply enter whatever is emailed your way and put the US government stamp of approval on an IRB? Nobody picked up on names like ‘Phake Medical Devices’, ‘April Phuls’, ‘Timothy Wittless’ and ‘Alan Ruse’? Or the town of ‘Chetesville, Arizona’?” --Hon. Greg Walden, R-Oregon, ranking member, Subcommittee on Oversight and Investigations. “Many political scientists—especially those in the senior ranks—are either oblivious to the existence of IRBs or actively decide to sidestep them by not submitting their proposals for review.” --Dvora Yanow and Peregrine Schwartz-Shea, PS: Political Science & Politics, PSOnline (July 2008), pp. 483-494. In ED nearly every report of noncompliance or unanticipated risk comes from parents or teachers familiar with the Common Rule due to medical contacts—or from hearing about the “IRB” on TV’s medical dramas.

33 It’s likely that SBR is like other fields—with most problems never reported.
- “Scam artists steal a documented $3 billion a year from [older Americans], a mere fraction of the actual amount considering that only an estimated 4% of retirement-age victims—just one in 25 according to various studies—ever report the crime.” --AARP Bulletin.
- Most rapes are never reported. DOJ reports that 36% of rapes and 34% of attempted rapes were reported to police in 1992-2000.
- Only 1 out of 7 errors, accidents and other events that harm hospitalized Medicare patients are reported. To be paid under Medicare, hospitals are supposed to track medical errors and adverse patient reactions, analyze their causes and improve care as a result. HHS estimated in 2012 that more than 130,000 Medicare patients experienced one or more adverse events in a single month.
- Air traffic controller errors rose dramatically, jumping 50% from 2009-2010, when changes in FAA reporting resulted in more voluntary reports from controllers. The Aviation Safety Action Program encourages controllers to self-report errors in exchange for immunity from punishment.
- It has been argued that deaths reported to OHRP seriously underestimate the number of actual research-related deaths (e.g. Shamoo).
- And so on.

34 Myth: “IRB review adds no significant protection in SBR.” Most SBR is minimal risk, “thus even when working perfectly, oversight of minimal risk research does not prevent significant harm.”
- But some SBR is quite risky—and risk may arise from particular respondent characteristics or context.
- It takes just a few studies gone bad to undercut study participation, confidence in research, funding for research, and use of research.

35 Myth: “Put IRB resources where the risk is.” Currently, the claim goes, there is a misallocation of limited IRB resources: “Expedited reviews cost as much as full board IRB reviews.” (S. Kim, 2009) In fact:
– Most SBR is exempt, so those minimum-risk studies have little or no impact on IRB resources.
– That cost estimate is not generalizable—it comes from a study of biomedical research at academic medical centers. And there was wide variation in their costs, even after controlling for number of studies reviewed, etc. (Sugarman et al 2005)
– Costs and burdens should not be overstated. In those medical centers, IRB review cost the IRB $400 to $600 per study.
– Expedited studies are usually approved in significantly less time than full board reviews—presumably costing both IRBs and researchers less when properly accounted for.

36 “There are no significant risks in SBR”? 1. Few SBR-related deaths, serious injuries, etc. reported to Common Rule agencies. 2. There is little litigation or media coverage to suggest that such risks are common. 3. Surveys of and interviews with subjects in studies of sensitive topics usually report few or no significant harms, and many subjects express relief at being able to talk about what they’ve gone through.

37 PhD student Sudhir Venkatesh had gained the confidence of many in the public housing complex, and had gotten individuals to share with him information on how much they were earning in the drug trade, prostitution, car theft, and various other “economic activities”. The gang leader and an associated housing project leader convinced him to share what he had gleaned from his interviews. He saw this as an opportunity to check the credibility of the information that his study participants had provided. Soon after his disclosure, he found he was being cold-shouldered and avoided by the residents. The book describes the interaction when he pressed one resident, from whom he had gathered earnings information, about what had happened: “'Don't play with me. All that shit I told you. All them niggers I introduced you to. If you told me you were going to tell JT they were making that money, I wouldn't have told you nothing.... 'JT is all over these niggers,' C-Note said. He looked disgusted and spat on the ground.... 'He's taxing every one of them now,' he added. 'And he beat the shit out of Parnell and his brother because he thought they were hiding what they were doing. They weren't, but you can't convince JT of nothing. When he gets his mind to something, that's it. And then he tells Jo-Jo and his guys that they can't come around no more because they were hiding things from him. Jo-Jo's daughter lives up in here. So now he can't see her.' C-Note kept talking, getting angrier and angrier as he listed all the people that JT was cracking down on. 'There's no way he could've found out if you didn't say nothing.'” --Venkatesh, Gang Leader for a Day, p. 203. DRAFT: Discussion purposes only.

38 “I’m just a Fulbright scholar…” In 2008 ABC reported that, in apparent violation of US policy, Peace Corps volunteers and a Fulbright scholar were asked by a US embassy official in Bolivia to basically spy on Venezuelans and Cubans in the country. The Fulbright scholar, John Alexander van Schaick, said he was “told to provide the names, addresses and activities of any Venezuelan or Cuban doctors or field workers” he encountered while working in Bolivia. Peace Corps volunteers reported that the same US embassy official had also asked them to gather such intelligence the previous summer. (Project Camelot, Operation Condor redux, or ?)

39 It’s only music. “Ngawang Choephel, a Tibetan ethnomusicologist and former Fulbright scholar, was sentenced to 18 years in prison on trumped-up spy charges. China accused Ngawang Choephel of being sent by the ‘Dalai clique with financial support of a certain foreign country,’ an obvious reference to the United States.” He was released years later by the Chinese, just prior to a visit to Beijing by the U.S. President.

40 Belfast Project (cf. “It’s only oral history”)

41 “Interrogation research”: JAMA Commentary, “Roles of CIA Physicians in Enhanced Interrogation and Torture of Detainees”, Leonard S. Rubenstein, JD, and BG (ret.) Stephen N. Xenakis, MD, JAMA 2010, 304(5): 569-570.

42 “The Trauma of Truth Telling: Effects of Witnessing in the Rwandan Gacaca Courts on Psychological Health” Karen Brounéus, Department of Peace and Conflict Research, Uppsala University, Sweden, and National Centre for Peace and Conflict Studies, University of Otago, Dunedin, New Zealand (karen.brouneus@otago.ac.nz); Journal of Conflict Resolution, 2010. Abstract: Truth telling has come to play a pivotal role in postconflict reconciliation processes around the world. A common claim is that truth telling is healing and will lead to reconciliation. The present study applies recent psychological research to this issue by examining whether witnessing in the gacaca, the Rwandan village tribunals for truth and reconciliation after the 1994 genocide, was beneficial for psychological health. “The results from the multistage, stratified cluster random survey of 1,200 Rwandans demonstrate that Gacaca witnesses suffer from higher levels of depression and PTSD than do nonwitnesses, also when controlling for important predictors of psychological ill health. Furthermore, longer exposure to truth telling has not lowered the levels of psychological ill health, nor has the prevalence of depression and PTSD decreased over time. This study strongly challenges the claim that truth telling is healing and presents a novel understanding of the complexity of truth-telling processes in postconflict peace building.”

43 “Witnesses to Genocide: Experiences of Witnessing the Rwandan Gacaca courts” By Ulrika Funkeson, Emelie Schroder, Jacques Nzabonimpa and Rolf Holmqvist, Peace and Conflict, 2011. Exploratory study, qualitative interviews with 8 women, 3 main findings: – (1) Witnessing can have negative impact on psychological health, – (2) Revealing truth through gacaca can be perceived as emotionally distressing but also relieving, and – (3) Witnessing can cause increased hostility or reconciliation in the relationship between the witness and the perpetrator.

44 Yanomami “The recent appearance in the Brazilian press of two articles on the Yanomami Indians based on Napoleon Chagnon’s latest paper on Yanomami ‘violence’ [the article in Science]… has prompted us to call your attention to the extremely serious consequences that such publicity can have for the land rights and survival of the Yanomami in Brazil.” … “The Brazilian Anthropological Association (ABA) feels that it is fundamental TO INSIST on the need to bring to the awareness of North American anthropologists the political consequences of the academic images they build about the people they study. The case of the Yanomami in Brazil, who have been suffering a brutal process of land expropriation which is justified in discriminatory images based on dubious scientific conclusions, is in this respect a particularly grave and revealing case…. We urge the AAA to take the necessary steps to call to the attention of the North American anthropological community the ethical and moral repercussions of their writings for critical situations such as this.” --Robert Borofsky (ed.), Yanomami: The Fierce Controversy and What We Can Learn from It (2005), p. 35

45 Is it on the test? Research meets praxis: NCLB etc. High-stakes test-based accountability—pass or fail—for students, teachers and schools.

46 Opportunity-to-learn risks in education research “Bills Prod Schools to Hold Back Third Graders” Wall Street Journal, Feb. 13, 2012. A child’s ability to read at the end of third grade is a strong predictor of subsequent performance in school. Legislators in Colorado, New Mexico and Iowa proposed requiring students to repeat 3rd grade if they do not pass a reading test. When this was tried in Chicago in the late 1990s, a series of studies by the Consortium on Chicago School Research at the University of Chicago found that, in general, retained students did no better in later years than students who had been promoted—but were more likely to drop out of school.

47 “Brain-enhancing Drugs Stir Safety, Ethics Debates” Education Week (Oct. 25, 2012). “While less than 3% of secondary students reported ever using a prescription painkiller, tranquilizer or stimulant for fun, 62% reported they had used a prescription stimulant in order to be more focused at school, and 44% said they had used the drugs to be more focused for a job, sports or extracurricular activities. Nearly 13% said they had friends who regularly used prescription stimulants to study and focus at school or work.” --National Center on Addiction and Substance Abuse, Columbia University, 2009: “Adolescent Substance Abuse: America’s #1 Public Health Problem”

48 “How to build a better learner” --Scientific American (July 2011)

49 The new world of education research With a changing world, risks in education research, including to “opportunity to learn”, have changed:
- Emergence of high-stakes test-based accountability for students, teachers and schools. (“Is it on the test?”)
- Emergence of “Zero Tolerance” policies for school discipline.
- Vulnerable populations: 20% of children live in households with income below the poverty line; 34% of children are in single-parent households; an estimated 5.5 million children are children of illegal immigrants (about 4 million of those children were born in the US and are US citizens), ….
- And …

50 It’s not just disclosure risks Ethical researchers do not use interventions known to be iatrogenic.
- “Scared Straight”: a 2002 meta-analysis of relevant research on nine such programs found that “not only does it fail to deter crime, but it actually leads to more offending behavior.”
- D.A.R.E.: a meta-analysis of 20 rigorous studies found that the D.A.R.E. program had a less-than-small effect on reducing drug use (Cohen’s d = 0.05); the school-based drug intervention program also had only a small effect on improving psychosocial behavior (Cohen’s d = 0.10). Some evidence suggests that D.A.R.E. may increase the risk of substance abuse for some types of students.
- Many educational interventions have no measurable effect, or are worse than normal educational practice—and can pose serious risks to student “opportunity-to-learn” the tested curriculum.
- And …
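For readers unfamiliar with the effect sizes quoted above: Cohen’s d is the difference between two group means divided by the pooled standard deviation, and values near 0.05-0.10 fall well below the conventional “small” threshold of 0.2. A minimal sketch of the computation, using invented sample data rather than the D.A.R.E. study’s:

```python
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a)
                  + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical outcome scores for a treated and a control group.
treated = [10, 12, 11, 13, 12, 11, 10, 12]
control = [10, 11, 11, 12, 12, 10, 10, 11]

print(f"Cohen's d = {cohens_d(treated, control):.2f}")
```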

51 Characteristics and context matter: For example, research on the education of women is usually exempt as “normal educational practice”. “Acid attacks, poison: What Afghan girls risk by going to school” --By Allie Torgan, CNN. “Every day, you hear that somebody’s thrown acid at a girl’s face... or they poison their water.” … There were at least 185 documented attacks on schools and hospitals in Afghanistan last year, according to the United Nations. The majority were attributed to armed groups opposed to girls’ education.

52 Big data challenge: checks-and-balances Beyond the “You have no privacy, get over it” world. (Cf. Scott McNealy, Sun Microsystems.) SBR often includes directly and/or indirectly identifiable data.

53 If you have three variables… Birthdate, sex, and 5-digit ZIP code alone uniquely identify 87% of the US population. --Sweeney et al, “Confidentiality and Privacy of Electronic Medical Records”. Education research often involves small samples, with longitudinal data, of students in classrooms in schools that may be linkable to external data sources to reidentify subjects. And …
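The kind of uniqueness Sweeney measured can be checked on any tabular dataset by counting how many records share each quasi-identifier combination—the basic step behind a k-anonymity audit. A minimal sketch (the records below are invented):

```python
from collections import Counter

def unique_fraction(records, quasi_ids):
    """Fraction of records whose quasi-identifier combination appears
    exactly once, i.e. records that are trivially re-identifiable."""
    combo = lambda r: tuple(r[k] for k in quasi_ids)
    counts = Counter(combo(r) for r in records)
    return sum(1 for r in records if counts[combo(r)] == 1) / len(records)

# Toy data using Sweeney's three quasi-identifiers.
records = [
    {"birthdate": "1990-01-01", "sex": "F", "zip": "10001"},
    {"birthdate": "1990-01-01", "sex": "F", "zip": "10001"},  # shared combo
    {"birthdate": "1985-06-15", "sex": "M", "zip": "60601"},
    {"birthdate": "1972-03-09", "sex": "F", "zip": "94110"},
]

frac = unique_fraction(records, ("birthdate", "sex", "zip"))
print(f"{frac:.0%} unique on (birthdate, sex, zip)")  # prints "50% unique ..."
```

The same audit, run on the small longitudinal samples typical of education research, is what motivates treating classroom- and school-level variables as quasi-identifiers.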

54 The proliferation of linkable data sets and the evolution of data reidentification make good data practices (and research ethics) increasingly important. “Genetics Genealogy Databases Enable Naming of Anonymous DNA Donors”, Science (18 January 2013): “By applying an algorithm to anonymized genomes from a research database and doing some online sleuthing with popular genealogy Web sites, researchers were able to guess the true identities of DNA donors.” There are an increasing number of linkable data sets that can re-ID genetic and other data. For example, the FBI’s rapidly growing CODIS system has DNA profiles for over 10 million people, plus 450,000 unidentified people, readily searched by those with access.

55 Who cares? When risks of disclosure are included in informed consent (e.g., for record linkage), people are less willing to participate (Eleanor Singer et al.). A 2012 national survey by Forrester Research found that one in three consumers were concerned about companies having access to their behavioral data from the internet. More than 40% said they had stopped short of completing a transaction on an internet web site because of something they read in a privacy policy. (N.Y.T.) “A new US survey has just shown that nearly half (45%) of US citizens feel that they have little or no control over the personal information companies gather while they are browsing the Web or using online services such as photo sharing, travel or gaming. Twenty-one percent of citizens have even stopped using these online services because of data protection concerns. I believe that policy makers need to provide a solution. It is important both for consumers and for business….” “Seventy-two percent of Europeans have told us in surveys that they are concerned about how companies use their personal data. They are especially worried about online privacy and this is one of the most frequent reasons why people don’t buy goods and services online….” --Viviane Reding, Vice President of the European Commission and Justice Commissioner of the European Union, “Q&A with Viviane Reding”, New York Times (Feb. 2, 2013).

56 Idiosyncratic outlier, or symptomatic of a bigger problem? When “a study goes bad”, the challenge for the public, politicians, and regulators is to determine whether it is a “local” problem involving a particular study, researcher or institution, or whether it is evidence of a bigger problem that must be corrected with more enforcement, new laws and regulations, and/or avoiding research participation.

57 Critics often misspecify the role of IRBs and the Common Rule. Critics emphasize the cost, time, burden, and required informed consent procedures involved in IRB review as negatively impacting research. They also emphasize that most SBR is minimal risk.  This leads to the hypothesis that minimizing or eliminating IRB review eliminates those problems. However, in addition to such direct effects, IRB review and the broader Common Rule system build and protect the generalized and specific trust that research depends on. This includes isolating “studies gone bad” as idiosyncratic local events, not something most people have to worry about in deciding whether to participate in research.

58 “The model”: Trust, and study recruitment. Personal and general information on risks affects trust in research, and willingness to participate.  Positive experiences build this social capital; negative ones diminish it (Putnam, …). Sponsorship, neighborhood effects, etc. affect trust and participation (Groves, etc.).  Mediating factors: study design and field procedures matter, but they are costly and their effects bounded (Groves, Dillman, E. Singer, Wendler et al., etc.).  Increasingly, information about the value of research and its risks comes from social networks and media, which tend to emphasize the unusual and the negative. Researchers increasingly lose control of the narrative.  Consequently, even a few “studies gone bad” can have important negative impacts on study recruitment (and more broadly on trust in research results, funding, etc.).

59 “The model”, continued: Small changes, big impacts. Personal and general information on risks affects trust in research, and willingness to participate (including responses to recruitment incentives, etc.).  If the risk is seen as outdated, not relevant to a particular study, or an idiosyncratic breakdown, it has little enduring impact.  If the risk is relevant, iconic of broader system problems, it can have serious ongoing impacts on the ability to conduct research.  A growing volume of requests to participate in research and other trends make even marginal impacts on recruitment critical for research.

60 [Heuristic figure: a flow chart. An individual receives an invitation to participate in research and either rejects it or gives (or is waived from) informed consent. Rejection yields underpowered or invalid research, or no research, and no new knowledge at T+1. Unethical research (unfair treatment, lack of scientific integrity, etc.) makes the individual and their networks less likely to consent to future studies, resulting in less or no new knowledge. Ethical research (fair treatment, minimal or no unanticipated risks) yields a valid completed study and makes the individual and their networks more likely to participate in future studies, enabling sustainable production of new knowledge.]

61 IRB review is a moderating variable in study recruitment and retention. Relatively few people participate in either SBR or biomedical research, and recruitment rates continue to fall, with outright rejections by contacted participants being the largest component of the decline (e.g., Brick & Williams). Reflections on reputation and its consequences: from “excused research” to “no excuses research”? In SBR and other research recruitment, trust (generalized and specific) matters, sponsorship matters, and dramatic recent events affect research participation (e.g., R. Groves).

62 You have a survey? Why bother? Secular declines in participation/response rates for survey research. [Chart: participation of schools selected in the initial sample that agreed to participate in the NCES longitudinal study.]

63 Biomedical research recruitment. “Participant recruitment is considered the most difficult aspect of the research process. Recent estimates indicate that 85% of trials do not conclude on schedule due to low participant accrual, 60% to 80% of clinical trials in the United States do not meet their temporal endpoint because of challenges in recruitment, and 30% of trial sites fail to recruit even a single participant. … Despite its importance to successful research efforts, there is a dearth of literature reporting details of the recruitment process.” (Blanton et al.)

64 “Patient recruitment and retention has become a major bottleneck within drug development.” Enrollment rates in medical clinical trials dropped from 75% in 2000 to 59% in 2006; retention rates fell from 69% to 48% over the same period. 94% of US respondents believe in the importance of clinical research to advance medical knowledge, but only 21% feel they have a basic understanding of why and how clinical research is conducted. 28% of the public claimed to distrust information received from clinical research professionals in 1997; by 2003 that percentage had increased to 75%. Many believe that media portrayal of the industry had an impact. CenterWatch claims that more than 80% of clinical trials in 2005 were delayed by at least one month, in large part because of a lack of study participation, yet the US clinical research industry spends more than US$400 million each year on recruitment promotions. Recruitment challenges will only grow as regulators increasingly demand larger and longer trials. IRB review lends credibility and, ideally, enhances recruitment in sustainable ways.

65 Participation rates in medical clinical trials appear to be declining: 86% of clinical trials failed to meet enrollment goals in 2001; in 1999 the figure had been 80%. Almost four out of five African Americans and more than half of whites believe that people could be used as “guinea pigs” for research without their consent. Many respondents (63% of African Americans and 38% of whites) believed their doctors often prescribe medication as a way of experimenting on people without their knowledge or consent, and almost half of African Americans and more than a third of whites felt their doctors sometimes exposed them to unnecessary risks when deciding their treatments. “We expected to see differences, but we didn't expect to see such high levels of distrust across the board,” said Corbie-Smith, an assistant professor of medicine at the University of North Carolina, Chapel Hill. --Source: “Clinical trials are suffering: Suspicious of medical research, volunteers spurn”, LA Times.

66 Research fatigue may be one factor: Presser and McCulloch (2011) report that the number of respondents to government surveys alone increased from about 2.6 million in 1984 to 10.2 million in 2004. At the respondent level, the biennial Industry Image Studies report that “the proportion of respondents who reported participating in at least one survey of any mode in the preceding year grew from 19% in 1978 to 82% in 2003. At the same time, the refusal rate to the survey, which had fluctuated between 42% and 49% from 1978 to 1988, increased from 53% in 1990 to 79% in 2003.” Participation in biomedical research depends on a relatively small number of volunteers. Analysis of studies registered with ClinicalTrials.gov finds that recruitment is a challenge for many studies (Califf et al.). Analysis of the National Cancer Institute Clinical Trial Cooperative Group from 2000 through 2002 found a 1.8% enrollment rate among white patients, 1.3% for Hispanics and 1.3% for Blacks. While the number of study participants had grown, the proportion of study participants who were minorities had fallen since the 1996-98 period, even as their proportion of the total population had grown (Murthy, Krumholz and Gross).

67 Coast-on-by at Colorado’s Coast IRB? Almost no data on whether IRB review matters.

68 Recruitment, IRBs and trust. Design and field procedures matter. Much of the difference in participation of minorities is due to lack of access (Wendler et al.). General distrust doesn’t necessarily determine participation in specific studies (Dillman, E. Singer, etc.). Procedures to boost recruitment face serious limits. For example, while incentive payments boost recruitment, the effects are relatively small (E. Singer & Ye; Pew “high effort” surveys, etc.). In biomedical research, the Cochrane Collaboration reports a mean odds ratio effect size of 1.20 (median 1.09) (Treweek et al.).
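For scale, the Cochrane odds ratio of 1.20 compares the odds of participating with and without an incentive. A minimal sketch; the arm counts below are hypothetical, chosen only to show how modest an effect of this size is:

```python
def odds_ratio(resp_a, nonresp_a, resp_b, nonresp_b):
    """Odds of responding in arm A relative to arm B."""
    return (resp_a / nonresp_a) / (resp_b / nonresp_b)

# Hypothetical trial arms: 60 of 200 respond with an incentive,
# 55 of 200 without one.
print(round(odds_ratio(60, 140, 55, 145), 2))  # 1.13
```

An odds ratio of 1.20 applied to a 30% baseline response rate lifts the rate only to about 34%, consistent with the “relatively small” characterization above.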

69 Are there sustained effects of participation in SBR?

70 ON RISKY SURVEYS. Ever since the Milgram “obedience study”, follow-up surveys and interviews with study participants in studies of spousal abuse, death of a loved one, experience with cancer, etc. report that most respondents say they experienced no significant adverse effects. However, …

71 Stanley Milgram pioneered subject follow-up in SBR. In a survey of participants in the Yale “Obedience” study: Only 1% said they regretted participating, and 84% said they were glad or very glad to have participated. Follow-up with the 40 subjects judged most likely to suffer consequences from participation, conducted by a Yale psychiatrist selected by Milgram, found no evidence of harm. However, 10% of subjects said they had been extremely upset during the experiment, and 50% said they had been somewhat nervous during the experiment. And …

72 Milgram’s follow-up survey of participants in the Yale Obedience Study. More than one out of three subjects reported, when recontacted after the experiment, that the experience bothered them some; 7% of those subjects said it bothered them “quite a bit”. One subject complained to the President of Yale University. One subject, a food wholesaler, was so upset by the experiment that he consulted a lawyer friend about it. Source: Blass, pp. 44, 124-126.

Experience                               Defiant subjects   Obedient subjects   Total
I have been bothered by it quite a bit   8%                 6%                  7%
It has bothered me a little              30%                29%                 …
It has not bothered me at all            63%                65%                 64%

73 Risks in social and behavioral research are too significant to ignore (or to rely on post-hoc patch-up): They can be very serious. They impact public and site trust and participation. They create systemic risk to the human subjects protection system itself.

74 Trust and participation (cf. “Leverage-Salience Theory of Survey Participation” by R. Groves, E. Singer, A. Corning). Contagion: a few studies gone bad can fuel narratives that hinder or block future research. (For example, a 1% increase in the mortgage delinquency rate increases the probability of a strategic default by up to 16.5%. Cf. vaccination fears; posing as a public health campaign to kill Bin Laden, which put the effort to eradicate polio at risk.) Subjects are more willing to participate in studies conducted or sponsored by entities they know and trust (“sponsor effects”, “neighborhood effects”, etc.). Recruitment strategies emphasize building trust. Some people willing to participate in a survey won’t if they have to sign an informed consent form. Learning more about the risks of record linkage decreases consent for record linkage (Eleanor Singer).

75 Trust is fragile: rare events can have large impacts on trust and regulation, affecting the ability to conduct research.  Human subjects protections: USPHS “Tuskegee” and “Guatemala”, MK Ultra, Gelsinger, etc.  Nuclear regulation: TMI, Chernobyl, Fukushima.  Financial markets: MF Global, LTCM, Lehman, etc.  Public health campaigns: Dr. Shakeel Afridi, Bin Laden and the global elimination of polio.

76 Flash mobs, the new world of human subjects protection? TSA vs. mom with a smartphone camera: a disabled three-year-old girl crying in her wheelchair, and her upset parents, as airport security agents take away the child’s beloved stuffed doll. The mother records it all on her smartphone. When posted on the internet, the five-and-a-half-minute video goes viral, triggering wide media coverage. The Transportation Security Administration apologizes to the family for the “unfortunate incident” and changes staff training and other procedures. (NYT, March 5, 2013) Confronted by a woman with a Facebook account, a big bank backs down: “In the internet age, the impact of consumer distrust is amplified: anyone can organize and participate in a protest. Participants do not need to gather physically; 60% of US adults use social media such as Facebook—and 66% of those users use it to encourage other people to take action….When a major bank attempted to charge a $5 monthly fee for its debit card, a California woman created a Facebook event, dubbed ‘Bank Transfer Day’, and invited her followers to transfer their accounts from large banks to credit unions that day. In the five weeks leading up to that day there was massive media coverage—and billions of dollars in deposits were reportedly transferred. Faced with the Facebook protest, the bank reversed its policy.” --Sarah Bloom Raskin, Board of Governors, Federal Reserve System, Feb. 28, 2013.

77 The bell curve and the bell jar. [Figure: a bell curve; regulators and IRBs typically focus on the center of the distribution.]

78 Long tails in SBR research, meet blogs, media and flash mobs. [Figure: a long-tailed risk distribution; the public and media often focus on the tail, where public perception is driven by flash mobs, media, etc.]

79 Low probability, high magnitude? The Common Rule protects all study subjects, not just the average person in the average study. How should the Common Rule deal with study risks that are infrequently encountered but high impact?

80 “Mean risks” (and the long tail): NOT JUST THE AVERAGE PERSON IN THE AVERAGE STUDY.  Assessing the probability of rare and extreme events is an important issue in the management of risks to human subjects in research.  Consideration of “mean” risks is vital (cf. project SERR), but inadequate. Low probability/high magnitude risks must also be considered, in ways that minimize regulatory burden but realistically assess risks.  “Extreme value analysis” is routinely used in modern science, including most risk management fields, such as engineering, insurance, and financial management.
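The contrast between “mean” risk and the long tail can be made concrete with a toy simulation. A minimal sketch, assuming (purely for illustration) that harm severities follow a heavy-tailed Pareto distribution:

```python
import random

random.seed(0)

# Draw 100,000 hypothetical "harm severities" from a Pareto distribution
# with tail index 1.5 (heavy-tailed: finite mean, infinite variance).
alpha = 1.5
harms = sorted(random.paretovariate(alpha) for _ in range(100_000))

mean_harm = sum(harms) / len(harms)    # theoretical mean: alpha/(alpha-1) = 3
p999 = harms[int(0.999 * len(harms))]  # theoretical 99.9th percentile: ~100

print(f"mean severity: {mean_harm:.1f}")
print(f"99.9th percentile: {p999:.1f}")
```

Here the 99.9th-percentile harm is dozens of times the mean, which is the point of extreme value analysis: a rule calibrated only to the average subject in the average study never sees the tail.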

81 FROM “BIG DATA” TO “NO DATA”? Sometimes it takes just one study gone bad to change laws and regs. Even a few studies gone bad can trigger serious barriers to future research:
1. American jury study (Wichita jury component): Congress banned recording jury proceedings (1953).
2. USPHS “Tuskegee” and the Common Rule (long shadows of Guatemala, MKUltra, etc.).
3. Johns Hopkins education surveys: cited in creation of the Family Educational Rights and Privacy Act (FERPA) regulating student records and the PPRA regulating surveys of students.
4. CHEERS (EPA observational study of pesticide exposure in children): new EPA protection regulation (2006).
5. “Regulatory reign of terror” after the Jesse Gelsinger and Ellen Roche deaths, etc. (VCU survey); abolition of the Office for Protection from Research Risks (OPRR) and creation of OHRP (2000).
6. GAO sting of Coast IRB, etc., led to FDA requiring its IRBs to register (2009).
7. Vaccinations: public trust (autism, Bin Laden, etc.).
8. Studying civil strife and genocide after the Belfast Project?
9. Risks in low-income housing after Kennedy Krieger?
10. And …

82 A new privacy statute has been enacted, on average, every 3½ years: the Presidential Commission for the Study of Bioethical Issues identified 11 separate privacy statutes enacted by Congress from 1970 to 2009.

83 In sum: The Common Rule created a system of decentralized regulation, so that politicians and bureaucrats inside the Beltway do not determine what scientific research is conducted. Arguably, the Common Rule is a notable success story, including in SBR. Since it was adopted nearly a quarter of a century ago: – The quantity and quality of research have continued to grow. – There have been few reports of serious problems/unethical research in the US. – Unlike many other statutes and regulations, it has been robust, largely stable since 1991. – And the policy model has been adopted around much of the planet.

84 Some concerns—and the path ahead

85 “The Galileo Problem”: review bias. “Agenda-setting, self-censorship, …: IRBs’ secret function?” Cf. “normal science” vs. “paradigm-challenging” science (Kuhn). Little systematic data, but enough information to require a closer look, e.g.: Jack Katz (The Seductions of Crime, etc.); Ceci et al.: IRBs and sensitive topics; Irvine: sociologists studying sexual behaviors; Hyder et al.: foreign researchers’ view of politicization in US IRB review; M. Levine: “IRB review as a cooling out mechanism”… And …

86 Galileo problem (continued): “Human subjects review, personal values and the regulation of social science research”. In a study by S. Ceci and others, 157 university IRBs reviewed 9 hypothetical proposals that were identical in treatment of human subjects but differed in sociopolitical sensitivity and in ethical concerns (e.g., the presence or absence of deception and debriefing). All other aspects of the proposals were identical. Sensitive proposals were twice as likely to be rejected by IRBs. Reasons for nonapproval varied. Nonsensitive proposals that did not involve ethical problems were approved 95% of the time vs. 40-50% for comparable sensitive ones.

87 Galileo problem (continued): “IRB review as a ‘cooling out’ device”. Council et al. (1999) report that in a survey of researchers conducting more-than-minimal-risk studies of sex, hypnosis, psychotherapy and related areas, the researchers felt that the reviewing IRB did not always have a firm grasp on the types of research they proposed and that their studies received closer scrutiny than those of colleagues in other fields; still, the study “findings can be summarized to indicate that investigators who conduct research involving more than minimal risk with department subject pools are fairly well-satisfied with their IRBs”. For that sample of risky-research PIs, 66% were either very or somewhat satisfied with their IRB experiences; 19% were very or somewhat dissatisfied. In “Can’t Ask, Can’t Tell: How Institutional Review Boards Keep Sex in the Closet”, Irvine reports the first survey to examine the career experiences of sociologists who study sexuality. The 2011 online survey of members of the American Sociological Association’s section on sexualities included 155 respondents, for a response rate approaching 40%. About 45% of the sociologists researching sexualities indicated that they had experienced difficulties in IRB review, and 41% reported that other sexuality researchers at their university had experienced IRB difficulties. The problems include delays in approval, required changes in consent forms, and effects on the nature of the research itself. 26% indicated that their students had encountered problems in IRB approval of sexuality studies; examples include a study of homosexual military personnel under the “Don’t Ask, Don’t Tell” regulations (where disclosure could end their military careers) and a study using letters written by queer prisoners. Irvine discusses how IRB review can bias research agendas and study design, noting, “Under the guise of protection, IRBs exclude sexual minorities from research and deny them a voice which, as my respondents note, actually reinforces…”

88 Empirically-informed regulation 88


90 A new era in regulatory reform: experimentation and evaluation. “Our regulatory system…must measure and seek to improve the actual results of regulatory requirements…. [E]ach agency shall identify and consider regulatory approaches that reduce burdens and maintain flexibility and freedom of choice for the public.” --Executive Order 13563 (2011), cited by Cass R. Sunstein in “Empirically Informed Regulation”. “The country needs and, unless I mistake its temper, the country demands bold, persistent experimentation. It is common sense to take a method and try it: If it fails, admit it frankly and try another. But above all, try something.” --Franklin D. Roosevelt, 1932. “The Experimenting Society”, Don Campbell (1969); Alice Rivlin, Systematic Thinking for Social Action (1971). “The current system for making these [regulatory] choices is broken. It is largely based on faith rather than on evidence…. The result is that our regulatory system all too frequently takes shots in the dark and we all too infrequently find out if we have hit anything—or even worse, we only find out when things have gone horribly wrong.” --Michael Greenstone (2009), “Toward a Culture of Persistent Regulatory Experimentation and Evaluation”, Tobin Project.

91 Evidence-based practice. “But, in making new investments, the emphasis has to be on ‘smarter’. Many programs were founded on good intentions and supported by compelling anecdotes, but don’t deliver results. … Rigorous ways to evaluate whether programs are working exist. But too often such evaluations don’t happen. They are typically an afterthought when programs are designed—and once programs have been in place for a while, evaluating them rigorously becomes difficult from a political economy perspective. This has to change, and I am trying to put much more emphasis on evidence-based policy decisions here at OMB. Wherever possible, we should design new initiatives to build rigorous data about what works and then act on evidence that emerges—expanding the approaches that work best, fine-tuning the ones that get mixed results, and shutting down those that are failing.” --Peter Orszag, OMB Director.


93 Empirically-Informed Regulation: “Toward a culture of persistent regulatory experimentation and evaluation”. “The first step toward a culture of regulatory experimentation and evaluation is to write the statutes governing regulatory programs so that the regulations are implemented in ways that they lend themselves to experimental or quasi-experimental evaluation…. Clearly it is not sufficient to merely devise regulations in such a way that they are intrinsically testable; we have to fund the evaluations of new regulations. An easy way to achieve this would be for the president to sign an executive order mandating that all new regulations must include provisions for collecting data that allow for evaluations of their effectiveness. … The funds…should be used for evaluations by independent research groups.” --Greenstone (pp. 118-119). “Our regulatory system…must measure and seek to improve the actual results of regulatory requirements…. [E]ach agency shall identify and consider regulatory approaches that reduce burdens and maintain flexibility and freedom of choice for the public.” --Executive Order 13563 (2011), cited by Cass R. Sunstein in “Empirically Informed Regulation”.

94 Smarter regulation through retrospective review: Agencies’ and OMB’s predictions of the impact of regulatory changes are at best “an inexact science”, getting it right in perhaps 1 out of 4 regulations. OMB/OIRA examined research on the costs and benefits of Federal regulations in 2005 and 2011. The 2005 study presented a meta-analysis based on examination of 47 case studies; it found agencies had a tendency to overestimate both benefits and costs. Harrington (2006) compared predicted and actual benefits and costs of 61 regulations, and found that both benefits and costs were overestimated with roughly equal frequency (24 had higher benefits than predicted; 21 had lower; and in 16 the ratio was roughly equal to the predicted). Source: EOP/Council of Economic Advisers.

95 Next?

96 “Is the system broken?” Evidence-free policy? “There is a paucity of data documenting the breadth and depth of clinical [or SBR] research in the United States. There are few data on the number and types of research studies being conducted, how many subjects are being enrolled, how many serious and unexpected adverse events occur, and how many participants die from research-related causes. There are also no systematic data that quantify the problems with the current system.” This general point has been made for over two decades by the National Academy of Sciences, Presidential Commissions, the interagency Social and Behavioral Research Working Group, and others. It is time to move on.

97 Performance measures can be built into regulations and monitoring. For example: “Using Quality Indicators to Assess Human Research Protection Programs at the Department of Veterans Affairs” by Min-Fu Tsan, Yen Nguyen and Robert Brooks, IRB: Ethics and Human Research (Jan.-Feb. 2013), pp. 10-13. Of 2,102 protocols, one study was conducted without IRB approval and research under two protocols was conducted prior to IRB approval. 2.8% of studies were suspended or terminated for cause, 0.8% for human subjects safety concerns. “[N]o similar data for comparison from other institutions are available.”

98 Achieving Effective Human Subjects Protections and Rigorous Social and Behavioral Research: Recommendations to the Human Subjects Research Subcommittee, Committee on Science, National Science and Technology Council, by the Social and Behavioral Research Working Group, February 8, 2005.

99 Due diligence in regulatory change.
1. I.N.F.E.R.: Research to date is quite limited. Studies are often small and may (or may not) involve substantial selection and response biases.  Get better data to inform regulatory change and implementation.  Conduct retrospective monitoring of regulatory impacts.
2. What’s the big deal? Much SBR is already “not research” or exempt; most of the rest is eligible for expedited review. Would lack of a common federal baseline make problems of quality, consistency, etc. worse?  Consider direct and indirect impacts of regulatory change.  Use data to improve IRB operations: turnaround, consistency, etc.
3. Many entities would still review, even if there were no federal requirement. Elimination of Common Rule review would end the common baseline, likely increasing barriers to research. Ethics review predates the Common Rule at many institutions, and would likely survive its abolition or major changes. Many institutions that have “unchecked the box” on the FWA still require IRB review for non-federally funded studies.

100 So what? (continued) Some areas of concern:
4. Careful what you wish for: elimination of ethics review would likely lead to a proliferation of other laws and regs, likely issue-specific. This makes things worse.
5. “Politically correct” science: concern about use of review to control; normal science vs. studies of contentious issues and paradigm-changing science.  Need research and monitoring by professional organizations (as well as OHRP).
6. Risk to trust (and the ability to conduct research): even marginal impacts on willingness to fund, participate, or provide access to information are damaging. Protecting not just the average person in the average study, as extremes matter.  Vital to assess, manage and minimize risks to trust in any changes.

101 It’s not just about protecting the average participant in the average study.  What thresholds in social and behavioral research warrant a closer look in order to identify low probability, high impact risks? For example:  Research conducted in a war zone.  Research with a particularly vulnerable population.  And …

102 IN SUM: CAREFUL WHAT YOU WISH FOR. No one pretends that IRBs are perfect or all-wise. Indeed, it has been said that IRBs are the worst form of governing research except for all those other forms that have been tried from time to time. --apologies to Goethe and Churchill.

103 Discussion. Questions? Comments? Suggestions?

104 Thanks! We look forward to working with you for effective protection of human subjects, rigorous research, and better policy and practice. Contact: Jeffery Rodamar, Protection of Human Subjects Coordinator, US Dept. of Education. Tel: 202-245-8090; E-mail: jeffery.rodamar@ed.gov

105 Additional References. This presentation is based in large part on three papers in preparation by the author: “IRBs: Tales and Long Tails” (on IRB operations); “What Do Researchers Think About IRBs? Surveys of Researchers’ Attitudes Toward IRB Review”; “Risks Real but Rare: Low Probability, High Magnitude Risks in Ethics Review of Proposed Research”.

