Presentation on theme: "The Innovation Diffusion and Adoption Research Project (IDARP)"— Presentation transcript:
1 The Innovation Diffusion and Adoption Research Project (IDARP). Funded by the ODMH & the MacArthur Foundation. Phyllis C. Panzano, Ph.D., PI; Dee Roth, M.A., Co-PI; Bev Seffrin, Ph.D., Senior Consultant; Dushka Crane-Ross, Ph.D., Project Manager. Decision Support Services, Inc.; Ohio Dept of Mental Health, OPER. ODMH RESEARCH RESULTS BRIEFING 2003
2 Ohio's Quality Agenda: Best Practices, QI, Outcomes. This is a little "you are here" marker, like one of those things you see in the directory when you walk into the mall. This triangle represents the core of the Department's Quality Agenda, which Director Hogan announced in January 2000. Part of our Quality Agenda has been the idea of reducing the detailed process regulations we've had in our certification standards [like how many signatures need to be on what pages of a treatment plan] and instead increasing these 3 things: The use of evidence-based practices, Doing rigorous Quality Improvement instead of old-style quality assurance, and Systematically measuring consumer outcomes. We have different strategies for each side of the triangle, and the major one for the left side has been our Coordinating Centers of Excellence, or CCOEs, that have been promoting evidence-based practices around the State. We initially had some hard choices about what these CCOEs would do, that is, what practices they would promote. There were three factors that drove the choices we made: First, we very much want to see services delivered that have a high degree of scientific rigor behind them, so level of evidence was important. At the same time, there are issues and populations of very high salience for a public mental health system: Criminal justice diversion, The SA/MI population, or Emotionally disturbed kids in schools (especially if the Governor happens to have a really high priority on kids and school success).
3 Evidence Base / Salience. Those two dimensions, scientific rigor and program salience, can become the x and y axes on a graph, like this. There was a third sort of intervening variable that you can't see here, and that was what kind of resources we had in the state that had both the expertise and the interest to take on the challenge of being a CCOE. For some of our interests there were matches, and for some others there were not.
4 Coordinating Centers of Excellence (CCOEs). [Slide plots the CCOEs on the Evidence Base / Salience graph: SAMI-IDDT, MST, Family Psychoeducation, Cluster-Based Planning, OMAP, MH/Criminal Justice, MH/Schools, Advance Directives.] What we ended up with, from those three variables, at the beginning of this endeavor, was 8 CCOEs: SA/MI integrated dual diagnosis treatment, Family psychoeducation, Multi-systemic therapy, Cluster-based planning, OMAP (the Ohio Medication Algorithm Project), MH/Criminal Justice jail diversion alternatives, Mental health in schools (especially alternative programs for disturbed kids), and Consumer advance directives. This slide reflects where we were with regard to CCOEs when this research started. There have been a couple of changes since, but at that time we had 8, and they were doing the things you see here. For folks in the audience who might not know, I want to tell you a little about what a CCOE is and what it does, because the CCOEs are an important independent variable in our research.
5 Structure of CCOEs: University or local partnership; One best practice per CCOE; Statewide service area. Each CCOE is based in a university or organization, or a partnership between the two. Each CCOE concentrates on one EBP, and they are a resource to the whole state system about their particular practice.
6 Role of CCOEs: Promotion of best practices; Education & training; Capacity development; Fidelity measurement; Cross-system sharing. They promote the EBP and do education and training. They help local mental health systems develop capacity to do the practice. They do fidelity measurement with practices that have established fidelity scales. And they promote cross-system sharing about the practice so people can help each other. It's sort of a cooperative extension service model, a resource out there to help people do something. So the Department started out with these CCOEs, and we hoped this strategy would increase the use of best practices in the Ohio mental health system, but we also put in place an endeavor to see how successful it was and to find out more about the process. We put together a complex piece of research to try to track, over time, 1) the extent to which EBPs are successfully adopted, and 2) what variables predict that success.
7 Research Question: What factors and processes influence the adoption, assimilation, and impact of evidence-based practices by mental health provider organizations? Our overarching research question is: "What factors and processes influence the 1) adoption, 2) assimilation and 3) impact of EBPs by mental health provider organizations?" In other words, what explains the organization's decision to adopt or not adopt a particular practice, and what explains whether the adoption really lasts over the long term and is successful?
8 Independent Variables: Characteristics of the best practice; Adoption decision & implementation process; Adopting organization; Adopting organization-CCOE relationship. We're primarily looking at 4 things: The characteristics of the EBP itself; The characteristics of the adoption decision and the implementation process; The characteristics of the adopting organization; And the characteristics of the working relationships between the adopting organization and the CCOE. This is a different kind of research for us, and probably for a lot of people in this room. We're used to the client as the focus of the research, and we know all those measures. But here we've got the organization as the focus of the research, because it's really the organization that makes the critical decisions about whether, and to what extent, it and its staff will adopt a particular EBP. Since these are organizational-level issues, we went looking for researchers with a deep familiarity with the very rich literature on organizational behavior and change, particularly around the adoption of innovation. So in addition to my (ODMH/OPER) staff, our research team includes Phyllis Panzano and her staff. Phyllis is an industrial/organizational psychologist who has her own research firm. It also includes some folks from the Ohio State University College of Business. The research is a partnership of all three organizations.
9 Research Team. Decision Support Services, Inc.: Phyllis Panzano, Ph.D.; Beverly Seffrin, Ph.D.; Sheri Chaney, M.A.; Vandana Vadyanathan, M.A.; Sheau-yuen Yeo, M.A. Ohio Dept of Mental Health: Dee Roth, M.A.; Dushka Crane-Ross, Ph.D.; Rick Massatti, M.A.; Carol Carstens, Ph.D. Ohio State University: Fisher College of Business and Department of Psychology. I'm going to turn this over to Phyllis [identify her] to talk to you about the literatures and our research model. Then Dushka Crane-Ross, our Project Manager [identify her], will talk about our methodology and our sample. Phyllis, Dushka and Bev Seffrin, one of Phyllis' staff [identify her], will report our findings. Then I'll come back at the end to sum up about what we think we have learned, and how we can use that knowledge. And we've left some time for questions and discussion at the end.
10 Theoretical Background. Numerous literatures are relevant. Resulting assumptions: EBPs are innovations; Scientific evidence is necessary but not sufficient; Upper Echelon Theory is relevant; Implementation effectiveness affects innovation effectiveness; Factors at many "levels" impact outcomes; 3 phases: initiation, decision, implementation. NUMEROUS LITERATURES ARE RELEVANT. In addition to the literatures that are specifically germane to our evidence-based practices (EBPs) of interest, IDARP lies at the crossroads of several streams of research. To mention only a few, these include: The literature on the diffusion, adoption and implementation of innovations; The research related to organizational change; The decision-making literature, with a particular focus on research related to strategic decision making and decisions made under conditions of risk; and finally, The healthcare planning and implementation literature. These literatures have shaped our research design, methods and models to be tested. They have also led to the major assumptions of our research, some of which include: Evidence-based practices are innovations, because EBPs are likely to be perceived as "new" by those organizations considering their adoption. Scientific evidence is necessary but not sufficient for organizations to decide to adopt EBPs. Upper echelon theory is relevant, as the perceptions and attitudes of top managers are expected to explain organizational decisions and actions. Implementation effectiveness (the extent to which EBPs are implemented according to prescriptions) affects innovation effectiveness (the extent to which desired outcomes are achieved). Factors at multiple "levels," spanning from features of the EBP itself to environmental conditions, are expected to impact both implementation effectiveness and innovation effectiveness. There are 3 key phases to consider in order to understand outcomes of the innovation adoption process: The initiation phase, which includes activities and processes leading up to the decision to adopt a new practice; The adoption decision phase, which focuses on the adoption decision itself; and The implementation phase, which follows the decision and includes planning and early start-up activities as well as the actual implementation of the EBP.
11 IDARP: somewhere between a 100-piece jigsaw puzzle and the human genome. THE DIVERSE LITERATURES AND THE MANY ASSUMPTIONS THAT UNDERLIE IDARP HINT AT THE COMPLEXITY OF THE PROJECT. IN FACT, IN TERMS OF COMPLEXITY, WE SEE IDARP AS FALLING SOMEWHERE BETWEEN SOLVING A 100-PIECE JIGSAW PUZZLE AND DECODING THE HUMAN GENOME!
12 IDARP Models. SO, LET'S TAKE A MINUTE TO REVIEW THE FOUR MODELS THAT GUIDE THE PROJECT. THESE MODELS ARE IMPORTANT FOR A NUMBER OF REASONS, BUT MOST IMPORTANTLY BECAUSE THEY PROVIDE ROADMAPS TO THE IMPORTANT QUESTIONS TO ASK IN THIS RESEARCH AND GIVE US DIRECTION ABOUT THE APPROACHES TO USE WHEN ASKING THOSE QUESTIONS.
13 Model 1: Adoption Decision - Decision Making Under Risk. The first IDARP model focuses on the decision to adopt (or not to adopt) an innovation such as an EBP. The innovation adoption decision is the most widely studied phase of the innovation adoption process. In fact, the vast body of research related to the adoption decision identifies a host, or more accurately a laundry list, of factors that have been found to be linked to the decision to adopt innovations. While this research has had a major impact on our project, it has often been criticized for being atheoretical, that is, for lacking a basis in theory. We are proposing a theory base for organizing these findings. We are conceptualizing the adoption decision as an organizationally important decision involving risk. That is, the decision is a strategic decision and it is a risky decision. As a result, the strategic decision-making literature and the risky decision-making literature provide us with a theory-based framework for thinking about the decision to adopt innovations such as EBPs, and for determining which of the multitude of potential factors to focus on in trying to understand this decision.
14 Phase 1: Decision Under Risk. [Diagram: Antecedents feed three factors, Perceived Risk of Adopting (-), Capacity to Manage or Absorb Risk (+), and Risk-taking Propensity (+), which together predict the likelihood of implementing: Implement / Wait & See / Never.] SO WE HAVE CONCEPTUALIZED THE DECISION TO ADOPT AN INNOVATIVE PRACTICE SUCH AS AN EBP AS AN ORGANIZATIONALLY IMPORTANT RISKY DECISION. Following this logic, organizations that decide to implement are expected to see the decision as involving less risk than organizations choosing not to adopt. Three broad factors are expected to explain an organization's likelihood to decide to adopt innovations. The first is the extent of perceived risk: the extent to which the perceived risk of adopting an innovative practice is seen as high is expected to be negatively related to the likelihood of adoption. The second factor is the organization's capacity to manage risk: the extent to which resources are seen as available for handling or managing risks that might arise during the process of implementation is expected to be positively related to the decision to adopt an innovative practice. The third factor is the organization's risk-taking propensity: the extent to which the organization has had a tendency to take risks in the past is expected to be positively related to the decision to adopt an innovative practice. We are directly measuring these three factors in our research. However, as you will later see, we also are gathering information related to a host of antecedents that are expected to be linked to these factors. [Note to Bev: I do not talk about specific antecedents here; instead, I later make the point, in the section on findings related to antecedents to the three big factors, that the value of examining antecedents is that they can suggest strategies or action levers for impacting the three big factors noted above: risk perception, assessments about capacity, and risk propensity.]
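As a rough illustration, the three-factor logic described above could be sketched as a toy scoring function (a hypothetical sketch for intuition only; the function, rating scales, and equal unit weights are invented and are not IDARP's actual measurement model):

```python
# Toy sketch of Model 1: the likelihood of adopting an innovative practice
# rises with capacity to manage risk and with risk-taking propensity, and
# falls with perceived risk. Equal unit weights are an invented simplification.

def adoption_likelihood(perceived_risk: float,
                        risk_capacity: float,
                        risk_propensity: float) -> float:
    """Each input is a rating on a 1-7 scale; higher return values
    mean the organization is more likely to decide to implement."""
    return -perceived_risk + risk_capacity + risk_propensity

# An organization that sees high risk and has little capacity or propensity
# to absorb it scores lower than one with the reverse profile.
cautious = adoption_likelihood(perceived_risk=6, risk_capacity=2, risk_propensity=2)
confident = adoption_likelihood(perceived_risk=2, risk_capacity=6, risk_propensity=5)
assert cautious < confident
```

The signs on the three terms are the substance here: they mirror the negative and positive relationships the model predicts.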
15 Model 2: Multi-level Influences on Implementation Success. THE SECOND IDARP MODEL RECOGNIZES AN IMPORTANT IDEA RELATED TO THE INNOVATION ADOPTION PROCESS. The important idea is this: MANY DIFFERENT TYPES OF FACTORS, AT MANY DIFFERENT LEVELS OF ANALYSIS, HAVE AN IMPACT ON THE SUCCESS OF EFFORTS TO IMPLEMENT INNOVATIONS.
16 Interested in Two Classes of Outcomes. Measures of implementation effectiveness: accurate, committed and consistent use of the practice by targeted employees (assimilation, fidelity, etc.). Measures of innovation effectiveness: benefits that accrue to an organization and its stakeholders as a result of implementing an innovative practice (positive consequences for clients, staff, etc.). When we refer to implementation success, we have two types or classes of outcomes in mind. First, we are thinking about measures of implementation effectiveness. Implementation effectiveness is defined as the extent to which an innovation (which in our research means an EBP) is implemented accurately and consistently by committed, targeted employees. Fidelity is one commonly noted indicator of implementation effectiveness. Second, when we talk about implementation success we also are thinking about measures of innovation effectiveness. Indicators of innovation effectiveness are measures that reflect benefits that accrue to an organization and its stakeholders as a result of implementing innovative practices; positive consequences for consumers is one broad category of measures of innovation effectiveness.
17 Expected Link Between Two Classes of Outcomes. [Diagram: Implementation Effectiveness leads to Innovation Effectiveness.] ALTHOUGH THE LINK BETWEEN THESE TWO CLASSES OF SUCCESS OUTCOMES IS TYPICALLY ASSUMED RATHER THAN DIRECTLY MEASURED, THE LITERATURE SUGGESTS THAT IMPLEMENTATION EFFECTIVENESS HAS AN IMPACT ON INNOVATION EFFECTIVENESS.
18 For example: [Diagram: Fidelity (implementation effectiveness) leads to Positive Outcomes (innovation effectiveness).] FOR EXAMPLE, THE EXTENT TO WHICH IMPLEMENTATION IS CARRIED OUT WITH FIDELITY (OR AS PRESCRIBED BY THE EXPERTS), WHICH IS A MEASURE OF IMPLEMENTATION EFFECTIVENESS, IS EXPECTED TO CONTRIBUTE TO THE ACHIEVEMENT OF POSITIVE OUTCOMES FOR CONSUMERS, WHICH ARE MEASURES OF INNOVATION EFFECTIVENESS. WE ARE VERY INTERESTED IN THE LINK BETWEEN THESE TWO CLASSES OF IMPLEMENTATION SUCCESS MEASURES.
19 Variables at multiple levels are expected to impact these two classes of outcomes. AND, AS SUGGESTED EARLIER, OUR SECOND MODEL PREDICTS THAT VARIABLES AT MULTIPLE LEVELS OF ANALYSIS ARE EXPECTED TO AFFECT THESE TWO CLASSES OF IMPLEMENTATION SUCCESS MEASURES.
20 Examples of Variables by Level. ENVIRONMENT: system and professional norms. IOR (organization with CCOE): quality of communication. ORGANIZATION: learning culture. PROJECT (re: organization, decision, implementation): availability of dedicated resources, commitment to decision to adopt, access to technical assistance. INNOVATION: scientific support, experiential evidence. For example: 1. System and professional norms are environmental-level variables expected to impact the success of implementation efforts. 2. The quality of communication between adopting organizations and their CCOE (Dee will have introduced this term earlier) is an inter-organizational-level variable with likely implications for implementation success. 3. Factors at the level of the adopting organization also are expected to be important. For example, the extent to which the organization has a learning culture that encourages employees to try new things, without fear of reprisal if they don't work out, is expected to have an impact on the success of efforts to implement innovations. 4. Variables specifically tied to the implementation of the innovation itself are particularly important. We call these variables "project-level" variables. They are different from general organizational characteristics such as organizational size or culture. Instead, project-level variables are directly connected to the implementation effort itself and involve variables such as: A. The extent to which dedicated or earmarked resources are available to support implementation; B. The extent to which the organization is committed to the decision to adopt the particular innovation; and C. The extent to which needed technical assistance is available to employees responsible for implementing the project. 5. Finally, numerous variables or characteristics of the innovation or EBP itself are expected to impact implementation success, for example, the extent to which the EBP is supported by scientific evidence, and the complexity of the EBP.
21 Model 2. Level 5: Environment; Level 4: Inter-organizational; Level 3: Adopting organization; Level 2: Project level; Level 1: Innovation level. Dependent variables: implementation effectiveness, innovation effectiveness. Thus, our second model or roadmap does two things: First, it defines implementation success as consisting of two related elements: implementation effectiveness and innovation effectiveness. Second, it identifies variables spanning multiple levels that are likely to explain the success of implementation efforts.
22 Model 3: Cross-Phase Effects on Implementation Outcomes. Another important idea underlying this research is reflected in our third model, which we've labeled Cross-Phase Effects on Implementation Outcomes. The BIG idea of this model is this: factors and processes that occur early on in the innovation adoption process can have enduring effects on the LATER PHASES AND outcomes of THE INNOVATION ADOPTION PROCESS.
23 Model 3: Cross-phase Effects. [Diagram: Initiation, Decision, and Implementation unfold along a time axis and lead to Outcomes.] What do we mean by early or later in the innovation adoption process in organizations? As depicted in this slide, Rogers and other experts in the field have identified three key phases of the process that unfold over time. We are assuming that aspects of each phase have a bearing on the outcomes of implementation. Initiation is the first stage of the innovation adoption process. It begins with an awareness of a need, problem or opportunity facing the organization that warrants action. This awareness stimulates a search for solutions, which may include innovations such as EBPs. Potential solutions then are evaluated in terms of the extent to which they are likely to suit the particular need or needs facing the organization. The initiation phase culminates with a decision, made by an organization at a particular point in time, about whether or not to adopt a particular innovation. This decision is likely to take the interests of some or many of the organization's stakeholders into account, and can be arrived at in many different ways. If the decision is "no," the process halts, at least for the time being. It may resume at a later point in time as circumstances and/or information change. If the decision is "yes," the process proceeds to the next phase: implementation. Implementation occurs after the decision to adopt an innovation has been made. The early part of the implementation phase is likely to involve working out details of the plan for getting the practice up and running, securing additional needed resources, and engaging in start-up activities such as hiring or training staff. When these activities are completed, the practice can then be put into actual use.
With experience and time, it is conceivable that the practice may become part of the ongoing organizational routine. Model 3, the cross-phase effects model, suggests that features of each of these three phases can impact the success of the implementation process.
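The three phases and the halt-or-proceed logic described above can be sketched as a small state model (an illustrative sketch only; the phase names follow the text, but the code itself is invented):

```python
from enum import Enum

class Phase(Enum):
    INITIATION = 1      # awareness of a need; search for and evaluation of solutions
    DECISION = 2        # the adopt / don't-adopt decision itself
    IMPLEMENTATION = 3  # planning, start-up activities, actual use of the practice

def next_phase(current: Phase, adopted: bool = False):
    """Advance through the innovation adoption process. A 'no' decision
    halts the process (returns None), at least for the time being; it may
    resume later if circumstances or information change."""
    if current is Phase.INITIATION:
        return Phase.DECISION
    if current is Phase.DECISION:
        return Phase.IMPLEMENTATION if adopted else None
    # With experience and time, implementation may simply become routine.
    return Phase.IMPLEMENTATION

assert next_phase(Phase.INITIATION) is Phase.DECISION
assert next_phase(Phase.DECISION, adopted=True) is Phase.IMPLEMENTATION
assert next_phase(Phase.DECISION, adopted=False) is None
```

The point of the sketch is the branch at the decision phase: the process either proceeds to implementation or halts, exactly as the narration describes.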
24 Model 3: Examples of Cross-phase Effects. [Diagram, along a time axis: Experiential Evidence (initiation), Objective Process (decision), and Access to Technical Assistance (implementation) all feed Positive Consequences.] For example: During initiation, a wide range of information is considered in arriving at potential solutions for meeting identified needs and goals. The extent to which a potential solution is backed by real-world evidence of its effectiveness (i.e., experiential evidence) is one dimension upon which potential solutions can vary, and which can account for non-trivial differences in implementation outcomes such as positive consequences for consumers. Similarly, aspects of the process by which the decision to adopt is made are likely to influence the success of the implementation. For example, the extent to which the process can be characterized as an objective decision process is expected to account for significant variability in the success of implementation efforts. Finally, characteristics of actual implementation are expected to explain significant variability in the success of implementation efforts. For instance, the extent to which implementation team members have access to needed technical assistance is certainly likely to make a difference. Thus, our third model suggests that in order to fully explain the success of implementation efforts, you need not only to consider what is happening during the implementation phase, but also need to explore factors and processes linked to earlier phases of the innovation adoption process.
25 Model 4: Effects of Implementation Variables on Outcomes Over Time. Our fourth and final model or roadmap is labeled: Effects of Implementation Variables on Outcomes Over Time.
26 Model 4: Effects of Implementation Variables Over Time. [Diagram, along a time axis: Past Implementation and Present Implementation both feed Present Outcomes.] This model incorporates two major messages: First, past implementation policies may or may not explain what is currently being seen with regard to implementation outcomes. Some past (but discontinued) implementation policies and practices, such as initial staff training, may continue to have effects on current outcomes. Other past (but discontinued) implementation policies and practices, such as praising staff for their efforts, may no longer have a bearing on implementation outcomes. Second, what is likely to matter most is present implementation policies and practices. That is, what the organization is doing now with regard to implementing supportive policies is likely to have the greatest effect on present implementation outcomes.
27 Model 4: Examples of Effects of Implementation Variables Over Time. [Diagram, along a time axis: Access to Technical Assistance (past) and Dedicated Resources (present) both feed Present Outcomes.] For example, as shown in this slide: if staff who have the responsibility of implementing an EBP had access to technical assistance in the past, that access is likely to explain some of what is being seen today with regard to implementation outcomes (i.e., present outcomes), but not as much as is explained by the extent to which dedicated resources are presently being devoted to implementation. Thus, it is the organization's present implementation-related policies and practices that are most important when it comes to understanding the outcomes that are presently being experienced. In summary, IDARP rests on four models: 1. The first IDARP model focuses on the adoption decision and conceptualizes that decision as an organizationally important decision involving risk. 2. The second IDARP model focuses on factors accounting for the success of implementation efforts. This model identifies two classes of success measures (measures of implementation effectiveness and measures of innovation effectiveness) and predicts that factors at many levels, spanning from the environment in which the organization operates to features of the innovation itself, account for implementation success. 3. The third model communicates the idea that factors and processes linked to all three phases of the innovation adoption process (i.e., initiation, the decision itself, and implementation) can explain important variability in the success of implementation efforts. And finally, 4. The fourth model emphasizes the notion that although past implementation policies and practices may continue to explain variability in present outcomes of implementation, it is present implementation-related policies and practices that are likely to be most important for understanding outcomes of implementation.
29 Four CCOEs Participating; selection criteria maximize generalizability: Cluster-Based Planning Alliance; Multi-systemic Therapy (CIP); Ohio Medication Algorithm Project; Integrated Dual Disorder Treatment (IDDT), New Hampshire-Dartmouth model. We made a big effort to pick innovative practices that were diverse, so that we could maximize the generalizability of our findings. We considered characteristics of the innovation, as well as the implementation process. For example: some require changes in one or two procedures, while others require more complex changes in organizational structure and position descriptions. Some require lots of coordination internally and with external entities (like school or SA service systems); others do not. These are the final four that we ended up with: 1. The Cluster-Based Planning Alliance CCOE, directed by Bill Rubin at Synthesis and the Ohio Council of Behavioral Healthcare Organizations. This CCOE is responsible for promoting the use of a research-based consumer classification scheme to guide staff training, consumer outcomes management, and treatment and service planning within mental health organizations. 2. Multi-systemic therapy, which is being disseminated through the CCOE called "The Center for Innovative Practices in Youth and Family Mental Health." They provide supervision, TA, and fidelity assessments to agencies adopting the Multi-Systemic Therapy model developed by Scott Henggeler and colleagues at the Medical University of South Carolina in Charleston. 3. The Ohio Medication Algorithm Project. This practice involves the adoption of medication algorithms, which are used to guide psychiatric medication decisions. The algorithms originated out of the Texas Medication Algorithm Project. The CCOE provides training and TA. 4. The IDDT model, disseminated through the SAMI CCOE. The CCOE provides TA, training, consultation and fidelity assessments to organizations adopting the IDDT/New Hampshire-Dartmouth model.
30 Research Design: Longitudinal study; Organizations at different stages of adoption; Multiple key informants at each organization; Quantitative and qualitative data; Interviews, surveys & archival data. IDARP is a longitudinal study. We are gathering data at multiple measurement intervals so that we can track the adoption and implementation process as it occurs in organizations. We are working with organizations at different adoption and implementation stages. Not all of the organizations have implemented or even adopted the EBP being considered, but all have had at least some contact with the CCOE (heard the pitch) and considered adopting. At each participating organization, we are collecting data from multiple informants. These include people involved in the decision to adopt the practice and some people involved in the implementation. We also collect data from the CCOE about the organizations that they have had contact with. We are gathering both quantitative and qualitative data, and we are using multiple data-gathering approaches, including interviews, surveys and archival data.
31 Participating Projects* by Type of Innovation. [Chart: number of projects for Cluster Alliance, IDDT/SAMI, MST, and OMAP.] This slide represents the number of participating organizations broken down by type of innovation. You can see that we have a fairly good representation from each EBP. The large number of agencies adopting the IDDT model is due in part to start-up grants that were provided to 13 sites in 1999 to adopt the IDDT model, and in part to Ohio's involvement in the CMHS-funded Toolkit Project at the New Hampshire/Dartmouth Psychiatric Center. *18 organizations involved in multiple projects; total of 74 organizations with 91 projects under study.
32 Participating Projects by Stage of Adoption at Time One. [Chart: Never, Wait & See, Adopter, Implementer, De-adopter; N = 91.] As I mentioned earlier, we are collecting data from organizations at a number of different adoption or implementation stages. You can see from this slide that we have five adoption categories. The first two are the non-adopters. The "non-adopter/never" group includes those organizations that indicated that they decided not to adopt the innovation and have no intention of reconsidering in the future. The "non-adopter/wait-and-see" group includes folks who indicated that they are in the midst of considering whether to adopt, or who may have put the decision "on hold" and will wait to see if conditions change and adoption becomes feasible. The adopter group includes those who have just decided to adopt and are beginning to make plans for implementation. The implementer group includes those who are in the process of getting the practice up and running and those who have fully implemented. You can see that this is the largest group so far. We are hoping to increase our numbers in the other groups, so that we can have a better representation of these views. The final group are organizations that have discontinued use of the EBP. We are calling this group the de-adopters. At this point we have interviewed people from two different de-adopter agencies, but have three more scheduled in the next several months. Those interviews have been really interesting to do, and even though the number may be small, the richness of the qualitative data from these interviews is likely to provide a lot of information about barriers to implementation.
33 Participating Projects by Stage of Adoption at Time Two. [Chart: Never, Adopter, Wait & See, Implementer, De-adopter; N = 50.] This represents the number of projects by stage at T2. You can see that most of the people we have contacted at T2 are implementers. As a rule, we only followed up with sites that were in the wait-and-see stage, adoption stage or implementer stage at T1. We are still collecting T2 data, so these numbers may change, but for the most part, T2 data is from sites in the implementation stage, and this is by design.
34 Key Informants by Level at Time One. [Chart: CCOE, Community Collaborative, Decision maker, CFO/QA, Implementer; N = 369.] This next slide represents the number of surveys and interviews by respondents at different levels of each organization. For adopting organizations, we typically collect data from 3 to 5 key informants. At non-adopting organizations we typically only collect data from 2 or 3 people. We try to obtain information from people at different levels. Community collaboratives: for practices such as MST and, to some extent, SAMI, where adopting organizations have to collaborate with boards or other community systems in order to adopt the practice, we try to collect data from key people in these other systems who have played a part in the decision to adopt the practice. We call these the community collaboratives. Decision-makers include people like executive directors, medical directors, and clinical supervisors. Implementers include the front-line staff who are directly involved in implementing the practice. For instance, in the case of Clustering, the implementers are the case managers and case manager supervisors who are learning to group clients by cluster. In the case of SAMI and MST, these may be the team leaders, and for OMAP these may be the psychiatrists who are utilizing the medication algorithms. We try to get at least one CFO from each agency so that we can get some general information about the environment and organizational structure. We also ask the CCOEs to complete a questionnaire about each organization that participates in the study. Eventually we plan to look more closely at the level of agreement between these different informants. Response rates: interviewee surveys, 88%; CFO surveys, 87%; CCOE, 95%.
35. Key Informants by Level at Time Two
At T2 there are more implementers and fewer decision-makers and community collaboratives; here are the reasons behind the change. At T1 we tried to get at least four informants per site -- typically a decision-maker, implementers, a CFO, and the CCOE. For sites that had not yet begun to implement, or were non-adopters, we typically had three informants (decision-maker, CFO, CCOE). At T2 we typically had three: no CFO, fewer decision-makers, and more implementers. We are still collecting T2 data from the CCOEs.
[Chart: CCOE / Community Collaborative / Decision-maker / Implementer; N = 135]
36. Findings
Now we will begin looking at findings as they relate to the four models, or roadmaps, presented earlier.
[Slide graphic: sample survey responses, e.g., "6", "$37,500", "22", "agree", "strongly disagree", "very satisfied"]
37. Do the Data Support Our Four Models?
The general question we are considering is: do the data support our four models?
38. The Tip ... of the Tip
It is important to note that the findings presented today represent only the tip of the iceberg. As you know, this project is a work in progress, and we are continuing to gather data as we speak. In addition, we are examining a very complex dataset that calls for sophisticated data-reduction and longitudinal analyses. We are not there yet; with regard to analyses, we are at the early stages of exploring these data. As a result, what we will be sharing with you today are some very basic descriptive findings and some preliminary looks at how variables relate to one another. Even so, we believe these findings are interesting and valuable.
39. Positive Correlation
As the value of one variable increases, the value of a second variable also increases (r = +1.00).
Because many of the findings are reported as simple bivariate correlations -- that is, measures of association between two variables -- it is worth a minute to review the meaning of the correlation coefficient. Correlation coefficients can range from -1.00 to +1.00. This slide shows a perfect positive correlation (i.e., r = +1.00) between two variables: years of formal education and median income. The positive coefficient means that as the value of one variable (years of education) goes up, so does the value of the second (median income).
[Chart: y-axis Median Income, Lower to Higher; x-axis Years of Formal Education, Less to More]
40. Negative Correlation
As the value of one variable increases, the value of a second variable decreases (r = -1.00).
This slide shows a perfect negative correlation (i.e., r = -1.00) between two variables: years of formal education and unemployment rate. The negative coefficient means that as the value of one variable (years of education) goes up, the value of the second (unemployment rate) goes down.
[Chart: y-axis Unemployment Rate, Lower to Higher; x-axis Years of Formal Education, Less to More]
41. Zero Correlation
The relationship between the value of one variable and the value of a second variable is random (r = 0.00).
This slide shows a zero correlation between two variables: years of formal education and height. The scattering of data points indicates that there is no systematic link between the value of one variable (formal education) and the second (height). In other words, knowledge about one variable does not help you predict the value of the second.
[Chart: scatterplot; y-axis Height, Shorter to Taller; x-axis Years of Formal Education, Less to More]
42. Correlation Cannot Be Determined
When the value of one (or both) variables is constant, or almost constant, the correlation cannot be determined.
In some cases the correlation between two variables cannot be determined due to a lack of variability in one or both of the measures. For example, if we are interested in the link between years of formal education and unemployment rate and we sample only individuals with bachelor's degrees, we cannot determine the link, because we do not have full representation on one of our variables of interest. The important message is this: if you do not find an expected link between two variables, do not assume that the link does not exist -- first check that there is adequate variability on both measures. This is a common situation encountered in research.
[Chart: y-axis Unemployment Rate, Lower to Higher; x-axis Years of Formal Education = BA, BS]
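The four cases reviewed above can be sketched numerically. This is a minimal illustration with made-up education, income, and unemployment values (hypothetical data, not IDARP data), using the Pearson coefficient r that the slides describe:

```python
# Minimal sketch of the four correlation cases: perfect positive, perfect
# negative, and the undetermined case where one variable is constant.
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient; None if either variable is constant."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:          # no variability -> r cannot be determined
        return None
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

education = [12, 14, 16, 18, 20]
income    = [30, 40, 50, 60, 70]    # rises in step with education
unemploy  = [10,  8,  6,  4,  2]    # falls as education rises
degree    = [16, 16, 16, 16, 16]    # BA-only sample: constant variable

print(round(pearson_r(education, income), 2))    # 1.0
print(round(pearson_r(education, unemploy), 2))  # -1.0
print(pearson_r(degree, unemploy))               # None
```

The constant-variable case returning `None` mirrors the restricted-range warning on the slide: with no variability on one measure, no correlation can be computed.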
43. The Adoption Decision (Model 1)
We will now consider the extent to which preliminary analyses support our first model, which deals with the decision to adopt the four EBPs studied in IDARP.
44. Time 1 / First-Contact Data
This analysis is based on our Time 1, or first-contact, data because it focuses on the adoption decision; roughly 90 projects are represented.
45. Phase 1: A Decision Under Risk
Model 1 predicts that the decision to adopt innovations is made under conditions of risk. Further, it predicts that the likelihood of adopting (as indicated by stage: Never / Wait & See / Adopter / Implementer) would be:
1. Negatively related to the perceived risk of adopting. It was: the project-level coefficient is -.51.
2. Positively related to the organization's capacity to manage or absorb risk. It was: +.38.
3. Positively related to the organization's past propensity to take risks. It was: +.20.
Thus, preliminary analyses lend support to these three primary linkages in Model 1.
[Diagram: Perceived Risk of Adopting (-.51), Capacity to Manage or Absorb Risk (+.38), and Risk-taking Propensity (+.20) → likelihood of implementing as indicated by stage]
46. Antecedents to Risk Perceptions
In addition to measuring these three factors directly, we also considered a number of antecedent variables that we expected to be linked to these three risk-related assessments. We were interested in antecedents because they can give us clues about what might be done to alter these risk-related perceptions about EBPs. We will first consider antecedents to perceived risk. As you will see, some antecedents to perceived risk are linked to the innovation itself (i.e., are innovation-level factors), others to the organization and/or project, and others to the environment in which the organization operates.
[Diagram: antecedents → Perceived Risk of Adopting (-.51), Capacity to Manage Risk (+.38), Risk-taking Propensity (+.20) → likelihood of implementing as indicated by stage]
47. Antecedents to Perceived Risk
Innovation-level factors: relative advantage (-.51), scientific evidence (-.20), experiential evidence (-.30). Organization-level factor: knowledge set (-.43). Environmental factor: norms for adoption (-.45).
As expected, negative relationships were found between three innovation-level factors and perceived risk. The perceived risk of adopting an EBP is seen as lower when: (1) perceived relative advantage -- the extent to which benefits are seen as outweighing costs -- is high; (2) there is scientific evidence attesting to the effectiveness of the practice; and (3) there is experience-based evidence (i.e., success stories from organizations that have tried it) attesting to its effectiveness. At the organization level, perceived risk is seen as lower by organizations that employ staff well versed in the innovation (i.e., that have an existing knowledge set about it). Finally, perceived risk is seen as lower when professional and system norms are seen as favoring adoption of the practice. (Side note: the effects of this set of antecedents on implementation stage are fully mediated by perceived risk.)
48. Antecedents to Risk Management
EBP-level factors: ease of use (+.45), craft skills (+.25). Organization/project-level factors: top management support (+.50), dedicated resources (+.63). Environmental factor: environmental uncertainty (-.22).
Additional expected relationships were found between antecedents and reported capacity to manage the risks associated with implementation. Positive relationships were found for two EBP-level factors: (1) ease of use -- capacity to manage risk is seen as higher when the innovation is thought to be easy to put into use; (2) craft skills -- similarly, capacity is seen as higher when the belief exists that most people trained to implement can do so consistently and competently. Positive relationships were also found at the organization/project level: top management support for the innovation, and dedicated resources -- capacity to manage risk is seen as higher when there are resources specifically earmarked to support implementation efforts. An expected negative link was also found: when the organizational environment is viewed as hard to predict (environmental uncertainty is high), capacity to manage risk is seen as lower. (Side note: capacity to manage risk fully mediates the effects of these antecedents on stage, and its own effects on stage work through perceived risk.)
49. Antecedents to Risk Propensity
Organization-level factors: learning encouragement and managerial attitude about change (coefficients +.71 and +.23).
Finally, two antecedents to organizational risk propensity were examined: (1) learning encouragement -- the extent to which the organization tends to encourage and reward staff for trying new things, and does not punish staff when those efforts fail; and (2) management attitude toward change -- the extent to which managers of the organization generally believe that change results in good things. Positive relationships were predicted, and found, between these two antecedents and the organization's reported propensity to take risks.
50. Summing Up: Model 1
To sum up what the data say about Model 1, our model of the adoption decision:
1. The adoption decision is a decision involving risk.
2. Organizations are more likely to adopt if: the perceived risk of adopting is low; the capacity to manage risk is high; the propensity to take risks is high.
3. Antecedents have implications for action.
51. Model 2: Implementation Phase
Understanding outcomes of implementation for adopters and implementers. We also examined whether the primary linkages in our second underlying model were supported by the data. As you may recall, the second model is concerned with outcomes related to implementation; as a result, it is relevant only to organizations choosing to adopt.
52. Time 2 / Second-Contact Data
The findings reported now are based on information gathered at the time of second contact with the organizations that chose to adopt an EBP. Roughly 50 projects are represented in these analyses.
53. Two Classes of Outcomes
Recall that, with regard to Model 2, we are interested in two broad classes of implementation outcomes that are expected to be related to the overall success of implementation efforts:
1. Implementation effectiveness (e.g., fidelity, assimilation).
2. Innovation/practice effectiveness (e.g., positive outcomes for consumers).
54. Is Implementation Effectiveness Related to Innovation Effectiveness?
Reinvention¹ → positive outcomes: -.64. Assimilation → positive outcomes: +.61.
We mentioned that the link between these two classes of measures is often assumed rather than directly studied. We took a direct look at these linkages, and the findings lend support to our prediction that implementation effectiveness is related to innovation effectiveness. In fact, we found a negative link between the extent to which the practice had been modified from its prescribed form (i.e., the extent it was reinvented) and reported positive outcomes, and a positive link between the extent to which the practice had been assimilated into the organization's routine and positive outcomes. Thus, early analyses support the expected link between these two classes of outcomes.
¹ Self-report; reflects the extent to which the practice was modified.
55. Model 2: Levels of Analysis
Level 1: innovation; Level 2: project; Level 3: adopting organization; Level 4: inter-organizational; Level 5: environmental.
Our second model also hypothesized that these two classes of implementation outcomes would be explained by variables spanning multiple levels of analysis. Simple bivariate correlations from our second-contact data lend support to this idea. Dependent variables: implementation effectiveness and innovation effectiveness.
56. Assimilation: One Measure of Implementation Effectiveness
Let's first look at factors linked to one measure of implementation effectiveness: assimilation. Recall that assimilation is defined as the extent to which the practice/EBP is viewed as permanent, or part of ongoing organizational operations.
57. Is Assimilation Explained by Variables at Multiple Levels?
Our first Model 2 question: is assimilation linked to, or explained by, variables at multiple levels?
58. Some Examples
Dyad: communication quality (+.45). Organization: learning culture; centralization. Project: dedicated resources; ease of use. Innovation: fit with treatment philosophy (+.45). Dependent variable: assimilation (the extent to which the practice is seen as part of permanent operations).
The answer is a resounding yes: assimilation was found to be positively related to variables spanning multiple levels of analysis. Positive relationships were found between assimilation and: at the dyad (inter-organizational) level, the quality of communication between the CCOE and the adopting organization; at the organizational level, the extent to which the organization is seen as having a "learning culture," and the extent to which decision making is centralized (i.e., decision-making authority is in the hands of only a few top managers); at the project level, the extent to which specific resources are earmarked or dedicated to implementing the EBP, and the extent to which implementing the EBP is reported to be relatively easy; and, at the innovation level, the extent to which the EBP is seen as generally compatible with the organization's treatment philosophy.
59. Variables at multiple levels are related to reported assimilation. Thus, as expected, implementation effectiveness, as measured by assimilation, is related to factors at multiple levels of analysis.
60. Are Views About Positive Outcomes Explained by Variables at Multiple Levels?
Our second Model 2 question: are views about positive outcomes (our measures of innovation effectiveness) explained by variables at multiple levels?
61. Positive Outcomes
For purposes of this presentation we consider six indicators of positive outcomes -- the extent to which implementation of the EBP is seen as:
1. Leading, overall, to positive consequences.
2. Leading to positive outcomes for consumers.
3. Having a positive impact on the organization's image.
4. Having a positive impact on the organization's functioning.
5. Having an overall positive impact.
6. Meeting expectations (the extent to which implementation-related expectations have been realized).
62. Some Examples
Dyad: identification (+.40 to +.60). Organization: risk management capacity (up to +.40). Project: performance monitoring (up to +.74); access to technical assistance (up to +.66); reinvention (down to -.49). Innovation: scientific evidence (up to +.60). Dependent variable: positive outcomes.
Bivariate correlations indicate that variables at multiple levels of analysis are significantly related to these six indicators of positive outcomes of implementing the EBP. In the examples shown, linkages are significant with all six outcomes; only the range of correlation coefficients is shown. At the dyad (inter-organizational) level, the extent to which staff at the implementing organization can identify with CCOE staff is positively related to reported positive outcomes. At the organization level, the extent to which the organization is seen as having the capacity to manage the risks of implementing (e.g., deal with problems and barriers) is positively related to reported positive outcomes. At the project level, the extent to which performance monitoring occurs with regard to implementation, and the extent to which implementers have access to needed technical assistance, are positively associated with positive outcomes. And at the innovation level, the extent to which the EBP is seen as supported by scientific evidence is linked to reported positive outcomes. One variable was found to be negatively associated with reported positive outcomes: reinvention, a project-level variable reflecting the extent to which the practice was modified from its prescribed form during implementation.
63. Variables at multiple levels are related to perceived positive outcomes. Therefore, with regard to our second model, we found that factors at multiple levels are directly related to innovation effectiveness as measured by six different indicators of positive outcomes of implementing EBPs. These preliminary analyses lend preliminary support to Models 1 and 2, but more sophisticated data analyses are needed to fully test these models. I will now turn the podium over to Dushka, who will discuss the extent to which preliminary analyses lend support to our third and fourth models. Thank you.
64. Model 3: Cross-Phase Effects on Implementation Outcomes
Understanding the effects of initiation-phase and decision-phase variables on implementation outcomes. Model 3 deals with factors in the initiation and decision phases that have long-term effects on implementation outcomes.
65. Model 3: Cross-Phase Effects on Implementation Outcomes
Timeline: Initiation (Time 1) → Decision (Time 1) → Implementation → Outcomes (Time 2).
We are focusing on how initial perceptions about the practice and the CCOE, and the process by which organizations make the decision to adopt the practice, may affect the practice's long-term outcomes. Whereas Model 2 involved T2 data only, here we use both T1 (first-contact) and T2 (second-contact) data. We are looking at how approaches and strategies used during the initiation and decision phases (at T1) are related to outcomes nine months down the road, at T2. Again we looked at two outcomes: assimilation (our measure of implementation effectiveness) and positive outcomes (our measure of innovation effectiveness).
66. Model 3: Initiation-Phase Effects
First we will look at how things that go on during the initiation phase -- when organizations are initially exposed to, and begin considering, a new practice -- can affect outcomes.
[Timeline: Initiation (Time 1) highlighted → Decision → Implementation → Outcomes (Time 2)]
67. Initiation-Phase Effects on Assimilation
Expected benefits (+.44); relative advantage; trust in the CCOE; results demonstrability (+.49).
This slide shows the effects of initiation-phase variables on assimilation. Again, assimilation is defined as the extent to which the practice is seen as a permanent part of the way the organization conducts business, and is our measure of implementation effectiveness. The numbers on this slide are all bivariate correlation coefficients, and all are significant. Long-term assimilation was greater if staff had high expectations about the benefits of implementing -- here "expected benefits" means staff indicated they were motivated to implement by things like improving consumer outcomes, improving the quality and efficiency of service, and political or strategic benefits for their organization. Assimilation was also greater if initial perceptions of the advantages of implementing outweighed the disadvantages, and it was related to initial beliefs about the CCOE -- for instance, assimilation was greater if agency staff initially perceived that the CCOE could be trusted and had no hidden agenda or motives. Finally, assimilation was related to the belief at the onset that the outcomes of implementation would be demonstrable, or tangible.
68. Initiation-Phase Effects on Positive Outcomes
Expected benefits (up to +.69); relative advantage (up to +.74); trust in the CCOE (up to +.57); results demonstrability (+.26 to +.51).
This slide shows the relationship between these same initiation-phase variables and positive outcomes for consumers and the organization at second contact -- our measure of innovation effectiveness. The numbers represent the range of correlation coefficients across our five positive-outcome variables, and you can see the coefficients are quite high. Positive outcomes at second contact were related to the number and strength of the benefits expected during the initiation phase, perceptions of the relative advantages of implementing the practice, trust in the CCOE, and the demonstrability of results -- the extent to which the expected results were tangible.
69. Model 3: Decision-Phase Effects
This next group of variables has to do with things that happen early on, during the decision phase, and how the way the decision was made can affect long-term outcomes.
[Timeline: Initiation → Decision (Time 1) highlighted → Implementation → Outcomes (Time 2)]
70. Decision-Phase Effects on Assimilation
Objective decision; information access; internal influence; organizational commitment (+.37).
This slide shows the relationship of decision-phase variables to assimilation at second contact. Assimilation is related to the extent to which the decision to adopt was based on objective, rather than subjective, decision-making strategies. Assimilation was also greater if organizations had access to high-quality information during the decision phase to assist them in making the adoption decision, and when internal staff were involved in, and had influence on, the decision-making process (the internal-influence variable). Finally, assimilation at T2 was related to the extent to which organizational leadership supported, and demonstrated commitment to, the decision to implement the practice at T1.
71. Decision-Phase Effects on Positive Outcomes
Objective decision (up to +.71); information access (up to +.61); internal influence (up to +.46); organizational commitment (up to +.52).
This slide shows how these same decision-phase variables are related to positive outcomes at T2. Again, the numbers represent the range of correlation coefficients across the five positive outcomes considered, and they are all quite high. Positive outcomes at T2 were more likely when the decision to adopt was viewed as objective, the decision-makers had access to the information needed to make the decision, internal staff were involved in the decision-making process, and organizational leadership demonstrated commitment to the adoption decision.
72. Variables in earlier phases can have enduring effects on implementation outcomes. To sum up Model 3: things that happen, and strategies used, early on -- when people are initially considering and making decisions about whether to adopt the practice -- really matter for long-term outcomes.
73. Model 4: Understanding Effects of Implementation-Phase Variables Over Time
Now we move on to look at the effects of implementation-phase variables. Our interest here is in understanding how implementation-phase variables affect outcomes over time.
74. Model 4: Effects of Implementation Variables Over Time
We know that implementation tactics can change from one time point to another. For example, leadership may support and closely monitor implementation early on, but then turn its attention to other issues as the practice gets up and running. We suspected that the effects of some implementation-phase variables are time-dependent -- that is, when looking at current outcomes, the implementation climate in the past is not as important as the present implementation climate. To examine this, we looked at the extent to which outcomes were related to the implementation climate in the past (T1) and in the present (T2). What we found was that, in many cases, only present implementation characteristics affected outcomes: the current implementation climate is what mattered most. Past implementation characteristics had a much smaller effect, or no effect at all, on current outcomes -- represented here by a dotted arrow from past implementation to present outcomes and a wider solid arrow from present implementation to present outcomes. A few examples follow.
[Diagram: Past Implementation (dotted arrow) and Present Implementation (solid arrow) → Present Outcomes]
75. Model 4: Top Management Support Over Time
This slide shows how top management support affects outcomes over time. Past levels of top support are not related to any of the present outcome variables (hence no arrow from past top support to present outcomes). On the other hand, present levels of top support were significantly related to all of the T2 outcomes. So it appears that having top support initially is not enough to influence outcomes nine months down the road: the current level of support is what matters most, and top support has to be continuous, or enduring, to maintain good outcomes. (Top support T1 and T2, r = .10, ns.)
76. Model 4: Freedom to Express Doubt Over Time
Next variable: freedom to express doubt. Here again, currently having freedom to express doubts about the practice was related to all of the positive outcomes, but having it initially, nine months earlier, was related to only about half of the current outcomes. Outcomes were related to both past and current implementation climates, but more consistently to the present climate -- so it is best if it is continuous. (Freedom to express doubt T1 and T2, r = .42.)
77. Model 4: Access to Technical Assistance Over Time
You can see a similar pattern for access to technical assistance. The level of access to TA initially (at T1) was related to fewer than half of the current outcomes, while present levels of access were related to all of them. Just having it initially is not enough; it has to be continuous to maintain higher outcomes. (Access to TA T1 and T2, r = .49.)
78. Model 4: Dedicated Resources Over Time
Here is the last example. Having dedicated resources for implementation nine months ago is related to none of the current outcomes; having dedicated resources at present is related to about half of the outcomes. So again, the current climate matters most: initial support alone is not enough, and support has to be maintained. (Dedicated resources T1 and T2, r = .53.)
79. So the implication is that implementation strategies need to be sustained in order to have positive impacts on long-term outcomes.
80. Shifting Gears: Comparing Different EBPs at Time One
Now we are going to shift gears. Until now we have been describing how different variables can affect outcomes, which gives us an idea of things we can do to increase our chances of having good outcomes. But we also know that some of these things are more of an issue, or more relevant, for some practices than for others. For instance, finding resources to support implementation may be more of an issue for practices that require more complex changes in organizational structure and jobs. So in this last section I am going to show you some comparisons among the four EBPs examined in this study, to get an idea of the issues and problems confronting organizations adopting each practice. We are using only T1 (first-contact) data here. The results are consistent at T2, for the most part, but we have our largest number of participants at T1, so we focused on those data.
81. Do Adopting Organizations Hold Similar Views About the Four Practices?
Groups compared: Clustering (n = 23), MST (n = 16), OMAP (n = 15), IDDT/SAMI (n = 16), and IDDT/SAMI with initial funding (n = 12*).
The basic question is: do adopting organizations hold similar views about the four practices? Note that the organizations adopting IDDT were divided into two groups. Twelve sites received start-up funding for initial implementation and have been implementing for several years; the other sites received no, or only minimal, outside funding (these include traditional, national-toolkit, and BHO sites). Because we wanted to examine the effects of receiving this funding, we considered the funded sites separately. The next several slides are comparisons among these five groups.
* 9 funded demonstrations; 12 projects.
82. Organization: Organizational Commitment (MST, IDDT & IDDT-Funded > OMAP)
This slide represents perceptions of organizational commitment. Participants responded to statements like, "The organizational leadership has been willing to put forth a great deal of effort to see that the decision to implement IDDT is successful." You can see that ratings are positive overall (5 = somewhat agree, 6 = agree). Sites adopting MST and IDDT reported more organizational commitment than sites adopting OMAP; organizations adopting Clustering fell in the middle and did not differ from any of the other groups.
[Chart: IDDT, MST, OMAP, IDDT-Funded, Clustering; scale Strongly Disagree to Strongly Agree]
83. Innovation: Experiential Evidence (MST, IDDT-Funded & IDDT > CBP > OMAP)
This slide deals with experiential evidence. Participants responded to statements like, "The experience of other organizations implementing IDDT convinced me of its effectiveness," and "Organizations that have implemented it have evidence that it is an effective approach." Mean ratings overall were between 4 (neither agree nor disagree) and 6 (agree). Organizations adopting MST and IDDT were more likely to indicate that there was substantial experiential evidence than organizations adopting Cluster-Based Planning, and organizations adopting OMAP were lowest in perceived experiential evidence.
[Chart: IDDT, MST, OMAP, IDDT-Funded, Clustering; scale Strongly Disagree to Strongly Agree]
84. Innovation: Scientific Evidence (MST, IDDT-Funded & IDDT > CBP & OMAP)
The next construct has to do with perceptions of the extent of scientific evidence supporting these practices ("There is substantial scientific evidence supporting the effectiveness of this practice"). You can see a similar pattern here, with organizations adopting MST and IDDT perceiving more scientific evidence of the effectiveness of these practices than organizations adopting CBP or OMAP.
[Chart: IDDT, MST, OMAP, IDDT-Funded, Clustering; scale Strongly Disagree to Strongly Agree]
85. Innovation: Magnitude of Change Required to Implement (MST, IDDT-Funded & IDDT > CBP & OMAP)
This slide represents participants' perceptions of the complexity of the changes required to implement the model. Notice that the scale here is different: responses were weighted so that changes in things like organizational structure are weighted higher than rule and procedural changes. The scale ranged from 1 to 15, with 1 meaning simple changes in rules or procedures and higher scores meaning more complex changes in jobs, department and organizational structure, and capital allocations. On average, sites adopting OMAP and Clustering reported making fewer complex changes than organizations adopting MST and IDDT.
[Chart: IDDT, MST, OMAP, IDDT-Funded, Clustering]
86 Innovation: Fidelity Seen as Crucial to Implementing the Practice. MST > OMAP, CBP, IDDT & IDDT-FUNDED. The next construct has to do with the extent to which fidelity was seen as crucial to implementing the practice. You can see that organizations adopting MST were more likely to perceive fidelity as crucial than organizations adopting the other practices. This is consistent with what we know about MST, which has strict fidelity guidelines and supervision. A score of 3 = disagree somewhat, 4 = neither agree nor disagree, 5 = agree somewhat, 6 = agree. Sites adopting MST were more likely to agree. The other sites were more likely to indicate that they neither agreed nor disagreed, with some IDDT sites indicating that they disagreed somewhat. ("Maintaining fidelity is critical to getting expected results.")
87 Implementation: Resources* for Initial Implementation. No differences. This next slide represents the extent to which sites had the resources needed for initial implementation of the EBP, including money, personnel, and time for planning and training. You can see that there were no significant differences between groups, though IDDT-funded sites received some grant funding for start-up. The mean score was between 4 = neither agree nor disagree and 6 = agree that sites had resources for initial implementation. ("This organization had the resources necessary to support initial implementation.") * money, personnel & time
88 Implementation: Resources* for Ongoing Implementation. OMAP > CBP, MST & IDDT > IDDT-FUNDED. This next slide represents the extent to which sites perceived that they had the resources needed for ongoing implementation of the model. Scores are about the same as they were for initial implementation, but the picture looks different for the funded sites. These sites received funding to cover initial implementation, but are continuing with agency-only funds. It looks like they are having more trouble than other sites accessing funding for the long haul. ("This organization has the resources and manpower to support ongoing implementation.") * money, personnel & time
89 Implementation: Problems Recruiting Staff. MST > CBP, OMAP, IDDT & IDDT-FUNDED. Participants reported the extent to which implementation was hindered by personnel recruitment problems, such as finding willing and qualified people to implement the model. We used a different 10-point scale here, from no extent to great extent. You can see that this was a much greater problem for MST sites than for sites adopting the other practices. Sites adopting MST indicated that this was a problem to a considerable or great extent, while the other sites indicated a small to moderate extent.
90 Implementation: Reinvention. No differences. This construct assesses the extent to which participants believed that their organization departed from what was prescribed by the developers of the EBP, versus implementing it "to the letter" as prescribed. No difference: moderate (4), small (3), or very small extent (2). ("Extent to which the organization made modifications to the way the practice was implemented." "Extent to which the most critical and central elements of the model were implemented.")
91 Outcome: Assimilation. CBP, OMAP & IDDT > IDDT-FUNDED. This slide represents participants' ratings of assimilation, or the extent to which the practice has become a permanent part of the way the organization conducts business. You can see that the IDDT-funded sites reported lower levels of assimilation than the other sites. Again, this may be related to the funding issue. Without continued funding, these sites have had difficulty creating the structures to keep the implementation going. ("Extent to which the practice has become a permanent part of the way the organization conducts business.") So you can see that there are a number of differences in the experiences that organizations are having as they try to implement these practices.
92 A Peek at Interview Data (Qualitative). So far you have been introduced to the theoretical background and quantitative support for the IDARP models. Now we'll add some color to this picture by looking at some of the qualitative interview data.
93 Approach. The interview team produces transcripts from interviews with multiple agency sources. Qualitative "codes" are attached to the text in the Atlas.ti software program, such as: issue diagnosis & decision process; planning process for EBP implementation; facilitators & barriers; expected/unexpected, positive/negative outcomes. Unit of analysis = "mentions," or coded phrases. Our approach to the qualitative data is as follows: the interview team produces transcripts from interviews with multiple agency sources. The interview is then imported into our qualitative software and codes are attached to the text. Currently, we have 128 distinct codes to categorize issue diagnosis, the decision and planning processes, and several types of outcomes. Today's discussion will center on facilitators and barriers mentioned during the interviews. The unit of analysis here is a "mention," or a coded phrase.
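The tallying step described above, counting coded "mentions" by category and by facilitator/barrier valence, can be sketched in a few lines. This is only an illustration: the sample data, category names, and code below are hypothetical and are not part of the actual IDARP coding scheme or the Atlas.ti workflow.

```python
from collections import Counter

# Hypothetical coded "mentions" as (category, valence) pairs, standing in
# for phrases tagged in qualitative software. The real study used 128
# distinct codes; these three categories are made up for illustration.
mentions = [
    ("CCOE", "facilitator"), ("CCOE", "facilitator"), ("CCOE", "barrier"),
    ("Money", "barrier"), ("Money", "barrier"), ("Money", "facilitator"),
    ("Staff", "barrier"),
]

def tally(mentions):
    """Count facilitator and barrier mentions per category."""
    counts = Counter(mentions)  # counts each (category, valence) pair
    categories = sorted({cat for cat, _ in mentions})
    return {
        cat: {"facilitators": counts[(cat, "facilitator")],
              "barriers": counts[(cat, "barrier")]}
        for cat in categories
    }

print(tally(mentions))
# e.g. {'CCOE': {'facilitators': 2, 'barriers': 1}, ...}
```

The same per-category facilitator-to-barrier totals shown on the slides that follow come from this kind of count, with a coded phrase as the unit of analysis.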
94 Focus of Today's Look at Qualitative Data. Data collected during time one/first contact with 36 projects (~3 interviews per project). Projects by stage: 18 Implementers, 7 Adopters, 7 Wait & See/Never, 4 De-adopters. Projects by EBP: 13 IDDT, 9 Cluster-Based Planning, 7 OMAP, 7 MST. Our focus today is data collected during first contact with 36 projects, approximately three interviews per project. This is out of 96 total first contacts made to date, so this data represents more than a third of the qualitative data. This slide also shows the breakdown of projects included in this analysis by stage and by EBP (can read these if there is time).
95 Glossary of Categories. CCOE: relating to the CCOE, its staff, and the services it provides. EBP: perceptions relating to the innovation. Money: expenses (actual or anticipated), funding of the EBP, and financial issues that impact the agency. Staff: reactions, recruitment, retention, and qualifications of staff. System: coordination, collaboration, and interest in the Mental Health and other related systems. First we'll look at facilitators and barriers in terms of five categories: the CCOE, the evidence-based practice itself, money, staff, and the system within which the project operates.
96 CCOE: Major Themes. Facilitators (n = 119): attended a CCOE presentation / became aware of the CCOE / had previous experience with it (n = 55); CCOE provides instrumental help (n = 50); positive reaction to the CCOE (n = 11). Barriers (n = 36): CCOE doesn't understand the agency's issues or constraints (n = 9). We'll start with facilitators and barriers related to the CCOE. At the top of each column, the total number of facilitator or barrier mentions is shown. Underneath that is a list of major themes. Because the list only shows the most frequently mentioned issues, the number of mentions in each column will not equal the total. To be included as a major theme, an issue had to be mentioned for two or more of the EBPs. (Read the major themes for facilitators and barriers for the CCOE.)
97 CCOE: Mentions by Stage. Here, the facilitator and barrier mentions about the CCOE are broken down by stage. Notice that the scale along the left side (y axis) of the slide goes to 150, although the highest bar only goes to 51. The scale will remain constant through this part of the presentation so that, as we go through the slides, you can make comparisons across categories. Regarding CCOEs, across all four stages, facilitators greatly outnumber barriers.
98 EBP: Major Themes. Facilitators (n = 225): received training or information (n = 53); EBP is a good match with our culture or systems, or similar to what we already do (n = 24); EBP might be useful (n = 19). Barriers (n = 153): don't know how to proceed / in the dark (n = 24); EBP isn't a good fit for this organization (n = 13); EBP might NOT be useful (n = 13). Here are the major themes relating to the evidence-based practice. There are 225 facilitator mentions, including (read the three in this column). Under barriers, the first major theme is a lack of understanding of how to proceed, or feeling "in the dark." The other two barriers illustrate that frequently the facilitator and the barrier are flip sides of the same issue. As a facilitator, we have that the EBP is a good match; as a barrier major theme, we have that the EBP is NOT a good fit for the organization. Under facilitators, the EBP might be useful, while under barriers, the EBP might NOT be useful.
99 EBP: Mentions by Stage. Only de-adopters mention more barriers related to the EBP than facilitators. Adopters and implementers mention many more facilitators than barriers.
100 Money: Major Themes. Facilitators (n = 69): received funds (n = 31); identified potential funds (n = 24); there is a potential savings from the EBP, though not necessarily for the agency (n = 6). Barriers (n = 115): agency has financial issues / the EBP costs money (n = 80); funding for the EBP is not sustainable (n = 29); our funding shrunk, was lost, or ended (n = 11). Basically, read this slide.
101 Money: Mentions by Stage. There are more barriers than facilitators at all stages regarding money. Wait & See/Never and de-adopters mention MANY more barriers than facilitators.
102 Staff: Major Themes. Facilitators (n = 122): staff is interested in, supportive of, or likes the EBP (n = 25); staff hired for the program (n = 18); staff thinks the EBP makes sense (n = 8). Barriers (n = 171): recruitment and turnover issues (n = 67); resistance to the EBP, skepticism, lack of interest (n = 44); competing priorities, e.g., innovation vs. productivity (n = 15). Again, basically read this slide. After the third barrier: for example, the staff has to balance providing quality care in the implementation of this EBP with the constant demand for high productivity.
103 Staff: Mentions by Stage. Only adopters mention more facilitators than barriers regarding staff issues.
104 System: Major Themes. Facilitators (n = 106): support for and interest in the system (n = 34); collaboration, cooperation, communication, and integration in the system (n = 33). Barriers (n = 82): lack of support / no interest (n = 29); conflict, lack of collaboration between important entities, no communication (n = 29); competing priorities and turmoil in the system (n = 9). This category refers to the system in which the project operates. Depending on the EBP, the system might include the Mental Health Board, the Drug & Alcohol system, the justice system, member agencies of the Family & Children First Council, or local mental health hospitals. (Read slide.)
105 System: Mentions by Stage. Adopters and implementers mention more facilitators than barriers for this category, a trend not carried through to Wait & See/Never and de-adopters.
106 Summing Up: Facilitator/Barrier Analysis by Category. Overall, facilitators were mentioned more often than barriers (641:557). EBP: the category with the most mentions of facilitators (225). Staff: the category with the most mentions of barriers (171). This slide summarizes facilitator and barrier mentions by category. (Read the three points.)
107 Facilitator & Barrier Analysis by Phase. Facilitators and barriers can usually be identified as occurring during specific phases of the process. The next analysis separates most of the same "mentions" in terms of the phase in which they occurred. For our final breakdown today, we will look at the same group of facilitator and barrier mentions, but this time we will group them by phase of the project. Not all mentions fit in a single time phase, so the count of total mentions is somewhat less in this analysis.
108 Facilitator & Barrier Analysis: Initiation Phase. Initiation Phase: a facilitator or barrier that is anticipated or experienced PRIOR to the adoption decision. Initiation Phase facilitators = 229. Initiation Phase barriers = 91. (Read slide.)
109 Mentions during Initiation Phase. This time, the scale along the left side of the slide reaches a different maximum. Note that during the Initiation Phase there are more facilitators than barriers mentioned across all stages.
110 Early in the Implementation Phase. Early Implementation: a facilitator or barrier that is anticipated or experienced AFTER the adoption decision, but before full implementation. Early Implementation facilitators = 166. Early Implementation barriers = 122. (Read slide.)
111 Mentions during Early Implementation. The Wait & See/Never stage drops out here. Adopters and implementers still mention more facilitators than barriers during Early Implementation.
112 Implementation Phase. Implementation: a facilitator or barrier that is anticipated or experienced AFTER the agency begins to implement the EBP. Implementation facilitators = 155. Implementation barriers = 250. (Read slide.)
113Mentions during Implementation Phase As agencies become deeply involved in implementation, there are more mentions of barriers. This is to be expected, since a lot of problem solving goes on in this phase.
114 Summing Up: Facilitator/Barrier Analysis by Phase. Initiation: facilitators are mentioned more than TWICE AS FREQUENTLY as barriers. Early Implementation: adopters and implementers mention 50% more facilitators than barriers (a trend not seen in Wait & See/Never or de-adopters). Implementation: while there are more barriers than facilitators mentioned throughout the Implementation phase, de-adopters mention nearly FOUR TIMES more barriers than facilitators. Summing up the facilitator/barrier analysis by phase: (read the slide).
116 Major Messages. The adoption decision is a risky decision. Implementation effectiveness is related to, but not equal to, innovation effectiveness. Factors at many levels contribute to success. What happens early (e.g., during initiation) can have enduring effects. The present implementation climate explains present outcomes.
117 Some Implications. It is important for expectations to be realistic, but make sure benefits are as salient as the costs. Perceived risks of adopting EBPs can be lowered without painting an overly rosy picture, since unmet expectations lead to dissatisfaction. It is important to track both implementation effectiveness (e.g., fidelity) and innovation effectiveness (e.g., impact), even when evidence supports the effectiveness of a given practice. Factors at many levels contribute to success, but project-level variables seem to be particularly important to understanding current actions and reactions. The climate for implementing a particular project can change over time, so climate needs to be tracked: outcome monitoring, access to TA, top management support, rewards/reinforcements, dedicated resources, removal of obstacles (including barriers to task/job performance), and opportunity to express doubt. What happens early (e.g., during initiation) can have enduring effects, which suggests that information needs to be available and suggests clear formulation and implementation tactics. Other thoughts: barriers are discovered during actual implementation, so they need to become known, and proven solutions and strategies are needed. The nature of interventions is likely to vary based on EBP and stage; for example, if scientific evidence is not an issue for some EBPs, it is less likely to be necessary to get this information out. Also important: IORs, familiarity/prior history and homophily, connectedness, trust, and adequate CCOE resources for ensuring access to TA and for monitoring progress from an operational perspective.