Using Evaluation Results and Building a Long-Term Research Agenda


1 Using Evaluation Results and Building a Long-Term Research Agenda
Diana Epstein, Ph.D., CNCS Office of Research and Evaluation; Adrienne DiTommaso, MPA, CNCS Office of Research and Evaluation

2 Learning objectives
By the end of this presentation, you will be able to:
Recognize the importance of building a long-term research agenda
Identify the various stages in building evidence of a program’s effectiveness
Understand the key questions to consider prior to developing a long-term research agenda for your program
Understand the importance of communicating and disseminating evaluation results to stakeholders
Determine meaningful programmatic changes based on evaluation findings, and learn how to implement them

3 Workshop overview
Part 1: Building a long-term research agenda
Part 2: Evidence continuum
Part 3: Scenarios
Part 4: Using evaluation results
Part 5: Q&A

4 PART 1 Building a long-term research agenda
Facilitator notes: We begin this presentation with an overview of what a long-term research agenda is, why it is important to develop one, and the relationship between a long-term research agenda and a program’s evidence of effectiveness.

5 What is a long-term research agenda?
A long-term research agenda is a series of intentional or planned program evaluations and research tools that build towards addressing a research goal
Similar to a strategic plan, a research agenda generally spans several years
A research agenda is unique and should be tailored to each individual program
A research agenda is a dynamic tool (i.e., a living document) that should be revised/updated based on new evidence, shifts in program direction, etc.
Facilitator notes: A long-term research agenda is a series of intentional or planned program evaluations and research tools that progressively build towards answering a broad research question or research goal. As each evaluation activity is completed, the findings should be used to support and inform future evaluation activities and goals. Your program’s research agenda should be similar to its long-term strategic plan, in that it should include your planned research activities over several future years. A number of unique factors help to determine a program’s long-term research agenda; therefore, a research agenda should be tailored to the program’s goals, structure, resources, and the organization’s strategic plan. A long-term research agenda is a dynamic tool (i.e., a living document) that should be revised or updated based on new evidence, shifts in program direction, or other factors.

6 Long-term research agenda
Facilitator notes: Here you see this idea illustrated. -The vertical axis shows capacity – as a program develops more capacity, the types of evaluations it does and the research tools it uses can become more sophisticated. The horizontal axis shows time – as a program becomes more established and mature, the number and scale of the research activities it can conduct increases. -Certain activities, like performance measurement, should happen at all times and from the very beginning of a program’s lifespan. Data collection systems should also be developed and utilized from the very beginning. -Other activities develop over time, as a program builds capacity and becomes more established. Data collection systems are refined and become more sophisticated, and eventually a program begins to do program evaluation. -This picture also shows that many activities can and should occur simultaneously, no matter how well-established a program is or how much capacity it has. Performance measurement is ongoing, and program evaluation is not simply a one-time task. -All these various activities should be aligned and build upon one another in service of a program’s research goal, or goals.

7 Why is it important to have a long-term research agenda?
A research agenda sets clear goals for what program stakeholders want or need to know about the program years into the future
A research agenda defines your destination, then identifies the supporting steps that will get you there
A research agenda continues to build evidence of program effectiveness
A research agenda demonstrates strategic investment of funds in evaluation activities
Facilitator notes: There are a number of reasons why it is beneficial to develop a long-term research agenda for your program. A research agenda sets clear goals for what program stakeholders want or need to know about the program years into the future. CNCS does not expect grantees to evaluate their entire program at once, but we recommend that grantees conduct a series of evaluation activities over time. This helps commit evaluation resources more strategically by designing studies that build upon one another and by creating complementary evaluation tools and data collection systems that can be deployed multiple times. A research agenda defines your destination and then identifies the supporting steps that will get you there. It is important to first lay a foundation for evaluation activities through the completion of important early, developmental work. A long-term research agenda should be designed to continue to build evidence of the program’s effectiveness over time. Furthermore, as additional evaluations of the program are designed and planned, the findings from previous evaluations should inform these new efforts. A research agenda demonstrates strategic investment of funds. Planning evaluation activities such that they comprise a program’s long-term research agenda allows a program to maximize its resources for evaluation and provide information that will benefit its overall development.

8 Build a long-term research agenda
What does a long-term research agenda look like? What do we want to have learned 5 years from now? 10 years from now?
Work backwards: Define your destination, then name the supporting steps that will get you there
Each evaluation should build on what you learned previously
If you invest evaluation money strategically, scarce resources can have a big impact
Facilitator notes: As you begin to build your long-term research agenda, ask program staff and various stakeholders “what does a long-term research agenda look like for this organization?” -Figuring out what you want to know 5 or 10 years in the future will help you spend evaluation money more strategically by laying out studies that build upon one another, and will allow you to create complementary resources that can be used multiple times. -Each evaluation should build upon what you learned in earlier steps. -As we noted in the last slide, it is important to first lay a foundation for evaluation activities through the completion of some important early, developmental work. For example, if your program’s stakeholders ultimately want to obtain strong causal evidence for the program by implementing a randomized control trial (RCT) or a quasi-experimental design (QED), it is important to identify what supporting steps need to be taken to reach that goal. -You may need to begin with descriptive studies and start collecting routine program data. This should happen during your first grant cycle. Then you might develop and administer an annual survey, field a process evaluation, and invest in building additional data collection instruments. Depending on your program size and its evaluation requirements, this could happen in either the first or the second grant cycle. Then you might use those same data collection instruments again but also collect data from a comparison or control group as you work toward implementing an RCT or QED.
Again, depending on your program size and its evaluation requirements, this could happen in either the second or third grant cycle. -By thinking long-term and investing evaluation money strategically, your scarce resources can have a much larger impact.

9 Example of a long-term research agenda
AmeriCorps program provides housing assistance for low-income families.
Goal: Demonstrate that the program has a positive impact on beneficiaries via a randomized control trial (RCT)
Step 1: Collect program data, routinely, on family background characteristics and number of families served. [1st cycle]
Step 2: Process study: Is the program being implemented with fidelity to the model? [1st cycle]
Step 3: Collect pre/post outcome data each year via annual survey. [1st or 2nd cycle]
Step 4: In addition to data collected from Steps 1 & 2, collect long-term outcomes data via follow-up survey (1 year post-program). [2nd cycle]
Step 5: Demand for the program exceeds supply, so implement RCT by randomly assigning families to receive housing assistance. Collect background data and survey data from all eligible families. [3rd cycle]
Facilitator notes: For example, consider the long-term research agenda for a hypothetical AmeriCorps program that provides housing assistance for low-income families. The program would eventually like to demonstrate that it has a positive impact on beneficiaries by implementing a randomized control trial evaluation. -This is a very worthy goal, but the program would be wise to build up to this point by investing in a number of earlier steps first. For example, during its first grant cycle the program might start by collecting data on the families it serves on a regular basis. Next could be a process study to assess if the program is actually being implemented with fidelity to its logic model. Based on results from the process study, the program would make adjustments to the model as needed. -Then it might design an annual survey and collect pre and post outcome data each year. Then, assuming the program is being implemented well, during its second grant cycle it could start to collect longer-term outcome data from families by designing and administering a follow-up survey.
Finally, knowing that demand for the program far exceeds the available resources, the program could implement a randomized control trial (RCT) by randomly assigning families to receive housing assistance. This might happen in the second grant cycle, or possibly in the third. All the necessary data collection instruments have already been designed and tested in the earlier years, and this data is already being collected routinely from families served, so all the program really needs to do is collect the same information from the families who are not receiving housing assistance. Furthermore, the program has learned and improved at every earlier stage, so by the time they get to the RCT they have a stable, mature program model to evaluate. -By incrementally fielding studies as part of a long-term research agenda, not only has the program gradually built up its evidence base but it has put systems in place that made the final more rigorous evaluation much less costly than it otherwise would have been. -The grant cycles here are just a guide – we know that some grantees may need to spend more time on earlier steps.

10 Example: Stages in a long-term research agenda
Facilitator notes: Here is the same long-term research agenda, just presented in visual form. It shows how the program has gradually built up to its research goal over time, adding activities that build upon one another.

11 What to consider when developing a long-term research agenda
Program maturity: How long the program has been in operation and its grant cycle timing
Existing evidence base: Evidence that has already been generated on the program, which the long-term research agenda should build on
Funder requirements and other stakeholder needs: CNCS has specific evaluation requirements for its grantees, and those requirements should be embedded in a program’s long-term research agenda. Sometimes the same evaluation can meet the needs and requirements of multiple funders
Facilitator notes: The scope and depth of a program’s long-term research agenda is influenced by a number of factors. These factors include: Program maturity This refers to how long the program has been in operation and its grant cycle timing. Key research questions and needs for information will differ based on a program’s maturity. If your program is in an early stage of development (1st grant cycle), your long-term research agenda may include developing the appropriate data collection systems and identifying baseline information needed to measure program outcomes. On the other hand, a more established program (6+ years in operation, 3rd grant cycle) likely already has data collection systems in place and is measuring program outcomes in some capacity. Thus a long-term research agenda for a mature program may differ from that of a newer program. Factoring in a program’s level of maturity when developing a long-term research agenda will help ensure that the research agenda is appropriately focused and feasible for the program to carry out over time. Existing evidence base This refers to evidence that has already been generated on the program. As noted earlier, one aspect of a long-term research agenda is that it should be designed to build evidence of the program’s effectiveness over time. Doing so allows a program to maximize its resources for research and evaluation to provide information that will benefit the program’s overall development.
Building on past evidence also means that a program doesn’t waste time and resources by repeating the same studies over and over. Funder requirements and other stakeholder needs Grantees should take into account CNCS’s evaluation requirements and find ways to build those requirements into their long-term research agenda in order to maximize resources. CNCS has different evaluation requirements for large and small grantees in terms of which evaluation design they may use to assess their programs. Large grantees are those receiving annual CNCS funds of $500,000 or more. Small grantees are those receiving annual CNCS funds of less than $500,000. CNCS’s evaluation requirements also differ depending on the number of years of AmeriCorps funding a program has received. Also consider the needs and requirements of other stakeholders such as your board and community members, and note that sometimes one evaluation can meet the needs of multiple stakeholder groups.

12 What to consider when developing a long-term research agenda
Long-term program goals: A long-term research agenda should be designed to systematically provide information that supports a program’s long-term strategic goals
Long-term research goals: Programs should have long-term research goals that relate to building evidence of effectiveness over time
Evaluation budget: The amount of the program’s funding base that will be set aside for evaluation activities each year or each grant cycle
Facilitator notes: It’s important to also consider… Long-term program goals This refers to a program’s long-term strategic goals related to strengthening its program activities, addressing areas for improvement, moving the program forward to new accomplishments, and possibly expanding to new sites or adding new services. A long-term research agenda should be designed to systematically provide information that supports a program’s long-term strategic goals. Long-term research goals In addition to having long-term program goals, programs should also have long-term research goals that relate to building evidence of effectiveness over time. A long-term research agenda should be designed to help programs accumulate evidence of effectiveness in a resource-efficient manner. We will provide more detail on building evidence of effectiveness in Part 2 of this presentation. Evaluation budget The amount of the program’s funding base that will be set aside for evaluation activities each year or each grant cycle. A program’s evaluation budget should be considered in developing a long-term research agenda. Because programs may need to raise additional funds to carry out the research activities identified in a long-term research agenda, it is important to plan ahead.

13 Your AmeriCorps program
Exercise Part I: Key considerations in developing a long-term research agenda for your AmeriCorps program
Program maturity
Existing evidence
Funder requirements
Long-term program goals
Long-term research goals
Evaluation budget
-Facilitator notes: This course has one exercise that we are going to present in phases throughout the presentation. -For this first part, fill in these key considerations for developing a long-term research agenda for your program. You have a copy in your handout. Program maturity: Existing evidence: Funder requirements: Long-term program goals: Long-term research goals: Evaluation budget: -Keep this in mind as we go through the next sections of the presentation, and we’ll return to the next part later on.

14 PART 2 Evidence Continuum: Building Evidence of Effectiveness
Facilitator notes: -In Part 2 of this presentation, we will focus on building evidence of effectiveness, which is a key factor that influences a program’s long-term research agenda.

15 Building evidence of effectiveness
Evidence Based
Stage 5: Attain causal evidence of positive program outcomes
Stage 4: Obtain evidence of positive program outcomes
Evidence Informed
Stage 3: Assess program outcomes
Stage 2: Ensure effective implementation
Stage 1: Identify a strong program design
Facilitator Notes: As an agency, CNCS continues to invest in a portfolio of programs reflecting a range of evidence, from evidence-informed to evidence-based. The diagram on this slide illustrates CNCS’s approach to building evidence, which emphasizes that evidence of effectiveness is built over time and falls along a continuum. Evidence should be appropriate for a program’s life cycle and investment of public dollars. This diagram shows different stages to situate a program’s cumulative body of evidence along an evidence continuum. Having a long-term research agenda will help programs progress from stage to stage along the evidence continuum in a resource-efficient manner. Programs are not expected to follow the same linear path to building evidence along the continuum. Some programs may move from output performance measurement activities (stage 2) to having causal evidence of effectiveness from an RCT/QED (stage 5). Others may move from collecting output performance measurement data (stage 2) to having pre/post outcome data (stage 3). Programs may also go back and forth between stages as they refine and adjust their program models over time. The key stages that a program goes through as it builds its evidence base are shown in the diagram. Stage 1: Identify a strong program design Stage 2: Ensure effective implementation Stage 3: Assess program outcomes Stage 4: Obtain evidence of positive program outcomes Stage 5: Attain causal evidence of positive program outcomes -It’s important to note that these stages are not mutually exclusive and they do overlap, so consider the continuum to be a guide. Next, each of these stages will be discussed in more detail.

16 Evidence continuum

17 Exercise Part II: Building evidence of effectiveness for your AmeriCorps program
Evidence Based
Stage 5: Attain strong evidence of positive program outcomes
Stage 4: Obtain evidence of positive program outcomes
Evidence Informed
Stage 3: Assess program outcomes
Stage 2: Ensure effective implementation
Stage 1: Identify a strong program design
-Facilitator notes: Now we’ll return to the exercise. For this second part, identify where your program currently is on the evidence continuum and then consider where you want to go. Where would you place the stars for your program? -Keep this in mind as we go through the next section of the presentation, and we’ll return to the final part of the exercise later on.

18 PART 3 Scenarios Facilitator notes:
-For this next section of the presentation, we present two example scenarios of what a long-term research agenda might look like for different types of funded AmeriCorps programs. The first scenario is of a small, new AmeriCorps program, and the second scenario is of a large, established AmeriCorps program. Your program may fit with one of these better than the other, but there are lessons from both that will likely apply to your program and its particular stage of evidence-building.

19 Scenario 1: Building a long-term research agenda for a small, new program
Evidence Based
Stage 5: Attain strong evidence of positive program outcomes
Evidence Informed
Stage 4: Obtain evidence of positive program outcomes
Stage 3: Assess program outcomes
Stage 2: Ensure effective implementation
Stage 1: Identify a strong program design
Facilitator notes: -Example scenario 1 is based on a small grantee (annual CNCS funds < $500k) implementing a new AmeriCorps program. This slide shows that for a small, new AmeriCorps program, its evidence of effectiveness is initially at stage 1, but the grantee intends to develop a long-term research agenda that will move it to the third stage on the evidence continuum.

20 Scenario 1: Logic model for a small, new, homelessness prevention program
Inputs (what we invest): Funding; 4 FT staff; 30 AmeriCorps members; Training
Activities (what we do): Provide case management housing relocation and stabilization services; Provide educational workshops
Outputs (direct products from program activities): 50 families (head of households) received case management services; 50 families (head of households) attended workshops
Short-term outcomes (changes in knowledge, skills, attitudes, opinions): Increase head of households’ knowledge of responsible homeowner or tenant practices/skills; Increase head of households’ knowledge of resources/services in the community
Medium-term outcomes (changes in behavior or action that result from participants’ new knowledge): Increase head of households’ adoption of responsible practices/skills; Decrease likelihood of foreclosures and evictions
Long-term outcomes (meaningful changes, often in condition or status in life): Reduce first-time homelessness in the community
Facilitator notes: On this slide we present an example of what a logic model might look like for a small, new AmeriCorps program that focuses on providing services aimed at preventing first-time homelessness in a mid-sized urban community. The program has two major service activities. Case management services are provided to families that are most at risk of a housing crisis. Members assist families by developing and implementing individualized plans to prevent a future housing crisis. Educational workshops are provided to families that are identified as being at moderate risk of a housing crisis. Members facilitate the workshops, which are designed to help families maintain housing, increase or stabilize income, and make connections with services and supports to prevent the loss of housing in the future. Based on the logic model, the program has the following inputs and resources: funding, full-time program staff, AmeriCorps members, and training, which it uses to provide case management services and educational workshops.
By carrying out these activities, the program expects to produce the following outputs each year: 50 families (highest risk) receive case management services and 50 families (moderate risk) attend workshops. The program is expected to achieve the following outcomes: Short-term - Increase head of households’ knowledge of responsible home owner or tenant practices/skills - Increase head of households’ knowledge of available resources/services in the community Medium-term Increase head of households’ adoption of responsible homeowner and tenant practices/skills Decrease likelihood of foreclosures and evictions Long-term Reduce first-time homelessness in the community

21 Small, new, homelessness prevention program
Scenario 1: Key considerations in developing a long-term research agenda
Program maturity: AmeriCorps grantee with no prior years of program implementation and in its first grant cycle. Operating in only one community site.
Existing evidence: The program’s evidence falls in the first stage on the continuum, as it has conducted a needs assessment to determine which program activities are most critical to the community it serves and a literature review to determine best practices for implementing core service activities. No evaluations have been conducted on the program.
Funder requirements: Small grantees must conduct an internal or an external program evaluation by the end of the second grant cycle. Small grantees are required to submit an evaluation report AND an evaluation plan with their recompete application after completing two or more three-year cycles.
Long-term program goals: Achieve full program operation with efficiency and fidelity to the program’s central model. Realize all expected program outcomes.
Long-term research goals: Generate data to facilitate program improvements and ensure an efficient, full operation of the program’s service activities. Generate data on the program’s short- and medium-term outcomes (see logic model).
Evaluation budget: 10-15% of the program’s annual funding has been set aside for evaluation activities.
Facilitator notes: Some key considerations in developing a long-term research agenda for this small, new AmeriCorps program that provides homelessness prevention services include: Program maturity: The program is new, a first-time AmeriCorps grantee in its first grant cycle with no prior years of program implementation. In its first year of operation, the program is only operating in one community site. Existing evidence: As a new program, the program’s evidence falls in the first stage on the continuum. The program is considered evidence-informed based on conducting a needs assessment to determine which program activities are most critical to the community it serves and a literature review to determine best practices for implementing its core service activities. Funder requirements: For small grantees, CNCS requires an internal or an external program evaluation to be completed by the end of the second three-year grant cycle. Grantees are required to submit an evaluation report AND an evaluation plan with their re-compete application after completing two or more three-year cycles. Long-term program goals: Within a six-year time frame, the program intends to achieve full program operation with efficiency and fidelity to the program’s central model. It intends to realize all expected short- and medium-term program outcomes. Long-term research goals: The program’s long-term research goals are aligned with its programmatic goals and involve generating data to facilitate program improvements and ensure an efficient, full operation of the program’s service activities. Additionally, the goal is to generate data on the program’s targeted short- and medium-term outcomes, specifically assessing whether changes occur between beneficiaries’ program entry and exit. The outcomes that will be measured are the following: beneficiaries’ knowledge of responsible homeowner or tenant practices/skills, knowledge of resources/services in the community, adoption of responsible practices/skills, and likelihood of foreclosure or eviction. Evaluation budget: The program has a small evaluation budget and has set aside between 10-15% of its annual funding for evaluation activities.

22 Evaluation activities
Scenario 1: Long-term research agenda for a small, new, homelessness prevention program
Evaluation activities, with stage of evidence and grant cycle:
1. Develop a logic model and a detailed program implementation plan. [Stage 1: Identify strong program design; cycle: Pre-1]
2. Create a data system to routinely collect performance measurement data and background data on program beneficiaries and AmeriCorps members. Program staff and members begin routine data collection activities. [Stage 2: Ensure effective implementation; cycle: Pre-1 and 1]
3. Develop a survey to collect short-term outcome data, focusing on beneficiaries’ knowledge of responsible homeowner/tenant practices and knowledge of resources and services in the community. Members administer pre/post surveys to program beneficiaries and analyze data. [Stage 3: Assess program outcomes; cycle: 1 and 2]
4. Conduct an internal process evaluation to determine if the program is being implemented with fidelity to the central model. Make data-driven adjustments to the program’s implementation as needed. [Stage 2: Ensure effective implementation; cycle: 1 or 2]
5. Conduct a non-experimental outcome evaluation using an external evaluator, measuring both short-term and medium-term outcomes. [Stage 3: Assess program outcomes; cycle: 2]
Facilitator notes: On this slide we present an example of a long-term research agenda for the small, new AmeriCorps grantee that takes into account the program’s logic model and other key characteristics. The grantee’s long-term research agenda outlines a set of evaluation activities that are designed to move the program from stage 1 (Identify a strong program design) to stage 3 (Assess program outcomes) on the evidence continuum. Over a six-year time frame, the activities include: Conduct a literature review to determine best practices for implementing the program’s core service activities – case management services and educational workshops to help at-risk households develop and implement plans to prevent a future housing crisis.
Develop a logic model and a detailed program implementation plan that lays out what and how each of the program’s core service activities are to be implemented, incorporating best practices identified in the literature. This first set of activities aligns with stage 1 on the evidence continuum and should have been accomplished before the program applied for funding. Create a data system to routinely collect performance measurement data and background data on program beneficiaries, program staff, and AmeriCorps members. Collecting background data on all individuals who are involved in the program helps lay the groundwork for a process evaluation. Once the core elements of the data system have been developed, program staff and members will begin to support routine data collection activities. This second set of activities aligns with stage 2 on the evidence continuum. Data collection systems should be in place before the program begins its first year of operation. Once the program is collecting performance measurement data routinely and accurately, it would develop a survey to measure the program’s short-term outcomes such as beneficiaries’ knowledge of responsible homeowner or tenant practices and knowledge of resources and services in the community. After the survey has been developed and pilot-tested with a small sample, members will begin to administer the survey to program beneficiaries at program entry and program exit. This would start in the first grant cycle and would continue into the second grant cycle. This third set of activities aligns with stage 3 on the evidence continuum. Next, for the program’s first evaluation, it could conduct an internal process evaluation to determine if the program is being implemented with fidelity to the central model as depicted by the logic model and program implementation plan. Examples of questions that a process evaluation might address include: - Who is the program serving? Is the program reaching its target population? 
Are members implementing the case management service activity according to the program’s central model? Are members using the workshop curriculum according to the central model? What challenges are members facing in serving the program’s beneficiaries? The program would then make data-driven adjustments to the program’s implementation as needed. This fourth set of activities aligns with stage 2 on the evidence continuum and could happen during the first grant cycle, or possibly the second. Next, conduct a non-experimental outcome evaluation using an external evaluator, measuring both short-term and medium-term outcomes. Again, the outcomes that will be measured are the following: beneficiaries’ knowledge of responsible homeowner or tenant practices/skills, knowledge of resources/services in the community, adoption of responsible practices/skills, and likelihood of foreclosure or eviction. This fifth set of activities aligns with stage 3 on the evidence continuum, and could happen during the program’s second grant cycle. In sum, this example long-term research agenda for a small, new AmeriCorps grantee covers stages 1, 2, and 3 on the evidence continuum with each of the evaluation activities designed to build evidence of program effectiveness.

23 Scenario 2: Building a long-term research agenda for a large, established AmeriCorps program
Evidence Based:
Stage 5: Attain strong evidence of positive program outcomes
Evidence Informed:
Stage 4: Obtain evidence of positive program outcomes
Stage 3: Assess program outcomes
Stage 2: Ensure effective implementation
Stage 1: Identify a strong program design
Facilitator Notes: Example scenario 2 is based on a large grantee (annual CNCS funds > $500k) in its second three-year AmeriCorps grant cycle. This slide shows that for a large, relatively established AmeriCorps program, the program's evidence falls at stage 3 on the evidence continuum, but the grantee intends to develop a long-term research agenda that will move them to the fifth stage on the evidence continuum. At the same time, the grantee's research agenda includes evaluation activities to continue to generate evidence that falls under stages 2 and 3 on the continuum.

24 Scenario 2: Example logic model for large, established, environmental restoration program
INPUTS (what we invest): Funding; Staff; 200 AmeriCorps State and National members; 200 non-AmeriCorps volunteers; Research
ACTIVITIES (what we do): Conduct forest enhancement and restoration; Complete up-keep activities to enable native plants to survive
OUTPUTS (direct products from program activities): Install 100,000 native trees and shrubs on public land; Remove 50% of invasive plant species on 10 forest sites
SHORT-TERM OUTCOMES (changes in knowledge, skills, attitudes, opinions): Increase diversity and coverage of native plant species; Reduce presence of invasive plant species
MEDIUM-TERM OUTCOMES (changes in behavior or action that result from participants' new knowledge): Improve habitat spaces for wildlife; Increase survival rate of native plant species and wildlife
LONG-TERM OUTCOMES (meaningful changes, often in condition or status in life): Maintain conservation of healthy, productive, sustainable ecosystems
Facilitator notes: On this slide we present an example of what a logic model might look like for a large, relatively established AmeriCorps program that recruits members to protect and restore natural habitats. Improving habitat for state and federally listed species is a primary goal of the program, and member projects focus on forest enhancement and restoration. The program has the following inputs and resources: funding, full-time program staff, AmeriCorps members, and non-AmeriCorps volunteers. Its activities include enhancement and restoration projects on forest and wetland sites and upkeep activities to enable native plants to survive. By carrying out these activities, the program expects to produce the following outputs in a three-year period: install 100,000 native trees and shrubs on public land and remove 50% of invasive plant species on 10 forest sites (25 acres in size). 
The program is expected to achieve the following outcomes: Short-term Increase diversity and coverage of native plant species Reduce presence of invasive plant species Medium-term Improve habitat spaces for native plants and wildlife Increase survival rate of native plants and wildlife Long-term Maintain conservation of healthy, productive, sustainable ecosystems

25 Large, established environmental restoration program
Scenario 2: Key considerations in developing a long-term research agenda (large, established environmental restoration program)
Program maturity: AmeriCorps grantee in its second three-year grant cycle. Already operating in multiple sites and expects to add additional service sites.
Existing evidence: Established data collection processes to collect performance measurement output and outcome data. Conducted an internal process evaluation yielding evidence that the program is being implemented with fidelity in most service sites.
Funder requirements: Large grantees must conduct an external impact evaluation by the end of the second grant cycle. Large grantees are required to submit an impact evaluation report AND an evaluation plan for a future evaluation with their re-compete application after completing two or more three-year cycles.
Long-term program goals: Achieve and maintain fidelity of program implementation across all existing sites and any new service sites. Build stronger evidence of effectiveness to support future requests for higher levels of funding to expand program operations.
Long-term research goals: Conduct an external impact evaluation to assess the program's short- and medium-term outcomes. Four to six years is the minimum amount of time for program outcomes to be realized; for this reason, the grantee will submit a request for an alternative evaluation approach for timing considerations.
Evaluation budget: 15% of the grantee's annual funding has been set aside for evaluation activities. The grantee is seeking additional outside funding for the impact evaluation.
Facilitator notes: Some key considerations in developing a long-term research agenda for a large, established AmeriCorps program that focuses on environmental restoration services include: Program maturity: The program has completed its first three-year AmeriCorps grant cycle, and has just received a second three-year cycle of funding. 
Members conduct environmental restoration projects in multiple sites and 2 new sites are to be added in the coming years. Existing evidence: During its first grant cycle, the grantee developed a program-wide database and established routine data collection processes for collecting performance measurement output and outcome data across all service sites. This program also conducted a process evaluation in its first grant cycle; while this is good practice, note that it is technically more than CNCS requires for the first grant cycle. Funder requirements: For large grantees, CNCS requires an external impact evaluation to be completed by the end of the second three-year grant cycle. Grantees are required to submit an evaluation report AND an evaluation plan with their re-compete application after completing two or more three-year cycles. Long-term program goals: The program’s long-term goals include maintaining fidelity of program implementation across existing program sites as well as new sites that are added during the second grant cycle. Additionally, the program intends to build up its evidence of effectiveness as part of a larger strategy to secure additional funding to expand program operations in the future. Long-term research goals: The program’s long-term research goals consist of conducting an impact evaluation that spans a six-year time frame to assess the program’s short- and medium-term outcomes, specifically whether there’s an increase in the diversity and coverage of native plant species, a reduced presence of invasive plant species, improved habitat spaces for wildlife, and an increase in survival rate of native plant species and wildlife. As an environmental restoration program, four to six years is the minimum amount of time needed for the program’s short- and medium-term outcomes to be fully realized. 
For this reason, the grantee intends to submit a request for an alternative evaluation approach for timing considerations, specifically requesting approval for a three-year extension to complete an external impact evaluation report. If their request is approved, the grantee will submit an “interim” evaluation report with their re-compete application after completing a second three-year grant cycle. More information on CNCS’s “Approval of Alternative Evaluation Approaches for ACSN Grantees” can be found on the Evaluation Resources page. Evaluation budget: The program has set aside 15% of its annual funding for evaluation activities and is seeking additional outside funding to cover the last three years of the impact evaluation study.

26 Evaluation activities
Scenario 2: Long-term research agenda for large, established environmental restoration program
1. Conduct a quasi-experimental design (QED) study using an external evaluator, measuring all short- and medium-term outcomes over a six-year time frame and relative to a matched comparison group of sites (i.e., adjacent non-serviced areas that are similar to the pre-restoration conditions at the treatment sites). Stage of evidence: 5 (Attain strong evidence of positive program outcomes). Grant cycles: 2+3.
2. Continue to collect and analyze output and outcome performance measurement data on an annual basis. Stage of evidence: 3 (Assess program outcomes). Grant cycles: 2, 3, 4, etc.
3. Conduct an internal process evaluation focusing on new service sites to determine if the program's new restoration projects are being implemented with fidelity to the central model. Make data-driven adjustments to the program's implementation as needed. Stage of evidence: 2 (Ensure effective implementation). Grant cycle: 2.
Facilitator notes: On this slide we present an example of a long-term research agenda for a large, established AmeriCorps grantee that takes into account the program's logic model and other key characteristics. The grantee's long-term research agenda outlines a set of evaluation activities that are designed to move the program from stage 3 (Assess program outcomes) to stage 5 (Attain strong evidence of positive program outcomes) on the evidence continuum while continuing to generate evidence that supports stages 2 and 3. The activities include: Continue to collect and analyze output and outcome performance measurement data on an annual basis. This set of activities aligns with stage 3 on the evidence continuum, and will occur in the second grant cycle and beyond. 
Simultaneous to those ongoing activities, conduct a quasi-experimental design (QED) study using an external evaluator, measuring all short- and medium-term outcomes over a six-year time frame and relative to a matched comparison group of sites (i.e., adjacent non-serviced areas that are similar to the pre-restoration conditions at the treatment sites). Data collection for the QED study may include: A pre-restoration site condition survey at both treatment and comparison sites to document the dominant invasive plants at each site, existing native vegetation conditions and coverage, and existing habitat conditions; A post-restoration site condition survey administered at multiple time points over a six-year period at both treatment and comparison sites to record native and invasive plant cover estimates, native species richness, plant health and vigor, and habitat conditions. This set of activities aligns with stage 5 on the evidence continuum, and will start in the second grant cycle and continue into the third grant cycle. Conduct an internal process evaluation focused on new service sites to determine if the new restoration projects are being implemented with fidelity to the central model. Make data-driven adjustments to the program's implementation as needed. This set of activities aligns with stage 2 on the evidence continuum, and will occur in the second grant cycle. In sum, this example long-term research agenda for a large, relatively established AmeriCorps grantee shows the grantee moving from stage 3 to stage 5 on the evidence continuum while also carrying out evaluation activities that fall under stages 2 and 3 to assess program implementation across new service sites and generate performance measurement data.
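To make the comparison-group logic concrete, the core calculation behind such a QED design can be sketched in a few lines of code. This is an illustrative sketch only: the coverage figures below are hypothetical, and a real study would rest on the external evaluator's full statistical analysis of the pre- and post-restoration site condition surveys.

```python
# Illustrative sketch only: estimating a restoration effect by comparing the
# change at treatment (restored) sites with the change at matched comparison
# sites over the same six-year period. All values are hypothetical.

def mean(values):
    return sum(values) / len(values)

def diff_in_diff(treat_pre, treat_post, comp_pre, comp_post):
    """Change at treatment sites minus change at comparison sites."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(comp_post) - mean(comp_pre))

# Hypothetical percent native plant coverage from site condition surveys
treatment_pre   = [22.0, 18.5, 25.0]  # restored sites, pre-restoration survey
treatment_post  = [41.0, 37.5, 44.0]  # same sites, six years post-restoration
comparison_pre  = [21.0, 19.0, 24.5]  # matched adjacent non-serviced areas
comparison_post = [24.0, 21.5, 26.0]  # same comparison sites, six years later

effect = diff_in_diff(treatment_pre, treatment_post, comparison_pre, comparison_post)
print(f"Estimated effect on native coverage: {effect:.1f} percentage points")
```

Subtracting the comparison sites' change nets out trends (weather, regional regrowth) that would have occurred even without the program, which is precisely why the matched comparison group strengthens the evidence.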

27 Evaluation activities
Exercise Part III: Long-term research agenda for your AmeriCorps program Evaluation activities Stage of evidence Grant cycle 1 2 3 4 Facilitator notes: -Remember, the goal of the exercise is to begin to build a long-term research agenda for your AmeriCorps program. -First, we asked you to fill in these key considerations for developing a long-term research agenda for your program: -Next we asked you to identify where your program currently is on the evidence continuum and then consider where you want to go. -Now for this last part of the exercise, fill in the evaluation activities that match up with each stage of evidence that you will be building as part of your long-term research agenda.

28 Important points to remember
A long-term research agenda is a developmental approach to evaluation whereby evidence of effectiveness is built over time A long-term research agenda is unique and should be tailored to fit each individual program There is value to building evidence at all stages along the continuum A long-term research agenda should reflect an iterative process where evidence is built gradually over time Facilitator notes: There are a few points that are important to remember when developing a long-term research agenda: A long-term research agenda is a developmental approach to evaluation whereby evidence of effectiveness is built over time, including your planned research activities over several future years. A number of unique factors help determine a program’s long-term research agenda, so it should be tailored to your program’s goals, structure, and available resources. There is value to building evidence at all stages along the continuum. While it may be more of a challenge for some programs to obtain evidence at the highest stage on the evidence continuum due to the structure of their program model, their program’s life cycle, and/or resource limitations, CNCS’s expectation is that programs will use the most rigorous evaluation approaches possible as they continue to build their evidence base. A long-term research agenda should reflect an iterative process where evidence is gradually built up over time.

29 Key points to consider when developing a long-term research agenda
Facilitator notes: Consequently, there are several key factors to consider prior to developing your program's long-term research agenda: program maturity, your program's existing evidence base, its funder requirements, its long-term program and research goals, and your program's evaluation budget.

30 PART 4 Using Evaluation Results
Facilitator Notes: You've just learned about why and how a long-term research agenda can help you build evidence of program effectiveness over time. A key assumption throughout our presentation so far has been that you will use the information you glean from each evaluation to learn about and improve your program. Learning about and improving your program requires data, and those data are readily available through program evaluation. Evaluating your program, and then using the results to make meaningful programmatic improvements, helps your organization to serve more people and to serve them better. It also allows you to remain competitive in a scarce funding environment, justify increases in scale, scope, and reach, and demonstrate accountability to your stakeholders. Continuous assessment and regular evaluation work will enable you to become a learning organization, and will potentially produce a sizable evidence base that demonstrates your program's efficacy.

31 Finish the evaluation process by using results for improvement
Planning Implementation Analysis and Reporting Action and Improvement Facilitator notes: Before we proceed, it's helpful to think back to the evaluation cycle, which can be broken down into four steps: planning, implementation, analysis and reporting, and action and improvement. We're concerned with this last step, which takes place after you've already finished your analysis and reporting. You should begin to think about how your organization will use the learnings from an evaluation as you are wrapping up the reporting process. It is a convenient time to do this because you will have all of the information you need for review and decision-making in one central, easily accessible location (such as the evaluation report). As we mentioned at the beginning of the presentation, purposefully examining results and creating an action plan to implement improvements based on those insights is the critical step that enables continuous improvement and that makes a program a learning organization. Using evaluation results for action and improvement means that a program is being accountable to its theory of change, is examining decision-making on policies and program components, and is drawing lessons for program improvement.

32 Using evaluation results for action and improvement
You’ve completed your evaluation report, but what do the results mean in practice? How do these results translate into actions? Take your findings and make them actionable! Identify program components that are working well Identify program components that need to be improved Develop and implement an action plan for improvement Facilitator notes: To put this into practice, it’s helpful to divide the process into two stages: first, you will identify the program components that need to be improved or that are working well based on your evaluation results, and second, you will develop and implement a plan for enacting appropriate programmatic changes. Again, keep in mind that this is all taking place after the evaluation report is finalized. Even if you are conducting the evaluation internally, it is important not to reflect on the meaning of results or begin brainstorming changes within the final evaluation report. While it’s fine to suggest areas for future research or state new questions that have arisen during the completion of the evaluation, remember that making conjectures about findings within the evaluation report can diminish objectivity and credibility. In the next few slides, we’ll suggest some ways to go about documenting and tracking these ideas and associated improvements.

33 Identify program components to be improved
Pair results to the relevant research question: Did anything surprise you? Any interesting or confusing patterns and trends? Revisit logic model and theory of change Conduct additional analyses of the data if necessary Decide whether or not enough evidence exists to justify a program improvement Suggest possible improvements, actions, or changes Facilitator notes: As we just mentioned, in stage one, you will identify the program components that need to be improved based on your evaluation results. So how do you find program components that need to be improved? A good first step is to look back to your research questions. At the beginning of the evaluation process, research questions were created to guide the evaluation and ensure that it produced the information you needed about your program. You can try pairing each research question to relevant results from your analysis. These results are the “answers” to your research questions. Consider what these answers are saying: given what you’ve recorded in your theory of change and logic model, are the results expected? Or is there something surprising or unusual you are finding that deviates from the logic model? Did the evaluation identify interesting patterns or trends? If the result you are seeing is not in line with what is expected from your logic model and theory of change, you should flag that component for possible improvement. This includes unexpected positive results, as they could indicate something that should be expanded, scaled, or further developed. We will present a detailed example in a few slides to make this process even clearer. Throughout this process, keep in mind how the evaluation was implemented. If there were challenges or flaws that mitigate the finding, the specific results suggesting change may be due to how the evaluation was implemented rather than how the program is actually performing. 
In any case, if multiple sources support or corroborate a flagged finding, you will have strong evidence that a change needs to be made. Regardless, do bear in mind that one high-quality evaluation finding can be all that is needed to support or acknowledge a programmatic improvement. Once you've identified the program components that need to be improved, you will need to articulate how, specifically, they will be improved. This could involve a change to the program design or implementation, to the services that are delivered, to the internal staffing, or to partnering, for example. This is an appropriate time to engage relevant stakeholders in discussing the results and obtaining input into the improvement process. Options for convening stakeholders include conference calls, in-person meetings, or online presentations. Keep in mind that you may need to create specific derivative products from the final evaluation report to facilitate these discussions. Memos or PowerPoint presentations are often helpful for focusing conversation and documenting proceedings. When brainstorming possible improvements, actions, or changes, it is important to set discussion parameters to ensure that realistic, actionable solutions are generated. Any change that could reasonably be implemented should be actionable, specific, and able to be accomplished within a realistic timeframe. Relatedly, make sure that the scale and scope of the change suggested matches that of the evidence generated by the evaluation findings. Again, we'll go over an example in a few slides to make this clearer.

34 Developing and implementing an action plan for program improvement
Develop an action plan for implementing change Changes may include: The program design; how a program is implemented; how services are delivered; the staff, etc. Specify the logistics Who will carry out these improvements? By when will they take place, and for how long? What resources (i.e., money, staff) are needed to carry out these changes? Who can be an advocate or partner in change? Facilitator notes: Once you have identified the parts of your program you want to improve, you can ensure follow-through by developing an action plan. While this doesn't have to be a formal document, it helps to have a record of the decisions made. This plan should include an "owner" or "owners" who will carry out the changes. You should set milestones and identify key dates by which certain actions must take place or be complete. It also helps to identify the types and amount of resources, such as money or staff time, you will need to carry out these activities. Once this plan is developed, you can invite stakeholders to review it. This will help hold your organization accountable and demonstrates transparency to current and potential funders. Along with creating a feeling of stakeholder ownership in the program, it can help you recruit partners and additional resources needed to successfully implement your plan for change.

35 Thinking about the future
Evaluations pay dividends long after they are completed. An evaluation will: Build your program’s evidence base Contribute to a long term research agenda Facilitate continuous improvement and develop as a learning organization Facilitator notes: Developing and implementing your action plan for improvement demonstrates accountability and builds stakeholder trust in your organization. It also completes the evaluation process, bringing a close to the particular evaluation project. But, there is much more to be gained from your evaluation findings. An evaluation can pay dividends long into the future in a few key ways. First, every time your program conducts and completes an evaluation, information about program effectiveness compounds, providing a robust picture of how and why your program is or is not achieving its goals as detailed in your theory of change. This accumulation of programmatic knowledge gathered systematically from evaluation is known as an evidence base. This evidence base is valuable to your organization as it strives to improve services for constituents, to similar organizations as they improve their own program models, and to funders and policymakers as they attempt to direct scarce resources to the most effective organizations. Second, evaluations can be used strategically as part of a long term research agenda. A long term research agenda is simply a plan for a series of evaluations, completed over a longer time horizon, that build progressively towards an evaluation goal or broad research question about a program’s effectiveness. It requires moving beyond thinking of evaluations as one-time projects that exist in isolation. Instead, evaluations produce information that inform subsequent evaluations, or increase the organization’s capability to conduct a more rigorous evaluation. This change in thinking can be difficult, and often requires an intentional culture change within organizations. 
This effort is worthwhile, though, as a robust long term research agenda could save your organization a significant amount of time and resources, and can provide the right types of information at the right times in the life cycle of the program. Finally, in the long term, an evaluation also contributes critical information for continuous improvement. Reviewing the information provided by evaluation can help an organization set goals, measure progress towards meeting those goals, and make decisions based on the data gathered. It also naturally raises additional questions about program performance, which feeds back into the cycle of evaluation as newly proposed research questions. This feedback cycle makes possible the continuous program improvement that is characteristic of a learning organization.

36 Facilitated Example: Using results of an impact evaluation
Positive RCT findings: clients improve compared to comparison group Consult document review, performance measures data Compare results to logic model, theory of change to determine improvements Plan for and enact improvements Facilitator notes: Let's review what we've covered in the last few slides with an example. We're going to use a fictional financial education program, and use some hypothetical findings from an impact evaluation they conducted to identify areas for change and develop an action plan for improvement. For the purposes of our example, we're going to suppose that the program conducted a randomized controlled trial supplemented with a review of program documents. Our hypothetical financial education program is an AmeriCorps State and National program that engages members in delivering a financial education curriculum to eligible clients. The program is a 10-site program in the greater Cincinnati region that places members in credit unions and local financial institutions to provide financial counseling to low-income individuals, retirees, and young people. Members also assist with financial seminars and informational fairs, and recruit experienced financial professionals to serve as volunteers in a credit counseling program. Its major funders are AmeriCorps and a small foundation, and it partners with local banks and a local social services agency. This program is subject to the AmeriCorps evaluation requirements for grantees receiving over $500,000 in grant funds per year, which means it needs to complete an externally conducted impact evaluation. The evaluation must cover at least one program year, and needs to have findings ready to report upon recompetition. The evaluation will focus only on the program's financial counseling component, and will explore the following research questions: First, do low-income clients exit the program with increased knowledge of personal finance concepts relevant to their needs? 
Do they know how to use those concepts to address their financial challenges? And second, are low‐income clients that participate in member‐led financial counseling through the program able to better manage their personal finances than low‐income individuals who did not participate in the program’s counseling?

37 Facilitated Example: Using results of an impact evaluation
Positive RCT findings: clients improve compared to comparison group Consult document review, performance measures data Compare results to logic model, theory of change to determine improvements Plan for and enact improvements Positive, significant difference between treatment and control group Clients are making progress in solving a financial problem 6 mos. post-program Facilitator Notes: Let's suppose that the program ran their RCT and just received their final evaluation report from their external evaluator. In the findings section of the report, it is reported that there was a positive, significant difference between the treatment and control groups with regard to management of personal finances six months after the end of their participation in the program. The program is pleased, since according to their logic model and theory of change, they expect that a medium-term outcome of the program is that clients will apply the knowledge gained through the counseling program to make significant progress towards solving a financial challenge. Since the findings show that clients are in fact making progress towards solving a financial challenge, as compared to those not in the program, the answer to the research question posed at the beginning of the evaluation supports the intervention as described in the logic model and theory of change.
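As a purely illustrative aside, the treatment-versus-control comparison behind a finding like this can be sketched as follows. The client scores and the score scale are hypothetical, and the program's external evaluator would of course use proper statistical software and report exact p-values; this simply shows the mechanics of testing a group difference.

```python
# Illustrative sketch only: comparing a hypothetical 0-100 "financial
# management" score for treatment (counseled) vs. control clients,
# measured six months post-program. All scores are made up.
from math import sqrt
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for the difference in group means."""
    se = sqrt(variance(group_a) / len(group_a) + variance(group_b) / len(group_b))
    return (mean(group_a) - mean(group_b)) / se

treatment = [72, 68, 75, 80, 66, 74, 71, 78]  # counseled clients
control   = [61, 58, 65, 60, 57, 63, 59, 64]  # clients not in the program

t = welch_t(treatment, control)
print(f"Welch's t = {t:.2f}")  # values well above ~2 suggest a significant difference
```

A large positive t statistic is what underlies a claim of a "positive, significant difference"; the evaluator would pair it with a degrees-of-freedom calculation to report the exact p-value.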

38 Facilitated Example: Using results of an impact evaluation
Positive RCT findings: clients improve compared to comparison group Consult document review, performance measures data Compare results to logic model, theory of change to determine improvements Plan for and enact improvements Member activity logs show high level of consistency in type, duration, and quantity of service being provided by members Findings triangulated with performance measures data Facilitator Notes: This finding is definitely something the program wants to investigate further, because it could have implications for additional programming they could provide in the future. Specifically, the positive results indicating an effective intervention at the current sites helps make a case for increased program scale and reach. They look back to the evaluation report to see if the evaluator noted any important findings from the document review portion of the evaluation which could speak to what could be driving the positive result. The evaluator has reviewed the program’s member activity logs, and has noted that there is a high level of consistency amongst the type, duration, and quantity of service being provided by members. Since the program also has performance measures data available on these member activities, they consult the data to triangulate, or corroborate, that finding. The performance measures data reveals that members are completing the targeted number of sessions with each client, and are spending an appropriate amount of time with each client, as is defined in the program’s curriculum.

39 Facilitated Example: Using results of an impact evaluation
Positive RCT findings: clients improve compared to comparison group Consult document review, performance measures data Compare results to logic model, theory of change to determine improvements Plan for and enact improvements Facilitator Notes: With this contextualizing information, the program re-examines their theory of change and logic model. Because there is evidence that outcomes are being achieved, and that the curriculum is being implemented as designed, the program feels confident that the intervention is working as specified. In fact, it seems to be working well enough and with enough consistency across sites that they could expand the program to other sites with characteristics similar to those in the study. Even though the study revealed evidence of an effective intervention, there are still some positive improvements that can be made. First, the program can use the results to emphasize to sites the importance of following the curriculum with fidelity, and the importance of ensuring that members maintain a certain level of consistency in service provision across clients. The results also support the program’s efforts to maintain rigorous data collection and performance measurement systems, so program management will want to continue to invest in these capabilities. And finally, program management now has solid evidence to make a case for expanding the program and for obtaining additional staff and funds to do so. Findings align with program as described in logic model and theory of change

40 Facilitated Example: Using results of an impact evaluation
Positive RCT findings: clients improve compared to comparison group
Consult document review, performance measures data
Compare results to logic model, theory of change to determine improvements
Plan for and enact improvements
All-site training day
Enhance dissemination plan
Think about next evaluation: fidelity study
Facilitator Notes: Implementing these suggested improvements could be a lot of work, so again, it’s important that the program put together an action plan for improvement. Remember, the plan would need to specify who will carry out the improvements; by when they will take place, and for how long; what resources (i.e., money, staff) are needed to carry out these changes; and who can be an advocate or partner in change. So, if the program decides it wants to emphasize to sites the importance of ensuring consistency of service provision across clients, they may decide to hold an all-site training day for members and staff. In this example, that part of the plan for improvement would note that program management would be in charge of arranging and managing the meeting and developing meeting materials; a convenient date would be set aside for everyone to come together; money for meeting space and materials, plus a sufficient number of staff, would be allocated; and site staff would be recruited as partners in the training. Given the implications of the evaluation findings, the program may want to enhance their dissemination plan presented earlier in the webinar. A good addition might be a presentation at the all-site training day covering the evaluation results and implications, or wider distribution of the executive summary to members, potential new site partners, and additional funders and individual donors. This is also the time for the program to think about their next evaluation and consider what new questions were raised as a result of this current evaluation. With their desire to expand services to other sites, an appropriate follow-up evaluation could be an implementation study measuring fidelity to the program model at the new sites. The program will want to think strategically about future use of evaluation resources, and would benefit from compiling a long-term research agenda to organize their future evaluation work around an ultimate evaluation goal.

41 Facilitated Example: Using results of an impact evaluation
Positive RCT findings: clients improve compared to comparison group
Consult document review, performance measures data
Compare results to logic model, theory of change to determine improvements
Plan for and enact improvements
Continuous improvement
Increase evidence base
Serve more people, better
Facilitator Notes: At this point, you can see how our hypothetical financial education program has used its evaluation results to make improvements to the program and to move towards serving more people, better. Additionally, the study has enhanced the program’s evidence base, increased accountability to stakeholders, improved program operations, and contributed important lessons for other programs operating similar interventions.

42 Resources on evaluation
Go to the National Service Knowledge Network evaluation page for more information:
Other courses available:
How to Develop a Program Logic Model
Overview of Evaluation Designs
How to Write an Evaluation Plan
Budgeting for Evaluation
Data Collection for Evaluation
Managing an External Evaluation
And more!
Facilitator notes: For more information on evaluation, please go to the Evaluation Resources page located here.

43 Evaluation resources page

44 Questions?

