Assessing Evidence and Past Performance: 2014 AmeriCorps External Reviewer Training

Topics
 Evidence
 Past Performance

Preview of Assessment Questions
(True/False)
 If a study’s methodology is not clear and the applicant provided a citation, the reviewer should read the abstract online.
 Quality indicators are assessed for every level of evidence.
 A corrective action plan for performance measures demonstrates satisfactory past performance.
(Practice Your Understanding)
 Complete the relevant portion of the Review Assessment Form.

Evidence
 The Notice of Funding Opportunity (NOFO) identifies levels of evidence.
 Applicants are assigned a level of evidence based on:
 Number of studies
 Type of studies
 Whether the evidence is from the applicant’s program or a similar program
 Combinations described in the NOFO
 Additional quality considerations

Evaluation Study Designs & Causal Impact
Study designs compared by their ability to support statements about causal attribution, from strongest to weakest:
 Experimental Design Studies: randomly assigned groups
 Quasi-Experimental Design Studies: statistically matched groups
 Non-Experimental Design Studies: groups that are not statistically matched, or a group compared to itself
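The distinction between these designs comes down to how the comparison group is formed. As a minimal illustration (the roster and function names here are hypothetical, not part of the training materials), random assignment in an experimental design can be as simple as shuffling a participant list and splitting it in half:

```python
import random

def randomly_assign(participants, seed=42):
    # Simple random assignment: every participant has an equal chance
    # of landing in the treatment or the control group, which is what
    # supports causal attribution in an experimental design.
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

students = [f"student_{i}" for i in range(1200)]
treatment, control = randomly_assign(students)
print(len(treatment), len(control))  # 600 600
```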

Levels of Evidence
 No Evidence
 Pre-Preliminary Evidence
 Preliminary Evidence
 Moderate Evidence
 Strong Evidence

No Evidence
The applicant did not collect any data or cite at least one non-experimental study from a similar program.

Pre-Preliminary Evidence
Evidence does not support conclusions about the program’s contribution to observed outcomes.
 The applicant has collected quantitative or qualitative data from program staff, program participants, or beneficiaries.
 The data are used for program improvement or performance measurement.

Pre-Preliminary Examples
 A previous grantee reports that it has met its performance measure targets.
 A previous grantee reports that it did not meet performance measure targets and has changed its program design as a result.
 A previous grantee recruiting opportunity youth to serve as AmeriCorps members gathers feedback from members at the end of the service year (if consistent with the theory of change).

Preliminary Evidence
Evidence supports conclusions about the program’s contribution to observed outcomes.

Preliminary Evidence
 At least one non-experimental study conducted on the proposed program or another similar program that uses a comparable intervention, which either:
 Demonstrates improvement in program participants over time, OR
 Is an implementation study (process evaluation) used to learn about and improve program operations
AND
 The applicant was rated satisfactory on all four quality of evidence standards.

Preliminary Examples
 An outcome study that tracks program participants through a service pipeline and measures participants’ responses at the end of the program
 Pre- and post-test research that determines whether participants have improved on an intended outcome (includes National Performance Measures with a pre/post test)
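To make the pre/post example concrete, here is a minimal sketch with hypothetical scores; the choice of a paired t-test is an assumption on my part, since the training does not prescribe an analysis method:

```python
from scipy import stats

# Hypothetical pre- and post-test scores for the same ten participants.
pre_scores = [52, 61, 48, 55, 67, 43, 59, 50, 62, 58]
post_scores = [58, 66, 55, 54, 75, 51, 63, 57, 70, 64]

# A paired t-test asks whether the mean within-participant change
# differs from zero. Note that this design supports "improvement over
# time," not causal attribution, since there is no comparison group.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```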

Moderate Evidence
A reasonably developed evidence base that can support causal conclusions for the specific program proposed by the applicant with moderate confidence.

Moderate Evidence
 One or more quasi-experimental studies conducted on the proposed program or another similar program that uses a comparable intervention, with positive findings on one or more intended outcomes, OR
 Two or more non-experimental studies conducted on the proposed program with positive findings on one or more intended outcomes, OR
 One or more experimental studies of another relevant program that uses a similar intervention
AND
 The applicant was rated satisfactory on all four quality of evidence standards.

Moderate Examples
 Well-designed and well-implemented quasi-experimental studies that compare outcomes between the group receiving the intervention and a statistically matched comparison group (i.e., a similar population that does not receive the intervention)

Strong Evidence
The evidence base supports causal conclusions for the specific program proposed by the applicant with the highest level of confidence.
 One or more well-designed and well-implemented experimental studies conducted on the proposed program with positive findings on one or more intended outcomes, AND
 The applicant was rated satisfactory on all four quality of evidence standards.

Strong Example
 An experimental study that uses random assignment to select which individuals will receive the intervention and which will not (the control group)

Assessing Evidence
Step 1: Number and type(s) of studies
Step 2: Quality indicators, if applicable
Step 3: Determine level of evidence
Rely only on what is in the application; do not look up additional information.
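The three steps can be read as a rough decision procedure. The sketch below is my own condensation of the level definitions on the preceding slides, not CNCS guidance; the actual NOFO combinations carry more nuance than this encodes:

```python
def evidence_level(n_exp_own=0, n_exp_similar=0, n_quasi=0,
                   n_nonexp_own=0, n_nonexp_similar=0,
                   collected_own_data=False, quality_satisfactory=False):
    """Condensed level-of-evidence logic from the preceding slides."""
    # Strong: 1+ experimental study on the proposed program itself.
    if n_exp_own >= 1 and quality_satisfactory:
        return "Strong"
    # Moderate: 1+ quasi-experimental, OR 2+ non-experimental on the
    # proposed program, OR 1+ experimental study of a similar program.
    if quality_satisfactory and (n_quasi >= 1 or n_nonexp_own >= 2
                                 or n_exp_similar >= 1):
        return "Moderate"
    # Preliminary: at least one non-experimental study.
    if quality_satisfactory and (n_nonexp_own + n_nonexp_similar) >= 1:
        return "Preliminary"
    # Pre-preliminary: the applicant collected its own data.
    if collected_own_data:
        return "Pre-Preliminary"
    return "No Evidence"

# Cf. Practice 2 below: one experimental study of a similar program.
print(evidence_level(n_exp_similar=1, quality_satisfactory=True))  # Moderate
```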

Quality Indicators
 Quality indicators are assessed only for preliminary, moderate, and strong evidence.
 Quality indicators are not assessed for pre-preliminary or no evidence.

Quality Ratings
Satisfactory: The response meets all or most aspects of the standard. The overall quality of the response is at least satisfactory.
Unsatisfactory: The response is low quality and fails to satisfactorily address more than one aspect of the standard. The overall quality of the response is lacking, leaving key aspects open to assumption.

Alignment of Models
 The program model(s) studied is the same as, or nearly the same as, the model the applicant will implement, in a similar context with similar target beneficiaries or entities.

Methodological Quality
The study or studies used rigorous and appropriate research methodologies given the design (e.g., non-experimental, quasi-experimental, experimental), using high-quality data, sufficient sample size/statistical power, and a representative sample to identify effects. The study or studies exhibited internal validity, i.e., any effects identified can be reasonably attributed to the program model given the methodological limitations.

Recency
Studies conducted within the past six years are considered satisfactory. For studies older than six years, a satisfactory rating may be given if there is reasonable confidence that the relevant conditions in which the program operated when studied are the same as, or similar to, the conditions in which the applicant’s program will be operating. For example, for an education program, relevant conditions could include community demographics and educational standards; for an economic opportunity program, relevant conditions could include the economic climate.

Strength of Findings
The findings from the study or studies indicate with confidence that the program model under study had at least one positive and significant effect on target beneficiaries or entities. “Significant” means the results were statistically significant, for example at the 95% confidence level (or p < .05).
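As an illustration of the kind of result behind such a claim (the scores below are hypothetical, and an independent two-sample t-test is just one common way such a finding is produced; studies may use other tests):

```python
from scipy import stats

# Hypothetical outcome scores for beneficiaries who received the
# intervention and for a comparison group that did not.
treatment = [74, 81, 69, 77, 85, 72, 79, 88, 70, 76]
comparison = [68, 70, 65, 72, 74, 61, 69, 75, 66, 71]

t_stat, p_value = stats.ttest_ind(treatment, comparison)
# A positive, statistically significant effect at the 95% confidence
# level corresponds to a positive test statistic with p < .05.
if t_stat > 0 and p_value < 0.05:
    print(f"Positive, significant effect (p = {p_value:.4f})")
```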

Practice 1
Volunteers ‘R Us focuses on building the capacity of organizations to engage volunteers. The report “Volunteer Solutions for the Nonprofit Sector” identified eight steps or functions that should be adopted in order to achieve the greatest return on investment from volunteers. AmeriCorps members in our program assist nonprofit organizations to address all eight steps. Members develop a tailored volunteer program plan with their site. As part of this plan, members develop or refine procedures that collectively maximize the impact of volunteers, especially those volunteer management practices that, according to the Urban Institute’s Volunteer Management Capacity Study, correlate most strongly with volunteer retention (i.e., screening and matching volunteers, recognition activities, and training and professional development for volunteers).
What is the level of evidence?

Answer
Number/Type of Studies: 0
Quality: N/A
Level of Evidence: No Evidence
The applicant does not describe collecting any of its own data, and it is not clear that the two reports described are non-experimental studies. The first study mentioned could be a literature review or an article on best practices not based on research. The Urban Institute study is not described in such a way that we can discern what the study measured or how, so we cannot assume it was a non-experimental study.

Practice 2
The ABC AmeriCorps program proposes to replicate the Reading for Success AmeriCorps model. In 2011, Reading for Success tracked students’ reading and literacy fluency through the first semester of the school year. More than 1,200 kindergarten students were in the study; half were randomly assigned to receive a tutor, and half took part in their school’s usual approach to reading instruction without an AmeriCorps tutor. The study found a statistically significant difference between the treatment and control groups: scores increased 14.9 percent each week for the control group and at a rate of 19.4 percent for the treatment group, resulting in a 77.2 percent difference between the treatment and control students by the end of the fall.
What is the level of evidence?

Answer
Number/Type of Studies: 1 experimental study
Quality
Alignment of Models: Satisfactory
Methodological Quality: Satisfactory
Recency: Satisfactory
Strength of Findings: Satisfactory
Level of Evidence: Moderate
The program cites one experimental study from a program that uses a similar intervention and is rated satisfactory on all four quality indicators.
Note: If Reading for Success, the program that completed the study, were to submit this evidence, the level would be Strong, because an experimental study of the proposed program (as opposed to a similar program) constitutes strong evidence.
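As a side note on the reported figures: they are internally consistent if one assumes (my assumption, not stated in the application) that both groups start from a similar baseline and the weekly growth rates compound over a roughly 15-week fall semester:

$$\left(\frac{1.194}{1.149}\right)^{15} \approx 1.039^{15} \approx 1.78,$$

that is, a gap of roughly 77 to 78 percent, in line with the reported 77.2 percent difference by the end of the fall.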

Practice 3
The model for community resiliency that drives the disaster service activities comes from a 2009 paper published in the Journal of Emergency Management that provides an operational framework for incorporating resiliency into our infrastructure and stresses that resiliency needs to be planned in advance, before systems are damaged. Emergency Management AmeriCorps Program surveyed individuals in our community and found that the top two reasons why individuals were unprepared were “just haven’t thought about it” (27.5%) and “don’t think a disaster is likely” (22.8%). Our program intervention is designed to address both of these factors.
What is the level of evidence?

Answer
Number and Type of Studies: 0
Quality: N/A
Level of Evidence: Pre-Preliminary
The applicant conducted research (a survey) to develop its program design and has therefore “collected quantitative…data from beneficiaries that have been used for program improvement…” The 2009 paper is not relevant because it is not a research study.

Practice 4
Sports Not Drugs (SND) conducted a 20-month longitudinal study of program participants. By the end of the study, 35% of SND students reported engaging in regular, vigorous activity for an hour or more at least five days a week (an increase of 10%). Two additional internal evaluation studies found that youth who participated in the program showed significantly more knowledge about drugs than youth in the comparison group. One of these studies (using pre and post surveys) further found that over time, youth in the program maintained or increased their ability to refuse alcohol, marijuana, or cigarettes, while comparison youth showed a decreased ability to refuse these drugs.
What is the level of evidence?

Answer
Number and Type of Studies: 3 non-experimental studies
Quality
Alignment of Models: Satisfactory
Methodological Quality: Satisfactory
Recency: Satisfactory
Strength of Findings: Satisfactory
Level of Evidence: Moderate
The program cites three non-experimental studies conducted on the proposed program and was rated satisfactory on all quality indicators.

Practice 5
In 2009, Fostering Empowerment conducted a survey of 400 of our participants in our foster youth program and compared them to participants in other foster youth programs in our area. We found that participants were 50% more likely to have graduated from high school, independent of the effects of the participants’ age, race, or ethnicity. This result was statistically significant at the 95% confidence level.
What is the level of evidence?

Answer
Number and Type of Studies: 1 non-experimental study
Quality
Alignment of Models: Satisfactory
Methodological Quality: Satisfactory
Recency: Satisfactory
Strength of Findings: Satisfactory
Level of Evidence: Preliminary
The study is a non-experimental study with a comparison group. No statistical matching appears to have been done, nor was the study longitudinal (i.e., participants tracked over time), so it does not qualify as a quasi-experimental design.
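The matching step that separates quasi-experimental from non-experimental designs can be sketched as follows (hypothetical records and a single covariate of my own choosing; real studies typically match on many covariates, often via a propensity score, and then check balance between groups):

```python
def nearest_neighbor_match(treated, pool, key):
    # Pair each treated record with the comparison record whose
    # covariate value is closest. This is the simplest form of the
    # statistical matching that quasi-experimental designs require.
    return [(t, min(pool, key=lambda c: abs(c[key] - t[key])))
            for t in treated]

treated = [{"id": 1, "age": 16}, {"id": 2, "age": 18}]
pool = [{"id": 10, "age": 15}, {"id": 11, "age": 17}, {"id": 12, "age": 19}]
for t, match in nearest_neighbor_match(treated, pool, "age"):
    print(f"participant {t['id']} matched to comparison {match['id']}")
```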

Past Performance
Reviewers assess the following standard using the five-point rating scale in your handbook: The applicant demonstrates success in solving the identified problem.

How to Assess Past Performance
 The NOFO asked applicants to address past performance in the organizational capability section of the application.
 Past performance may also be addressed in the evidence section of the application (performance measurement or evaluation results) or elsewhere in the program design section.
 Past performance is linked to the identified problem. It is not enough for the applicant to have met performance measures in past years; those performance measures must be connected to the problem described in the application.
 For an applicant that has not met its performance measures in past years, a corrective action plan does not demonstrate success in solving the identified problem.

Next Steps
To check for understanding and verify that you have completed this orientation session, please reference the remaining slides for the two-part assessment of this orientation.

Practice Your Understanding
 Answer the assessment questions at the following link:
 Read and assess the norming/sample application and complete the Evidence and Past Performance sections of the review form. Do not include significant strengths, weaknesses, or other narrative sections.
 You will receive a review form completed by a CNCS staff member. Compare your assessments to those in the key.
 This will complete your verification requirement for this training module.

Practice Self-Assessment
When you receive the key for the practice activity from CNCS, compare your ratings to those of the CNCS standard example. If your rating was more than one point higher or lower than the example provided by CNCS, your assessment may not be normed closely enough to CNCS’s expectations for applicants.