
1 Evaluation 101
Laura Pejsa Goff, Pejsa & Associates, MESI 2014

2 Objectives
Gain a greater understanding of evaluation and evaluative thinking
Learn about some practical approaches and get familiar with some tools to use
Have an opportunity to apply your learning directly to a real-world case

3 Session Outline
Introductions / Intro to the day
Grounding definitions & terms
Understanding “programs” (purpose & logic)
Evaluative thinking and the evaluation process
Strategies for making evaluation desirable & usable
Debrief, questions, & close

4 Metaphors: Your Ideas about Evaluation
Think of one object that represents your ideas and/or feelings about evaluation
Prepare to explain your choice
Share yours with the person sitting next to you and notice common themes
Prepare to share your common themes with the group.

NOTES TO PRESENTER: This exercise is adapted from Preskill and Russ-Eft’s activities book (see reference list). The purpose of the activity is to get participants to think about the ideas they hold (often unconsciously) about evaluation. It is designed to draw them into the conversation about evaluation and to consider the implications their ideas have for the practice of evaluation in their organization. You will need paper and markers for each group and a flipchart/whiteboard/paper for recording the large-group discussion.

ACTIVITY STEPS
Give participants the instructions shown on the slide and distribute paper and markers.
Give participants about 3 minutes to draw their images and label them.
At the end of 3 minutes, ask them to turn to the person sitting next to them. Each person should explain his/her drawing. Give participants approximately 3-5 minutes for this conversation.
Bring the group back together and ask people to comment on themes the pairs noticed and discussed. Keep a running list on flipchart/whiteboard/paper to indicate positive contributions evaluation makes as well as concerns/issues.
You can return to these drawings toward the end of the presentation to see if there are things people would add about the positive contributions evaluation can make, to see if they have shifted perspective at all about the concerns, and to ask if there are questions/concerns still lingering.

5 E-VALU-ation
"Value" is the root word of evaluation
Evaluation involves making value judgments, according to many in the field
J. A. King

6 Traditional definition: Michael Scriven
"The systematic determination of the merit, worth (or value) of an object"
(from Michael Scriven, 1967, and the earlier Program Evaluation Standards)

7 Important concepts in this definition
SYSTEMATIC means that evaluators use explicit rules and procedures to make determinations
MERIT is the absolute or intrinsic value of an object
WORTH is the relative or extrinsic value of an object in a given context

8 An Alternative Definition: Michael Quinn Patton
Systematic collection of information about the activities, characteristics, and results of programs to (1) make judgments about the program, (2) improve or further develop program effectiveness, (3) inform decisions, and/or (4) increase understanding. Done for and with specific intended primary users for specific, intended uses.

9 Commonalities among definitions
Evaluation is a systematic process
Evaluation involves collecting data
Evaluation is a process for enhancing knowledge and decision making
Evaluation use is implicit or explicit
Russ-Eft & Preskill (2009, p. 4)

10 Discussion: Why Do Evaluation?
What might we gain from engaging in evaluation or an evaluative process? Why is it in our interest to do it? Why is it in the interest of the people we serve? What are the benefits?
Put in concrete terms: a victim survivor comes to you and asks why she would seek your services and report being sexually assaulted in your jurisdiction. What compelling information do we have to tell her about our results and the benefits we have realized for other victim survivors?

11 From the textbooks… evaluation purposes
Accreditation
Accountability
Goal attainment
Consumer protection
Needs assessment
Object improvement
Understanding or support
Social change
Decision making

12 One basic distinction… Internal vs. External
INTERNAL evaluation
Conducted by program employees
Plus side: Knowledge of the program
Minus side: Potential bias and influence

13 EXTERNAL evaluation
Conducted by outsiders, often for a fee
Plus side: Less visible bias
Minus side: Outsiders have to gain entrée and have less first-hand knowledge of the program

14 Scriven's classic terms
FORMATIVE evaluation
Conducted during the development or delivery of a program
Provides feedback for program improvement

15 Scriven's classic terms
SUMMATIVE evaluation
Typically done at the end of a project or project period
Often done for other users or for accountability purposes
Stake: “When the cook tastes the soup, that’s formative. When the guests taste the soup, that’s summative.”

16 A new(er) term from Patton
DEVELOPMENTAL evaluation
Helps develop a program or intervention
Evaluators are part of the program design team
Uses systematically collected data

17 What is the evaluation process?
Every evaluation shares similar procedures

18 Patton’s Basics of Evaluation:
What? So what? Now what?

19 General Phases of Evaluation Planning
Phase I. Object description: What are we evaluating?
Phase II. Context analysis: Why are we doing an evaluation? What do we hope to learn?
Phase III. Evaluation plan: How will we conduct the study?

We’ve hinted at a process in the last few slides, but to summarize: while no two evaluations are the same, you can expect that your evaluation follows a common process.
Set boundaries for the evaluation: an organization must identify the intended audience for an evaluation, describe what is to be evaluated, and determine the resources available to support evaluative efforts.
Next, an organization will carefully develop the question or questions to be answered by the evaluation.
Once questions have been established, an organization will work with an evaluator to plan the details that will bring the evaluation to life, including data collection methods, how the evaluation will be managed (roles and responsibilities, timeline, etc.), data analysis plans, and reporting requirements.
Next, the instruments are developed, and data are collected, analyzed, and interpreted.
Finally, recommendations and conclusions are reported to the intended users, and these users develop an action plan based on the evaluation.

20 What? Words? Pictures? The key is understanding…

21 “We build the road, and the road builds us.” -Sri Lankan saying
A word about logic models and theories of change… one way to understand a program.

22 Simplest form of a logic model
INPUTS → OUTPUTS → OUTCOMES
In its simplest form, a logic model is a graphic representation that shows the logical relationships between:
The resources that go into the program (INPUTS)
The activities the program undertakes (OUTPUTS)
The changes or benefits that result (OUTCOMES)
Results-oriented planning

23 A bit more detail…
INPUTS: Program investments (what we invest)
OUTPUTS: Activities (what we do) and Participation (who we reach)
OUTCOMES: Short-, medium-, and long-term (what results?)
SO WHAT? What is the VALUE?

24 A simplistic example… (graphic slide showing a sample logic model laid out as Inputs, Outputs, and Outcomes)

25 (image-only slide)

26 What does a logic model look like?
A logic model is a graphic display.
Any shape is possible, but the importance lies in showing the expected causal connections.
Level of detail: simple or complex.
Multiple models: families of models for multi-level or multi-component programs.
Reinforce that a logic model needs to be visually engaging, appropriate in its level of detail, easy to understand, and reflective of the context in which the program operates.

27 Regardless of format, what do logic models and theories of change have in common?
They show activities linked to outcomes.
They show relationships/connections that make sense (are logical); arrows are used to show the connections (the “if-then” relationships).
They are (hopefully) understandable.
They do not and cannot explain everything about a program!
It doesn’t matter if it is a flow chart with boxes and arrows, or a table.
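(Aside, not part of the workshop materials: for readers who find it easier to see the structure outside of a diagram, here is a minimal, hypothetical sketch in Python of how a logic model’s inputs, outputs, and outcomes could be jotted down as plain data while drafting one with stakeholders. All program details below are made up for illustration only.)

```python
# Purely illustrative: a logic model captured as plain data while drafting.
# Every entry here is hypothetical, not drawn from the case in this session.
logic_model = {
    "inputs": ["staff time", "grant funding", "community partners"],
    "outputs": {
        "activities": ["weekly support groups", "24-hour hotline"],
        "participation": ["number of survivors served per month"],
    },
    "outcomes": {
        "short_term": ["participants know what services are available"],
        "medium_term": ["participants report feeling safer"],
        "long_term": ["improved well-being for survivors in the community"],
    },
}

# Print the if-then chain in reading order: inputs -> outputs -> outcomes.
for section in ("inputs", "outputs", "outcomes"):
    print(section.upper())
    value = logic_model[section]
    items = value if isinstance(value, list) else [
        f"{name}: {', '.join(entries)}" for name, entries in value.items()
    ]
    for item in items:
        print("  -", item)
```

The point of the sketch is only that the same if-then chain the diagram shows (inputs lead to outputs, which lead to outcomes) can be written down in any ordered form; the diagram remains the primary tool in this workshop.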

28 The Case

29 The Case: Logic and/or Theory
Draw a picture…
Inputs (what goes into the program to make it possible?)
Outputs (Activities: what do they do? Participation: counts)
Outcomes (what do they think will happen? Short, medium, and long term)

30 What can we evaluate? Context Input(s) Process(es) Product(s)
Daniel Stufflebeam

31 The basic inquiry tasks (BIT)
Framing questions Determining an appropriate design Identifying a sample Collecting data Analyzing data and presenting results Interpreting results “Reporting”

32 Back to the Case: What are our questions?
Evaluation Question #1:
Evaluation Question #2:
Evaluation Question #3:

33 Back to the Case: What do we need to know, and where can we find it?
(Worksheet columns: Evaluation Question | Information Needed | Information Source; rows #1, #2, #3)

34 Possible ways to collect data
Quantitative: surveys, participant assessments, cost-benefit analysis, statistical analysis of existing program data, some kinds of record and document review
Qualitative: focus groups, interviews, observations, appreciative inquiry, some kinds of record and document review

TO AUDIENCE: There are many ways in which you might collect data as part of the evaluation. The data collection methods you choose must align with your information needs. Once you know the kinds of information you need to answer your evaluation questions, you can then explore appropriate data collection methods that align with your budget and available expertise and resources.
You may realize that your organization lacks the expertise or resources to collect and analyze the necessary data. In this case, you will need to explore opportunities to hire an outside evaluator. Provided in this packet is a How-to Checklist for deciding when and how to hire an external evaluator.

35 What are the best methods for your evaluation?
It all goes back to your question(s)…
Some data collection methods are better than others at answering your questions.
Some tools are more appropriate for the audience you need to collect information from or report findings to.
Each method of collecting data has its advantages and disadvantages (e.g., cost, availability of information, expertise required).

TO AUDIENCE: For instance, if your evaluation plan focuses on providing accountability data to a funder regarding the number of participants served and the frequency of service to participants, it doesn't make sense to conduct in-depth interviews and develop detailed case studies of those served by the program. Instead, you are likely to use surveys or some sort of document review that examines participants at the time of service and frequency of service: something that tracks people in a numerical way.
However, if you are interested in knowing more about ways in which your program is failing to meet participant needs, you need to gather evidence of the actual experiences of your participants. In this case, you would need to rely on focus groups, interviews, or possibly surveys to learn more about their needs and the perceived gaps in your service.

NOTE TO PRESENTER: Additional examples are provided so that you may pick examples that are appropriate for your audience. You do not need to use them all; select those that make the most sense for your audience.
More examples: Case studies are best for answering the hows and whys of a program; they often describe something in depth. Similarly, cost-benefit or cost-effectiveness analysis is best used when you want to determine whether the benefits or usefulness of a program exceed the costs associated with running it. You can find out more about what data collection methods are appropriate for your needs by reviewing almost any of the resources found on our Further Reading list included in your packet.

36 Back to the Case: How will we find out?
(Worksheet columns: Evaluation Question | Information Needed | Information Source | Methods; rows #1, #2, #3)

37 Reminder: Importance of Context

38 Desire & Use
How do we make this process palatable, even desirable?
What can we do to make information USE more likely?
Ways of sharing and reporting

39 Debrief & Questions What are the most important take-aways from today’s session? What can you apply in your own work? What questions remain for you?

