MODULE II: Monitoring and Evaluating Gender-Based Violence Prevention and Mitigation Programs. Anastasia J. Gage, Melissa Dunn.


1 MODULE II Monitoring and Evaluating Gender-Based Violence Prevention and Mitigation Programs Anastasia J. Gage Melissa Dunn

2 Learning Objectives
By the end of this session, participants will be able to:
differentiate between monitoring and evaluation
write goals and SMART objectives for gender-based violence (GBV) programs
design a logic model for a GBV program
identify criteria for indicator selection and information sources for GBV indicators
discuss factors to consider when choosing an evaluation design
Program managers, donors, and stakeholders often want answers to the following questions: Are the services available? Are the services accessible? Is their quality adequate? Is the target population being reached? Were there improvements in desired outcomes as a result of the program? Monitoring and evaluation activities help to provide answers to these questions. The goal of this module is to strengthen participants’ skills in monitoring and evaluating gender-based violence (GBV) prevention and mitigation programs. Specifically, by the end of this session, participants will be able to differentiate between monitoring and evaluation; write goals and SMART objectives for GBV programs; design a logic model for a GBV program; identify criteria for indicator selection and information sources for GBV indicators; and discuss factors to consider when choosing an evaluation design.

3 At Least 4 Different Categories of GBV Interventions
Community mobilization
Behavior change communication
Service delivery
Laws and public policy
GBV programs generally fall into four categories: community mobilization; behavior change communication; service delivery; and law and public policy. Community mobilization programs tend to mobilize multiple community groups including religious organizations, traditional leaders, schools, sports groups, police stations, and NGOs to establish priorities for addressing GBV and to identify local resources to fight GBV. Community mobilization programs engage, support, and empower communities to challenge cultural norms that contribute to overall levels of violence by using participatory approaches. Behavior change communication programs tend to challenge prevailing beliefs and norms that contribute to the acceptability and perpetuation of violence. They also influence individuals’ awareness, attitudes, behavior, and external environment. In addition, BCC programs try to reduce survivors’ isolation and create an environment conducive to change by acknowledging the widespread nature of GBV. Service delivery programs generally help to detect cases of violence and provide victims with the care that they need; document information in ways that can be used as evidence in court; and address consequences of GBV for women’s health. Service delivery programs can also provide emergency contraception to women in advance of need, assist women with safety planning, and facilitate women’s access to additional services including legal and social help. Legal and policy programs generally serve to make GBV a legitimate human rights issue. If well implemented and enforced, they can protect women from GBV and hold abusers accountable for their behaviors. Institutional policies provide guidance to staff members and send a message that violence will not be tolerated. Policy efforts also influence the allocation of appropriate resources to GBV prevention/mitigation.

4 M&E Challenges of GBV Programs (1)
Hard to separate contribution of multiple strategies
Lack of investment in rigorous evaluation, hence limited evidence of effectiveness
Non-standardized GBV definitions and indicators, making it difficult to enhance comparability of data
Climate of urgency around results, but behavior change is long-term
One of the major challenges of evaluating GBV programs is how to attribute change. Many activities are ongoing, so it is difficult to say how much of an observed change is due to a particular activity. GBV programs also tend to be multifaceted, making it necessary to have multiple measures across disciplines and sectors. In general, programs have not invested in rigorous evaluations. Sometimes, this is due to the program design. For example, if a program provides complete coverage, it is impossible to have a control or comparison group. But even in situations where programs are not full coverage, use of strong quantitative evaluation designs is rare. As a result, many programs have limited evidence of their effectiveness. Although many people agree that it is important to prevent and mitigate gender-based violence, there is no consensus on how to define indicators and make them operational. Few programs collect data or measure indicators in a systematic way, making it difficult to compare programs across space and time. An additional challenge is that there is often a climate of urgency around results, whereas gender norms and GBV trends take a relatively long time to change.

5 M&E Challenges of GBV Programs (2)
Sensitive nature of the information and hidden forms of violence (homicide, trafficking, female genital mutilation)
Changing violence levels hard to measure and interpret
Ethical and methodological issues
Monitoring and evaluating GBV programs brings additional challenges. Many indicators for behavior change rely on household surveys to track knowledge, attitudes, and practices related to GBV. The quality of self-reports on GBV victimization or perpetration is suspect, given the sensitive nature of the relevant behaviors. Moreover, there is a culture of silence around GBV and normalization of related behaviors in some settings. Thus, there is considerable potential for under-reporting, especially in cases where violence is hidden, as with female homicide and sex trafficking. If the quality of data is constant over time, under-reporting of GBV will not affect conclusions on the direction of trends, although the absolute levels of GBV will be incorrect. However, if the quality of data varies over time, trends can be distorted. It is common for reporting of GBV to improve over time as programs acknowledge the widespread nature of GBV and send a message that it will not be tolerated. Finally, M&E of GBV programs raises a number of ethical and methodological challenges. For example, care has to be taken to ensure that the victim is not put in any danger by data collection, especially as some of the women and children may be in vulnerable or abusive situations that might be uncovered by data collection. Protocols that handle these situations while also maintaining confidentiality need to be developed ahead of time. Data collection and research teams also have to be carefully selected and need to receive specialized training and ongoing support. Active efforts must be made to minimize any possible distress caused by the research.

6 Monitoring versus Evaluation
MONITORING = Tracking changes in program performance over time
EVALUATION = Assessing whether objectives have been met; assessing the extent to which the program is responsible for observed changes
Now that we have discussed some of the challenges of monitoring and evaluating GBV programs, let us move to defining basic concepts so that we can all speak the same language. Many M&E experts maintain a strict distinction between monitoring and evaluation. Monitoring is an ongoing, continuous process of data collection. Monitoring involves counting what we are doing and tracking changes in program performance over time. Monitoring is sometimes referred to as process evaluation. Evaluation is the use of social research methods to systematically investigate how well program activities have met expected objectives. Evaluation requires a special study design. Evaluation sometimes requires a control or comparison group.

7 Illustration of Program Monitoring
[Graph: a program indicator on the Y-axis plotted against time on the X-axis, from program start to program end, with vertical lines marking data-collection points]
An illustration of program monitoring over time could look like this. The program indicator being measured on the Y-axis could be any element of the program that needs tracking, such as the number of community events carried out with anti-GBV messages. The X-axis is the time over which a program runs. Each vertical red line in the graph represents a point at which data are collected to measure the indicator. As this graph illustrates, monitoring requires data to construct indicators for your outcomes of interest at multiple points while the program is ongoing, both to track incremental program performance and to determine if activities need adjustment during the intervention in order to improve desired outcomes. At a minimum, the program must have all data necessary to calculate your indicator before/near the start of the related intervention and at (or after) the end of the intervention. Monitoring addresses the following questions: Are activities carried out as planned? What services are provided, to whom, when, how often, for how long, and in what context? Are the services accessible? Is their quality adequate? Is the target population being reached?
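As a concrete sketch of the tracking described above, the following computes a simple monitoring indicator, the percent of a planned annual target of anti-GBV community events held to date, at repeated data-collection points. All targets and event counts here are invented for illustration, not drawn from any real program:

```python
# Monitoring sketch: track one indicator at repeated data-collection points.
# Indicator: percent of the annual target of anti-GBV community events held
# to date. All figures below are hypothetical.

ANNUAL_TARGET = 120  # planned events for the year (hypothetical)

# Cumulative events held at each quarterly data-collection point (hypothetical)
events_by_quarter = {"Q1": 18, "Q2": 45, "Q3": 80, "Q4": 118}

def percent_of_target(events_held, target=ANNUAL_TARGET):
    """Indicator value: cumulative events as a percent of the annual target."""
    return round(100 * events_held / target, 1)

for quarter, held in events_by_quarter.items():
    print(f"{quarter}: {percent_of_target(held)}% of target")
# Comparing each reading against a pro-rata expectation (e.g., 25% by Q1)
# is what lets managers adjust activities while the program is ongoing.
```

The point of the sketch is only that monitoring is repeated measurement against a plan, so that a shortfall can be spotted and corrected mid-course rather than discovered at the end.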

8 Illustration of Program Impact
[Graph: change in program outcome plotted against time, with one trend line for areas with the program and one for areas without; the gap between the two lines at program end represents the program impact]
Evaluation addresses the following questions: What outcomes are observed? Does the program make a difference? To what extent is the program responsible for the observed changes? The latter question is more directed at measuring program impact. Ideally, to measure program impact, baseline data are collected at the start of the program and another phase of data collection occurs at the end of the program, rather than at repeated intervals while the program is being implemented. At the same time, baseline and follow-up measurements are made in areas without the program. Attributing changes in outcomes to a particular program/intervention requires one to rule out other possible explanations. We need to control for external or confounding factors that may account for the results. Therefore, extensive knowledge of sampling and statistical analysis is required for measuring program impact. The most rigorous study designs for impact measurement involve a randomly assigned control group. Determining the difference in outcome between areas with the program and areas without the program is an analytical exercise rather than a direct measurement. There are some similarities between monitoring and evaluation. Both require knowledge of baseline values and final values, often with an interim measurement during the project. However, evaluation differs crucially from monitoring in that the goal of evaluation is to determine whether desired outcomes have been achieved and, in the case of impact evaluation, how much of the change in outcome is due to the program or intervention. Impact evaluation is fundamentally an analytical exercise to help decision-makers understand when, how, and to what extent a program has achieved its desired outcomes. However, relatively few programs go as far as establishing cause-and-effect between the program and the change.
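The "analytical exercise" described above can be made concrete with a simple difference-in-differences calculation: the change observed in the program area minus the change observed in the comparison area, which nets out the secular trend that would have happened anyway. The numbers below are purely hypothetical, chosen only to illustrate the arithmetic:

```python
# Difference-in-differences sketch for estimating program impact.
# Outcome (hypothetical): percent of respondents who say that violence is
# not an acceptable way of dealing with conflict.

def diff_in_diff(program_start, program_end, comparison_start, comparison_end):
    """Impact = change in program area minus change in comparison area."""
    program_change = program_end - program_start        # with-program trend
    comparison_change = comparison_end - comparison_start  # without-program trend
    return program_change - comparison_change

# Baseline and endline measurements in percent (all values hypothetical):
impact = diff_in_diff(program_start=40.0, program_end=65.0,
                      comparison_start=42.0, comparison_end=50.0)
print(impact)  # (65-40) - (50-42) = 17.0 percentage points
```

A real impact evaluation would add sampling weights, confidence intervals, and controls for confounders, but the core logic is this subtraction of the without-program trend from the with-program trend.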

9 Activity 1: Is It Monitoring or Evaluation?
1. The Ministry of Women’s Affairs wants to know if programs carried out in Province A are reducing the prevalence of intimate partner violence (IPV).
2. USAID wants to know how many villages in Region B have been reached with anti-GBV messages by your program this year.
3. A country director is interested in finding out if the care provided to rape victims in public clinics meets national standards of quality.

10 Purpose of Monitoring and Evaluation
The purpose of monitoring and evaluation is to measure program effectiveness. Both are essential in helping managers and implementers acquire the information and understanding they need to make informed decisions about program operations. Monitoring and evaluation can be used to demonstrate to planners and other decision-makers that program efforts have truly had measurable impact on the outcomes of interest. Monitoring and evaluation also help you make the most effective and efficient use of resources. They help you determine exactly where your program is on track and where you need to consider making corrections. And M&E helps you come to objective conclusions regarding the extent to which your program can be judged a “success.” In other words, monitoring and evaluation together provide data necessary to guide strategic planning, to design and implement programs and projects, and to allocate and re-allocate resources in better ways.

11 Monitoring and Evaluation Questions
Were resources made available to the program in the quantities and at the times specified by the program plan?
Were the program activities carried out as planned?
Which program activities were more effective and which were less effective?
Did the expected changes occur? How much change occurred?
Can improved health outcomes be attributed to program efforts?
Did the target population benefit from the program and at what cost?
M&E questions help focus and provide structure to monitoring and evaluation activities, help guide M&E planning, facilitate decision making about what tools and methods to use, and inform discussions about how M&E results can be used to improve health programs.

12 M&E Across Program Life Cycle
[Diagram: a five-stage program life cycle]
1 ASSESSMENT: What is the nature of the (health) problem?
2 STRATEGIC PLANNING: What primary objectives should my program pursue to address this problem?
3 DESIGN: What strategy, interventions, and approaches should my program use to achieve these priorities?
4 IMPLEMENTATION/MONITORING: How do I know the activities are being implemented as designed? How much does implementation vary from site to site? How can the program become more efficient or effective?
5 EVALUATION: How do I know that the strategy is working? How do I judge if the intervention is making a difference?
At what points of a program life cycle do you do M&E? M&E occurs at all stages of the program life cycle. In phase one of the program life cycle, sponsors and program managers identify the health needs or problems to be addressed by the program. Phase two is the strategic planning stage. Sponsors and program managers are considering the goals they hope to accomplish through program activities. Information is needed to make decisions about how to allocate money and effort in order to meet identified program needs. Phase three is the program design stage. The program may be designed from scratch or an already existing program may be revised or adapted to meet the specified goals and objectives. At this stage, information is needed to address the question: What strategy or interventions should the program use to achieve these objectives? M&E activities may include pilot testing and assessing the effectiveness and feasibility of alternative methods of service delivery. Phase four is the monitoring stage. The program staff members are trying to operationalize the program, adapt it as necessary to a particular setting, resolve problems that arise, and get the program to a point where it is running smoothly. Information is needed that describes how the program is operating and how the program can be improved: To what extent has the program been implemented as designed? How much does implementation vary from site to site? How can the program become more efficient or effective?
Phase five is the evaluation stage. The program has become established and information is needed on the extent to which a program’s highest-priority goals have been achieved. For M&E to be successful, strategic program planning and developing an M&E strategy should go hand in hand.

13 Activity 1 Discussion: Key Elements of M&E Plan
Do participants’ programs have M&E plans?
What are the components of the plans? What sections do the plans include?
Distribute Handout 1: Template for M&E Plan
What is missing from their programs’ M&E plans?
What additional components do their programs’ M&E plans include?

14 Activity 2: Identifying the Problem
Organize into 4 groups representing the different categories of GBV intervention: community mobilization, behavior change, service delivery, laws and public policy.
Review Handout 2: Gender-Based Violence Case Study.
What is the problem from your group’s perspective?
What are 2 or 3 barriers your GBV intervention group might face when trying to help women like Marie?

15 Program Goals
A goal is a broad statement of a desired, long-term outcome of a program. Goals represent general, big-picture statements of desired results. Examples:
To improve the lives of women who experience gender-based violence
To decrease injury and mortality due to gender-based violence
To eliminate gender-based violence
We hear a lot of talk about how important it is to have goals. But what exactly is a goal? A goal is defined as a broad statement of a desired, long-term outcome of a program; that is, a broad statement of what a program hopes to accomplish. A goal is an end the program strives to attain. Setting goals is a way to focus your attention on what you want in the future. You can also think of a goal as a general statement of an intended outcome. So how do you recognize a goal when you see one? First, there is no single clear-cut performance that will indicate whether the goal has been met. If there is a single behavior that will indicate the presence of a long-term outcome, you are not dealing with a goal. Examples of goals are to: improve the lives of women who experience gender-based violence; decrease injury and mortality due to gender-based violence; eliminate gender-based violence. Here are a few tips for writing program goals. Each goal should contain only one idea. Keep goal statements separate from statements of how goals are to be attained. Separate goals from indicators. The two are related but they are not the same thing. Keep the goal focused, clear, and crisp. The goal statement should be understandable. Avoid difficult grammatical constructions and professional jargon. The general public should be able to make sense of the goal.

16 Program Objectives
Objectives are statements of desired, specific, realistic, and measurable program results. The “SMART” acronym:
Specific: identifies concrete events or actions that will take place
Measurable: quantifies the amount of resources, activity, or change to be expended and achieved
Appropriate: logically relates to the overall problem statement and desired effects of the program
Realistic: provides a realistic dimension that can be achieved with the available resources and plans for implementation
Time-bound: specifies a time within which the objective will be achieved
An objective is a statement of a desired, specific, realistic, and measurable program result that must be attained in order to accomplish a particular program goal. Objectives provide direction for decision making and criteria against which program results or outcomes are measured. SMART is an acronym used to help you define and set effective objectives. Specific – Ensure that the events, actions, or outcomes are stated in a clearly-defined manner. Be precise about what you intend to achieve. If an objective is specific or precise, there is only one way in which it can be interpreted. Measurable – Ensure that your objective can be quantified. Quantifying your objectives can help you to focus on what is expected. It can also help you monitor and assess actual performance against the set objectives. If an objective cannot be measured, it will be difficult for you to demonstrate that you have accomplished the objective. Appropriate – This means that the objective is important for meeting the long-term goals of the program or that it is important to the organization or target population. Realistic – Make sure that the objective can be achieved with the resources, time, and personnel available. Time-bound – Although many programs have good intentions, without a time definition, the objective lacks commitment. Time-bound means clearly stating when the objective will be achieved. Source: GAP 2003

17 How to Write Program Objectives
To [action]
the [specify what knowledge, attitudes, skills, behaviors]
among [specific population or segment]
from-to [from baseline to desired level], or by [x percent], or to [specific level]
by [time frame]
A properly stated objective is action-oriented. It starts with the word “to” and is followed by an action verb. Objectives address questions of “what”, “who”, “how much,” and “when” but not “why” or “how.” In other words, an objective is the aim of an action. The objective implies a specific direction for the action to take or a specific quality of work to be accomplished. The objective reflects desired outcomes for specific individuals, groups, or organizations within a given period of time.

18 Activity 3: Is it a Goal or an Objective?
To reduce GBV
To increase the % of men and women in the beneficiary population who believe that violence is not an acceptable way of dealing with conflict from 40% in 2002 to 80% by 2007
To increase the number of domestic violence clients recommended for counseling who utilize counseling services by 30%

19 Activity 4: Defining Program Goals and Objectives
Return to small groups from previous activity
Develop one goal and two objectives for a program that will address problems identified in Handout 2 (GBV Case Study)
After 15 minutes, a member of each group will share the program goal and objectives

20 Developing Program Strategies
Role of a Conceptual Framework

21 Conceptual Frameworks
Diagram that identifies and illustrates the relationships between all relevant systemic, organizational, individual, or other salient factors that may influence program/project operation and the successful achievement of program or project goals.
M&E purpose:
To show where the program fits into the wider context
To clarify assumptions about causal relationships
To show how program components will operate to influence outcomes
To guide identification of indicators
To guide impact analysis (causal pathways)
Conceptual frameworks are sometimes called “research” or “theoretical” frameworks. A conceptual framework is a useful tool for identifying and illustrating a wide variety of factors and relationships that may affect program success. Conceptual frameworks take a broad view of the program itself in order to clarify the relationship of its activities and its main goals to the context in which it operates. The design of the conceptual framework should show the interrelationships between all factors that are relevant to achieving the program’s goals. These factors can be systems, organizations, government or institutional policies, infrastructure, population characteristics, or other features of the operational landscape that may help or hinder the program’s success. Constructing a conceptual framework thus clarifies the complete context affecting the outcome of a program or intervention. Constructing a conceptual framework also clarifies your assumptions about the causal relationships between significant features of the program context, clarifying aspects that your planned intervention may affect as well as other factors beyond your control. Identifying the variables that factor into program performance and organizing the explicit ways they interact with each other sets the stage for outlining the results you can reasonably expect from your program activities.
Clarifying all of these issues is a critical step toward designing valid measures for analyzing or evaluating the success of the program’s interventions. Additional Background: Conceptual frameworks are used in the sciences to select key variables for analysis. By constructing this kind of analytical framework as the foundation within which your program will design, plan, and implement the monitoring and evaluation of program performance, real possibilities and limitations become clearer to everyone involved.

22 Illustrative Conceptual Model for Intimate Partner Violence (IPV)
[Diagram: ecological model of IPV with nested levels]
Society: norms, laws, notions of masculinity, gender roles
Individual perpetrator: being male, witnessing marital violence as a child, being abused as a child, absent or rejecting father, alcohol use
Community: poverty, unemployment, family isolation, community acceptance of violence
Relationship: marital conflict, male control of wealth, male control of decision making in the family
Source: Heise 1998

23 Activity 5: Developing Program Activities
Return to small groups from previous activity
Develop 3 program activities for the goal and objectives that the group had formulated earlier to address the problems identified in the GBV case study
In 15 minutes, a member of each group will share the program strategies and activities

24 Logic Models

25 What is a Logic Model? Logic models are diagrams connecting program inputs to processes, outputs, outcomes, and impact as they relate to a specific problem or situation. Logic models show what resources your program will need to accomplish its goals, what your program will do, and what it hopes to achieve, emphasizing the links between all these things.

26 Logic Model Components
Input – Resources used in a program (e.g., money, staff, curricula, and materials)
Process – Activities conducted by the program to accomplish its objectives (e.g., outreach, advocacy, training, BCC)
Output – Direct results of program activities (e.g., people trained, materials distributed, couples counseled)
Outcome – Program results at the population level (e.g., knowledge, attitudes, behavior, coverage)
Impact – Long-term change in morbidity, mortality, and fertility
Generally, logic models consist of: Input: The various resources that go into a program. For example, what kind of staff, equipment, materials, and funding are at your disposal? Process: Activities are the actual interventions that take place as part of the program. For example, conduct an educational campaign on women’s rights. Output: The direct product of a program’s activities. For example, the number of educational packets you sent to various sites. Outcome: The short-term or intermediate results of the program at the population level. A short-term outcome example could be increased awareness of organizations that address violence against women and girls (or VAW/G). An intermediate outcome example might be decreased prevalence of sexual harassment in your community in the past year. Impact: The long-term outcome of the program; for example, reduced STI incidence among men in your community.

27 Activity 6: Identifying Logic Model Components
Distribute Handout 5
Each of the scenarios in the handout corresponds to one of the five components in the logic model: input, process, output, outcome, and impact. As a group, decide which component each scenario illustrates.
Scenario #1: The number of stories printed, the number of newspapers in which they were printed, where they were printed (as well as each newspaper’s circulation and readership demographics).
Scenario #2: In the next six months, we expect to see an increase of 25 percent in the proportion of youth and adults who can report the domestic violence helpline telephone number.
Scenario #3: Your coalition has recruited a staff person to help promote enforcement of existing local laws on domestic violence, to collaborate with other coalitions and community organizations, and to gather relevant, comparable data on gender-based violence.
Scenario #4: The newly recruited staff person creates a working group to foster collaboration among community organizations working on GBV prevention and mitigation.
Scenario #5: You want to promote public awareness of domestic violence and women’s rights, so you collaborate with local newspaper reporters to develop a series of stories on GBV.
Scenario #6: Through local surveys and the use of comparable data, you learn that a lower proportion of young men in your community believe that a man has the right to beat his wife.
Scenario #7: An increase in the percentage of domestic violence victims who made contact with a VAW/G organization.
Scenario #8: An overall decrease in the prevalence of domestic violence in your community.

28 Handout 6: Illustrative Logic Model for Provider Training Program
INPUT: Human and financial resources to develop training materials and implement the training program
PROCESS: Develop GBV clinical training curriculum; conduct training-of-trainers workshops; conduct GBV training for providers
OUTPUT: Providers trained in GBV; improved provider attitudes toward GBV; increased provider ability to identify, counsel, care for, and refer GBV victims
OUTCOME: Increased awareness of GBV as a health issue; increased disclosure of GBV; increased knowledge and utilization of GBV services
IMPACT: Improved health and safety of GBV victims
Distribute Handout 6
This logic model presents a straightforward view of a project designed to improve providers’ knowledge, attitudes, and practices and increase providers’ awareness of GBV as a public health problem and a violation of human rights. As you can see, a series of “if-then” relationships connect the components of the logic model: if resources are available to the program, then program activities can be implemented; if program activities are implemented successfully, then certain outputs and outcomes can be expected. The logic model does not try to account for all of the factors that may be influencing a program’s operation and results, as a conceptual framework does. Instead, the logic model focuses on the program’s inputs, activities, and results. This narrow focus assists program managers and M&E planners as they clarify the direct relationships among elements of particular interest within a specific program.

29 Role of Logic Model
Program planning: Helps one think through program strategy – where you are and where you want to be
Program management: Helps one track and monitor operations to better manage results
Communication: Shows stakeholders at a glance what the program is doing and what it expects to achieve
Program Planning. The logic model is a valuable tool for program planning and development. The logic model structure helps you think through your program strategy—to help clarify where you are and where you want to be. Program Management. Because it "connects the dots" between resources, activities, and outcomes, a logic model can be the basis for developing a more detailed management plan. Using data collection and an evaluation plan, the logic model helps you track and monitor operations to better manage results. It can serve as the foundation for creating budgets and work plans. Communication. A well-built logic model is a powerful communications tool. It can show stakeholders at a glance what a program is doing (activities) and what it is achieving (outcomes), emphasizing the link between the two.

30 Role of Logic Model
Consensus building: Promotes common understanding about what the program is, how it works, and what it is trying to achieve
Fundraising: Can structure and streamline grant writing
Monitoring and evaluation: Provides thorough understanding of what resources a program has to work with, what it is doing, and what it hopes to achieve
Consensus-Building. Developing a logic model builds common understanding and promotes buy-in among both internal and external stakeholders about what a program is, how it works, and what it is trying to achieve. Fundraising. A sound logic model demonstrates to funders that you have purposefully identified what your program will do, what it hopes to achieve, and what resources you will need to accomplish your work. It can also help structure and streamline grant writing. Monitoring and evaluation: The cornerstone of effective evaluation is a thorough understanding of a program: what resources it has to work with, what it is doing, what it hopes to achieve, for whom, and when. In conducting an evaluation, it is tempting to focus most of your attention on data collection. However, your evaluation efforts will be more effective if you start with a logic model. Going through the logic model process will help ensure that your evaluation will yield relevant, useful information.

31 Activity 7: Developing a Logic Model
Return to small groups from previous activities Develop a logic model for two of the activities of your program After 20 minutes, a member of each group will share the logic model As you draft each component of the logic model, consider the “if-then” relationship between the components. If you cannot make a connection between each component of the logic model, you should identify the gaps and adjust your work. This may mean revising some of your activities to ensure that you are able to achieve your outcomes, or revising intended outcomes to be feasible with available resources. Source: The Innovation Network. Logic Model Workbook.
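The "if-then" chain described in the activity can be sketched, purely for illustration, as a small data structure with a helper that walks the links so that gaps become visible. The component names and example entries below are hypothetical, not taken from the module:

```python
# Illustrative logic model for a hypothetical GBV community-dialogue program.
logic_model = {
    "inputs":     ["trained facilitators", "IEC materials"],
    "activities": ["community dialogues on GBV norms"],
    "outputs":    ["no. of dialogues held", "no. of participants reached"],
    "outcomes":   ["reduced acceptance of wife beating"],
}

# The conventional left-to-right order of logic model components.
CHAIN = ["inputs", "activities", "outputs", "outcomes"]

def check_if_then(model):
    """Return the 'if-then' pairs; an empty or missing component is a gap
    that signals the model needs revising, as the activity instructs."""
    pairs = []
    for a, b in zip(CHAIN, CHAIN[1:]):
        if not model.get(a) or not model.get(b):
            raise ValueError(f"gap in logic model between {a} and {b}")
        pairs.append((a, b))
    return pairs

print(check_if_then(logic_model))
# [('inputs', 'activities'), ('activities', 'outputs'), ('outputs', 'outcomes')]
```

If any component were left empty, the helper would flag the missing link, mirroring the activity's advice to identify gaps and adjust activities or outcomes accordingly.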

32 Results Frameworks

33 Results Framework Purposes Other terms used
Diagram that identifies steps or levels of results and illustrates the causal relationships linking all levels of a program's objectives
Purposes:
Provides a clear focus on the causal relationships that connect the incremental achievement of results to overall program impact
Clarifies how project/program components and factors relate, suggesting ways and means of objectively measuring the achievement of desired ends
Other terms used: strategic framework
A results or strategic framework is a useful tool for identifying and illustrating the focal points of a project for monitoring and evaluation: the results. A results framework diagrams the relationships between the incremental results of the key activities, all the way up to the overall objective or goal of the intervention. This is the type of framework used by USAID in what the agency calls Performance Monitoring Plans. Developing a framework focused on program results clarifies the points at which results can be monitored and evaluated. In addition, results frameworks clearly depict the causal relationships that the project design assumes: for instance, that clinical training of providers will lead to higher-quality care at facilities where (more) trained providers work, and thus ultimately to improved health status or health outcomes for the targeted population. The effectiveness of the project's related activities can be measured at each step along the way.

34 SO1: Increased Utilization of Reproductive Health Services
Illustrative Results Frameworks
SO1: Increased utilization of reproductive health services
IR1: Strengthened sustainability of GBV programs
IR1.1: Improved policy environment for GBV prevention and response
IR1.2: Strengthened NGO advocacy for GBV prevention
IR2: Expansion of high-quality health services for GBV victims in the public and private sectors
IR2.1: Increased availability of GBV screening and referral
IR2.2: Improved provider attitudes toward GBV
IR2.3: Improved provider competence in GBV screening, referral, care, and counseling
Distribute Handout 7. Here is an example from a donor program. The framework has three levels. At the top is the strategic objective (SO), which describes the "what": what service, commodity or practice do we want beneficiaries to use? In this case, we want beneficiaries to increase the use made of reproductive health services. The middle tier of the framework includes two intermediate results (IRs), which are essential steps to be achieved through the chosen delivery strategies: the "how", i.e., how do we propose to increase the utilization of reproductive health services? The two IRs are: (1) strengthened sustainability of GBV programs and (2) expansion of high quality health services for GBV victims in the public and private sectors. The third tier consists of the sub-IRs, which are essential in achieving the IRs to which they are linked. For example, an improved policy environment and strengthened NGO advocacy for GBV are important steps for strengthening the sustainability of GBV programs. Similarly, the program will expand high quality health services for GBV victims in the public and private sector through increasing the availability of GBV screening and referral, improving provider attitudes toward GBV, and improving provider competence in GBV screening, referral, care and counseling.

35 Activity 8: Develop Monitoring and Evaluation Questions
Return to your previous group activities where you developed your program goals and objectives and logic model Come up with at least 1 monitoring question and 1 evaluation question After 10 minutes, a member of the group will share the M&E questions

36 Selecting Indicators

37 What is an Indicator? An indicator is a specific, observable, and measurable characteristic that can be used to show changes or progress a program is making toward achieving a specific outcome. Once you have written measurable objectives and developed a logic model for your M&E plan, you can define the outcomes you want to measure. Measuring outputs and short-term and intermediate outcomes can provide valuable information about how the program is functioning and whether activities implemented are accomplishing what they were intended to do. Long-term outcomes are often difficult to measure on an annual basis given that it takes a long time to demonstrate change. The next step involves selecting the indicators that you will use to monitor and evaluate your program. What is an indicator? An indicator is a specific, observable and measurable characteristic that can be used to show whether a program is making changes toward achieving a specific outcome. There should be at least one indicator for each outcome. The indicator should be focused, clear and specific. The change measured by the indicator should represent progress (hopefully) that the program has made.

38 Characteristics of Good Indicators
Valid: accurate measure of a behavior, practice or task
Reliable: consistently measurable in the same way by different observers
Precise: operationally defined in clear terms
Measurable: quantifiable using available tools and methods
Timely: provides a measurement at time intervals relevant and appropriate in terms of program goals and activities
Programmatically important: linked to the program or to achieving the program objectives that are needed for public health impact
What makes a good indicator? Fundamentally, good indicators must be valid and reliable measures of a program's results. The other desirable characteristics listed here all serve in a sense as aids that help guide the design of indicators and metrics toward this ideal or goal of valid, reliable indicators. Valid: An indicator is valid when it is an accurate measurement of the output or outcome of the program. Reliable: An indicator is reliable when it minimizes measurement error, that is, when it is possible to measure it consistently over time, regardless of the observer or respondent. Precise: Indicators should be operationalized with clear, well-specified definitions. Timely: Indicators should be measured at appropriate intervals relevant to the program goals and activities. Programmatically important: Indicators should link to program activities and objectives. Additional Background These technical issues will be covered as ideals toward which to strive. Later slides could discuss a number of caveats and trade-offs often necessary in terms of the pragmatic management of programmatic M&E.

39 Characteristics of Good Indicators (cont.)
Indicators do not specify a particular level of achievement — the words “improved,” “increased,” or “decreased” do not belong in an indicator.

40 Common Indicator Metrics
Counts: no. of legal and service organizations for violence against women and girls (VAW/G) survivors; no. of women and children using VAW/G social welfare services
Calculations (percentages, rates, ratios): % of health units with at least one service provider trained to care for and refer survivors; % of women who have ever experienced physical violence from an intimate partner
Index, composite measures: gender-equitable men (GEM) scale; sexual relationship power scale
Thresholds: presence, absence; pre-determined level or standard
Perhaps the most important part of what comprises an indicator is the metric. The metric is the precise explanation of the data and the calculation that will give the measurement or value of the indicator. Defining good metrics is absolutely crucial to the usefulness of any M&E plan. A good metric clarifies the single dimension of the result that is being measured by the indicator. A good metric does this in such a way that each value measured for the indicator using a given data source is exactly comparable to values measured at another time using the same data source. Indicators can have a number of types of metrics. They can be simple counts of things (for example, the number of clients screened for GBV or the number of VAW/G pamphlets distributed), or they can involve calculations (for example, the proportion of facilities with a provider trained in GBV screening or the percentage of women who experienced intimate partner violence in the past 12 months). Indicator metrics can also be more complex, such as the gender-equitable men scale or the sexual relationship power scale. These are composite indicators which bring together several other indicators into one index. Sometimes, indicators can be in the form of a threshold showing the presence or absence of something, for example, the existence of a national plan of action to address GBV.
Sometimes, indicators can be expressed in terms of a pre-determined level or standard, for example, 80% of communities (villages/towns) in a district have a community action plan to address violence against women. These are just some examples of types of metrics used for indicators.

41 Common Challenges in Indicator Selection
Choosing an indicator that program activities cannot affect
Choosing an indicator that is too vague
Indicators that do not currently exist and cannot realistically be collected
Selecting an indicator that does not accurately represent the desired outcome
Too many indicators

42 How Many Indicators Are Enough?
At least one or two indicators per key activity or result (ideally, from different data sources)
At least one indicator for every core activity (e.g., training, BCC)
No more than 8-10 indicators per area of significant program focus
Use a mix of data collection strategies/sources
A frequent question is: How many indicators should my program have? The best answer is: It depends. What does it depend upon? The complexity of goals; the costs of data collection and analysis; and the anticipated benefits and practical utilization of M&E results at different levels. A reasonable rule of thumb might be one or two indicators per key activity or result, but that depends on how finely detailed your results are. You should definitely have at least one or two indicators for every significant activity – remember, M&E's purpose is to monitor performance and evaluate program outcomes and impact! However, there is a tendency to go into overkill on indicators – keep in mind that the focus should remain on the program itself. M&E that is not going to be fed back into program management or otherwise used to improve performance, effectiveness, or efficiency is not a very sound use of program resources. It is wise to vary the data sources used for indicators, whether secondary sources or your own data collection efforts, especially for key results. Any number of unexpected events, such as budget cuts, delayed surveys, or civil war, can disrupt an M&E plan, so diversifying data sources is a good strategy to ensure some indicators can be tracked over the life of the project.

43 Factors to Consider When Selecting Indicators
Logic/link to framework
Programmatic needs/information for decision making
Data availability
Resources
External requirements (government, donor, headquarters)
Standardized indicators (if available)
In an ideal world, indicators judged to be the highest quality and most useful would be the ones selected and used to monitor and evaluate program activities. However, in the real world and in field settings, many other factors may intervene. Links to the program activities, as shown through frameworks, are important, as are the needs of the program for decision making. Ideal indicators may not be practical; the feasibility of certain indicator designs can be constrained by data availability and by financial and human resources. Requirements and needs of donors, the government, organization headquarters, and others may need to be given priority. Some examples of these considerations are: Availability of data: Some data may be considered 'privileged' information by agencies, projects, or government officials. Data may be available only at aggregated levels or already calculated into indicators that may not be the ideal indicators for your program or activities. Resources: Ideal indicators might require collecting data to calculate an unknown denominator, or national data to compare with project area data, or tracking lifetime statistics for an affected and/or control population, etc. The costs of collecting all of the appropriate data for ideal indicators are typically prohibitive. Human resources and technical skills, particularly for evaluation, may be a constraint as well. Programmatic and external requirements: Indicators may be imposed from above by those not trained in M&E techniques. Reporting schedules may not be synchronized (e.g., fiscal year vs. reporting year). Different stakeholders' priorities may diverge. Standardized indicators should be used if available. 
Discussion: Ask: In your experience, what factors other than desire to select the best and most appropriate indicators, have affected the selection of M&E indicators? Give specific examples.

44 Activity 9: Selecting Indicators
Return to small groups from previous activities where you developed goals, objectives, activities, and M&E questions Select 3 indicators that your group might use to measure progress towards your program goals and objectives Into which logic model component does each indicator fall? After 15 minutes, a member of each group will share the indicators

45 Activity 10: Selecting Indicators Cont’d
As a class, look at the indicators that the groups selected in the previous activity Discuss the following: To what extent are the indicators logically linked to the group’s M&E framework? Do the proposed indicators share the characteristics of good indicators? How will the indicators be used for program decision-making and for what decisions? Are data available to measure the indicators? Can the indicators be realistically collected given available resources? Are there government or donor requirements for measuring the proposed indicators?

46 Where to Find Standardized Indicators for GBV Programs
Bloom, Shelah S. Violence Against Women and Girls: A Compendium of Monitoring and Evaluation Indicators (MS-08-30). Chapel Hill, NC: MEASURE Evaluation, Carolina Population Center, University of North Carolina at Chapel Hill; 2008.
United Nations Division for the Advancement of Women, United Nations Economic Commission for Europe, United Nations Statistical Division. Indicators to Measure Violence against Women. Report of the Expert Group Meeting, 8 to 10 October, Geneva; 2008.
Report of the special rapporteur on violence against women, its causes and consequences, by Yakin Ertürk. Indicators on violence against women and state response. Geneva: United Nations Human Rights Council; advance edited version, 28 January 2008.
Where possible, indicators should be structured using comparable units, denominators, and in other ways that will enable increased understanding of program effectiveness across different population groups or interventions. However, until recently, there was little consensus on how to define indicators for GBV prevention and mitigation programs and make them operational. In 2008, MEASURE Evaluation developed a set of monitoring and evaluation indicators for program managers, organizations, and policy-makers who are working to address violence against women and girls at the individual, community, district/provincial and national levels in developing countries. This report can be found on the MEASURE Evaluation publications Web site. There has also been a United Nations initiative to develop standardized indicators to measure violence against women at the state level. The MEASURE Evaluation indicators cover the following areas: The magnitude and characteristics of five types of VAW/G — skewed sex ratios, intimate partner violence, violence from someone other than an intimate partner, female genital cutting/mutilation, and child marriage. Four VAW/G programmatic sectors — health, education, justice/security, social welfare. 
Under-documented forms of VAW/G and emerging areas — humanitarian emergencies, female homicide. Prevention programs in three areas — youth, community mobilization, working with men and boys.

47 Illustrative Indicators: Community Mobilization
Proportion of people who would assist a woman being beaten by her husband or partner
Proportion of people who say that wife beating is an acceptable way for husbands to discipline their wives
Proportion of people who agree that rape can take place between a man and woman who are married
Proportion of youth-serving organizations that include training for beneficiaries on sexual and physical VAW/G
Here are a few examples of community mobilization indicators taken from Violence Against Women and Girls: A Compendium of Monitoring and Evaluation Indicators.

48 Illustrative Indicators: Behavior Change Communication
Proportion of people who have been exposed to VAW/G prevention messages
Proportion of girls who say that they would be willing to report any experience of unwanted sexual activity
Proportion of girls who feel able to say no to sexual activity
Proportion of individuals who know any of the legal sanctions for VAW/G
Here are a few examples of indicators for programs that focus on communication for behavior change taken from Violence Against Women and Girls: A Compendium of Monitoring and Evaluation Indicators.

49 Illustrative Indicators: Service Delivery
Proportion of health units that have documented and adopted a protocol for the clinical management of VAW/G survivors
Proportion of health units with at least one service provider trained to care for and refer VAW/G survivors
Proportion of women who were asked about physical and sexual violence during a visit to the health unit
Proportion of rape survivors who received comprehensive care
Here are a few examples of service-delivery-related GBV indicators taken from Violence Against Women and Girls: A Compendium of Monitoring and Evaluation Indicators.

50 Illustrative Indicators: Law and Public Policy
Proportion of law enforcement units following a nationally established protocol for VAW/G complaints
No. of law enforcement professionals trained to respond to incidents of VAW/G according to an established protocol
No. of VAW/G complaints reported to the police
Proportion of VAW/G cases that were investigated by the police
Here are a few examples of law and public policy GBV-relevant indicators taken from Violence Against Women and Girls: A Compendium of Monitoring and Evaluation Indicators.

51 Measuring Indicators Indicator 1: No. of service providers trained (in the past year or other period) to identify, refer, and care for VAW/G survivors “providers” include any clinician providing direct clinical services to clients at public health facilities “trained” refers to attending every day of a five-day training course on VAW/G It is important to understand what goes into an indicator before we move on to discussing how they are selected or developed. Here is an example. The indicator is the number of service providers trained (in the past year or other period) to identify, refer, and care for VAW/G survivors. This indicator is a count. The length of the time period would depend on how often the program holds trainings. It is important to note that “providers” are defined as any clinician (meaning doctor, nurse, or medical assistant) providing direct clinical services to clients at public health facilities. Note that any providers working at private facilities are not to be included in this indicator. If the term were not defined, the indicator could be counted or interpreted differently than intended. Also note that in order to be counted for this indicator, the clinician has to have attended all five days of the training course. This means that this information must be collected at the time of the training workshops.
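The operational definition above (clinicians at public facilities who attended all five days of the course) can be sketched as a simple filter over training records. The record fields and data below are hypothetical, invented only to illustrate how the definition narrows the count:

```python
# Hypothetical attendance records from a five-day VAW/G training course.
records = [
    {"name": "Provider A", "cadre": "nurse",             "facility": "public",  "days_attended": 5},
    {"name": "Provider B", "cadre": "doctor",            "facility": "public",  "days_attended": 4},
    {"name": "Provider C", "cadre": "medical assistant", "facility": "public",  "days_attended": 5},
    {"name": "Provider D", "cadre": "nurse",             "facility": "private", "days_attended": 5},
]

# Per the indicator definition: doctors, nurses, and medical assistants.
CLINICAL_CADRES = {"doctor", "nurse", "medical assistant"}

def count_trained_providers(records):
    """Count clinicians at public facilities who attended all five days."""
    return sum(
        1 for r in records
        if r["cadre"] in CLINICAL_CADRES
        and r["facility"] == "public"
        and r["days_attended"] == 5
    )

print(count_trained_providers(records))  # Only Providers A and C qualify: 2
```

Note how the definition excludes Provider B (missed a day) and Provider D (private facility); without the explicit terms, both might have been counted.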

52 Measuring Indicators Indicator 2: % of women who were asked about physical and sexual violence during a visit to a health unit Numerator: Number of women who were asked, during the course of their service provision at the health unit, about any violence that had ever occurred, either physical or sexual, in the geographic area of study (nation, province, state, community) Denominator: If record review, total number of women’s records that were reviewed at the health unit If exit interview, total number of women interviewed In the second example, the indicator is the percentage of women who were asked about physical and sexual violence during a visit to a health unit. To calculate this indicator, you need a numerator and a denominator. (Remember that a numerator is divided by a denominator to carry out a calculation.) The numerator is the number of women who were asked, during the course of their service provision at the unit, about any violence that had ever occurred, either physical or sexual, in the geographic area of study (nation, province, state, community). Note that the numerator does not specify whether the health unit is public or private. If this indicator is being measured with a medical record review, all women’s charts which noted that they were asked if they experienced any physical and sexual violence by a provider would be entered into the numerator. If the indicator is being measured during an exit interview, all women leaving the clinic would be asked if a provider asked them if they had ever experienced any physical or sexual violence. All women answering yes would be entered into the numerator. If the indicator is measured through a clinical record review, the denominator is the total number of women’s records that were reviewed at the health unit. If the indicator is measured through an exit interview of women leaving a health facility, the denominator is the total number of women interviewed.
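Under the exit-interview approach, the numerator/denominator calculation can be sketched as follows. The response data are invented for illustration:

```python
# Hypothetical exit-interview responses: True if the woman reports that a
# provider asked her about physical or sexual violence during the visit.
exit_interviews = [True, False, True, True, False, False, True, True]

def pct_women_asked(responses):
    """Numerator: women who were asked about violence.
    Denominator: all women interviewed at exit."""
    if not responses:
        return 0.0
    return 100.0 * sum(responses) / len(responses)

print(f"{pct_women_asked(exit_interviews):.1f}%")  # 5 of 8 women -> 62.5%
```

Under the record-review approach, the same function would apply, with the list instead representing whether each reviewed chart noted that the question was asked.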

53 Activity 11: Operational Definition of Indicators
Distribute Handout 8 Organize into 4 groups Write an operational definition for one of the following indicators: intimate partner physical violence appropriate care for rape survivors support-seeking behavior of GBV victims gender-norm related attitudes that reinforce VAW/G After 15 minutes, a member of each group will share the operational definitions An indicator should be defined in precise, unambiguous terms that clearly describes exactly what is being measured. Where practical, the indicator should give a relatively good idea of the data required and the population among which the indicator is measured. This seems like an obviously desirable attribute of indicators, but deserves emphasis, since many indicators in common use are not well-defined in clear terms, or at least include terminology that could be improved to add greater precision. For instance, “intimate partner physical violence,” “appropriate care for VAW/G survivors,” and “gender norm-related attitudes that reinforce VAW/G” can all mean or imply different things under different circumstances. The more you can spell out in the indicator, the less room there will be for later confusion or complications.

54 Measuring Composite Indicators
Example: How to measure “attitudes toward GBV”? What question do you ask? How do you know any one question is a good representation of general attitude? How do you scale each question (agree/disagree; strongly disagree to strongly agree; yes/no)? How do you combine answers to the questions to come up with one indicator? What value or weight do you assign to any one question on GBV attitude in the survey? A mathematical combination of a set of indicators is most often called an "index" or a "composite indicator". Why would one use an index or a composite indicator? Let us begin by asking ourselves: How do we measure attitudes toward GBV? What question do we ask? Using only a single question or statement to assess attitudes is often not effective in gaining valid or reliable responses. In order to determine what an individual believes or feels about GBV, composite indicators or indices are often constructed by bringing together responses from several questions. There are many advantages to composite indicators. They can help summarize complex or multi-dimensional issues in view of supporting decision-makers. Tracking a trend in a composite indicator can be easier than trying to find a trend in many separate indicators. Composite indicators can help attract public interest by providing a summary figure with which to compare performance across areas and progress over time. Composite indicators could help to reduce the size of a list of indicators. However, composite indicators are difficult to construct. How do you scale each question? How do you combine answers to the questions to come up with one indicator? What value or weight do you assign to any one question on GBV attitude in the survey? The construction of composite indicators involves stages where judgement has to be made: the selection of sub-indicators, the choice of mathematical approach, the weighting of indicators, and the treatment of missing values. 
These judgements should be transparent and based on sound statistical principles. Composite indicators increase the quantity of data needed because data are required for all the sub-indicators. It is important to note that composite indicators can lead to distorted outcomes and may send misleading, non-robust policy messages if they are poorly constructed or misinterpreted. It is very easy to construct a poor measure of attitudes.

55 Example of Measuring Attitudes Toward Wife Beating
Sometimes a man is annoyed or angered by things that his wife does. In your opinion, is a man justified in hitting or beating his wife in the following situations: she goes out without telling him she neglects the children she argues with him she refuses to have sex with him she burns the food In Demographic Health Surveys, attitudes toward wife beating are based on a five-item question asking the respondent whether, in his or her opinion, a husband would be justified in hitting or beating his wife under the following circumstances: (1) “she goes out without telling him” (her husband); (2) “she neglects the children”; (3) “she argues with him”; (4) “she refuses to have sex with him”; and (5) “she burns the food”. Response categories are “yes”, “no” and “don’t know”. Using responses to all five questions, special statistical methods are used to generate a composite indicator measuring approval of wife beating.
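The DHS applies special statistical methods to combine the five items; a deliberately simplified, unweighted version of such a composite, which classifies a respondent as accepting wife beating if he or she answers "yes" to any item, might look like the sketch below. The field names and sample responses are hypothetical:

```python
# The five DHS wife-beating items; responses are "yes", "no", or "don't know".
ITEMS = ["goes_out", "neglects_children", "argues", "refuses_sex", "burns_food"]

def accepts_wife_beating(responses):
    """Simplified composite: a 'yes' to any item counts as acceptance.
    (The DHS itself uses more elaborate statistical scaling and weighting.)"""
    return any(responses.get(item) == "yes" for item in ITEMS)

# Two hypothetical respondents
sample = [
    {"goes_out": "no", "neglects_children": "no", "argues": "no",
     "refuses_sex": "no", "burns_food": "no"},
    {"goes_out": "no", "neglects_children": "yes", "argues": "no",
     "refuses_sex": "don't know", "burns_food": "no"},
]
pct = 100.0 * sum(accepts_wife_beating(r) for r in sample) / len(sample)
print(f"{pct:.0f}% accept wife beating in at least one situation")
```

This illustrates the judgement calls the notes warn about: the any-item rule is one of several defensible ways to combine the responses, and a different scaling or weighting choice would yield a different composite value.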

56 Activity 12: Specifying Indicator Metrics
Return to small groups from previous activities Look at the indicators that your group has selected to measure progress towards your goals and objectives Define the metrics for each indicator Class Activity 12: Specifying Indicator Metrics Return to small groups from previous activity. Using the indicators that the group selected earlier to measure progress towards its goals and objectives, define the indicator metrics. Have a member of the group present the indicator metrics to the entire class.

57 Indicator Reference Sheets
Distribute Handout 9

58 Setting Indicator Targets: Useful Information Sources
past trends
client expectations
donor expectations
expert opinion
research findings
what has been accomplished elsewhere
It is important that targets be established for performance indicators in order to assess how much change or improvement has been achieved. Setting performance targets can be challenging. Collaborating with others who are knowledgeable about the local situation, and who understand what level of accomplishment can be expected is key to setting targets that can realistically be achieved, given the available resources, time frame and environment. There are various approaches that you should consider when trying to determine appropriate target levels: (1) look at past trends and project them into the future; (2) consider client and donor expectations; (3) consult experts to determine appropriate target levels; (4) review the literature to find out what has been accomplished elsewhere; (5) consider how long it takes to see change. For example, how long will it take for your education and awareness programs to affect the sexual behaviors of your targeted at-risk population? Equally important is documenting the rationale for the targets so that you can refer to this reasoning when trying to analyze or interpret actual data later on. Note the Indicator reference sheet also requires that you describe the rationale that was used in setting targets under “Notes on Baselines/Targets”.

59 Activity 13: Setting Indicator Targets Based on Past Trends
One of the objectives of the Ministry of Women’s Affairs in Country X is to reduce the prevalence of intimate partner violence. The 2000 DHS showed that 30% of ever-married women in the country had ever experienced intimate partner emotional, sexual, or physical violence. A comparable DHS conducted in 2005 showed that the prevalence of intimate partner violence (emotional, physical, or sexual) was 25% among ever-married women. Based on these trends, what would be a realistic national target for this indicator in 2010? Class Activity 13: Setting Indicator Targets Based on Past Trends There was a 17 percent decrease between 2000 and 2005. If we assume that the same magnitude of change would be observed at the national level over the next five years, a realistic target for the percentage of ever-married women who have ever experienced intimate partner violence (emotional, physical, or sexual) in 2010 would be 20%.
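The trend-projection arithmetic in this activity can be checked with a short calculation, assuming the same rate of relative decline between surveys continues into the next period:

```python
def project_target(baseline, latest):
    """Apply the observed relative change between two survey rounds to
    project the next period's value, assuming the decline continues at
    the same relative rate."""
    relative_change = (latest - baseline) / baseline  # (25 - 30) / 30 = -0.167
    return latest * (1 + relative_change)

projected = project_target(30.0, 25.0)
print(round(projected, 1))  # 25 * (1 - 1/6) = 20.8, so a target of about 20%
```

The 17 percent decrease in the example is this relative change (5/30 ≈ 16.7%, rounded up); applying it again to the 25% prevalence gives roughly 20.8%, which the module rounds to a realistic target of 20%.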

60 Types of Information Sources
Quantitative data useful for tracking trends accurately and highlighting differences Qualitative data useful for understanding the context in which the trends/differences occur and to interpret quantitative data accurately (e.g., focus groups, in-depth interviews) Both quantitative and qualitative data are useful for monitoring and evaluating GBV programs. Quantitative data are useful for tracking trends and highlighting differences. Without further analysis, they will not explain why or how differences occur. Qualitative data provide the answers to “how” and “why” questions. Qualitative data shed light on the context in which the differences occur and allow you to interpret the quantitative data accurately. Discussion: Ask: How have you used quantitative and qualitative data together in the past to analyze trends or differences? What data were collected? How did use of both quantitative and qualitative data assist with understanding the program/project and results? What problems were encountered?

61 Quantitative Information Sources
Population-based surveys: DHS (domestic violence module), VAW/G survey
Facility surveys: client exit interviews; provider surveys; clinic observations
Health service statistics
Program statistics
Crime statistics (of poor quality in many LDCs)
Behavioral surveillance systems
Data from a variety of sources are used to calculate indicators for monitoring and evaluating GBV programs. These include population-based surveys, facility surveys, health service statistics, qualitative data, and program statistics. Some of these data sources, such as population-based surveys, have been around for a long time and are familiar to most people. Population-based surveys can include a general health survey that includes a module on violence against women (such as the DHS) or one which is focused on this topic alone. The latter is recommended because focused surveys have revealed higher prevalence estimates (and likely less underreporting) than national surveys designed to measure other health outcomes in addition to violence. Population-based surveys provide information on individuals’ knowledge, attitudes, and behaviors related to GBV. Facility surveys can include client exit interviews, provider surveys, and clinic observations. Provider surveys can provide information on provider knowledge, attitudes, and practices related to GBV. Clinic observations gather information about the human and physical resources available in the clinic to fight GBV, on the availability of written protocols related to GBV care, referral and reporting, and on whether the clinic offers emergency contraception. Client exit interviews gather information on clients’ opinions about the services they have received. Health services statistics provide information about clients, for example, the numbers of clients who said “yes” to screening questions and the number of clients who disclosed GBV and were referred. 
Program sources of information may come in many different forms depending on program activities and may include case studies related to legal procedures; training records and IEC activities. These sources of data have to do with recording events as they occur. Information may also be obtained from crime statistics on substantiated cases of GBV, prosecutions and convictions; incident-based reporting systems, and homicide reports but these are of poor quality in many developing countries. Youth behavior surveillance systems are also a valuable source of information.
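The data sources above are ultimately used to calculate indicators. As a minimal illustrative sketch (all field names and records below are hypothetical, not drawn from any real survey), a percentage-style indicator can be computed from facility or survey records like this:

```python
# Illustrative sketch: computing percentage-style GBV indicators from
# survey-style records. All field names and data are hypothetical.

def percentage(records, key):
    """Percentage of records containing `key` for which it is True.

    Records missing `key` (e.g., clients never asked the screening
    question) are excluded from the denominator.
    """
    eligible = [r for r in records if key in r]
    if not eligible:
        return None  # no data: indicator cannot be calculated
    return 100.0 * sum(1 for r in eligible if r[key]) / len(eligible)

# Hypothetical client exit-interview records
records = [
    {"screened_for_gbv": True,  "disclosed_gbv": False},
    {"screened_for_gbv": True,  "disclosed_gbv": True},
    {"screened_for_gbv": False},
    {"screened_for_gbv": True,  "disclosed_gbv": True},
]

pct_screened = percentage(records, "screened_for_gbv")   # 75.0
pct_disclosed = percentage(records, "disclosed_gbv")     # about 66.7
```

The design choice worth noting is the denominator: an indicator such as “percentage of clients who disclosed GBV” only counts clients who were actually screened, which is why records without the field are dropped rather than treated as “no.”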

62 Triangulating Data Sources: Soul City
Individual: 1. National Survey; 2. Sentinel Site Studies. Community: 3. (National) Qualitative Impact Assessment. Organizational: 4. NNVAW Partnership Study. Society: 5. Media Monitoring and Analysis. 6. Cost-Effectiveness Study. Often, it is necessary to bring together many different sources of data. This slide shows the types of information sources that Soul City used to monitor and evaluate its communication-for-change interventions. Soul City wanted to measure change at the various levels where gender-based violence occurs: the individual level, the community level, and the societal level. This made it necessary to have multiple sources of data that were brought together in order to obtain a more complete picture of how the program had brought about change at the three levels. At the individual level, data came from the national survey and sentinel site studies. At the community level, data sources included qualitative impact assessment through focus groups, which were supplemented by social surveys. At the organizational and institutional level, data were derived from the National Network on Violence Against Women (NNVAW) partnership study. At the societal level, data were derived from media and policy studies. In addition, a cost-effectiveness study was implemented. Many organizations may not have the resources to conduct such a comprehensive evaluation but may be able to draw on existing surveys/studies or data systems to meet their information needs.

63 Activity 14: Identifying Information Sources for Indicators
Return to small groups from previous activities Look at the indicators that your group has selected to measure progress towards your program goals and objectives For each indicator, identify the potential source of data needed for its calculation What challenges might you face in using these potential information sources? After 10 minutes, a member of the group will present

64 Guiding Questions for Choosing an Evaluation Design
What question does your program need to answer? What do you want to measure (indicators)? How sure do you want to be? (VERY SURE = GOOD ) What is the cost of making a mistake (low, medium, high)? When do you need the results? (FAST) How much are you willing to pay? (CHEAP) Has the program already started? Before deciding on an appropriate evaluation design, let us come back to five fundamental questions. The first is: why do an evaluation? The complexity of an evaluation design depends on how you intend to use the results. For example, a local program manager may need data on service provision and utilization to improve them within a health center or district. On the other hand, national and international agencies may require assessments of impact to justify further investments in a program. The second question pertains to indicators: what do you want to measure? One might want to evaluate the provision or utilization of services, or one might want to evaluate coverage or impact. The third consideration refers to the type of inference: how sure do you want to be? How confident must you be that any observed effects were in fact due to the program? Some evaluation designs are stronger than others in their attempts to discuss and rule out all other likely explanations for observed improvements in health outcomes and behavior. One way of thinking about how confident you want to be that a program is effective is to ask yourself: what is the cost of making a mistake? For example, if you are investing in a vaccine, you will want to have the highest level of confidence that the vaccine works. The cost of making a mistake would be enormous, and you would want to implement the strongest evaluation methodology possible. However, if you are trying to find out the best way to improve people’s knowledge about a sexual harassment policy, you may be able to live with a less rigorous evaluation design. 
From a programmatic point of view, even less stringent designs may be sufficient for deciding about the future of a program. However, from a donor perspective, stronger designs may be needed in order to justify further investments in a program. The fourth consideration is when you need the results. While service provision or utilization may be assessed using routine information systems, which can provide annual indicators, an assessment of coverage or impact almost always requires field data collection, which takes time. Finally, how much are you willing to pay for the evaluation? There are important cost implications associated with more rigorous designs. And if your program has already started, there are certain evaluation designs that you will not be able to use.

65 Choosing an Evaluation Design
GOOD, FAST, CHEAP: PICK 2! HIRE SOMEONE TO HELP YOU OUT! NO EVALUATION DESIGN IS IMMUNE TO PROBLEMS! In choosing an evaluation design, one is often balancing statistical rigor, timing, and cost. More rigorous evaluation designs are costly and may not provide data within a short time frame. Because measuring program impact requires extensive knowledge of sampling, research methods, and statistics, it is recommended that programs hire consultants to maximize the quality of their evaluations. No evaluation design is immune to problems; there is a long list of possible complications associated with any evaluation study.

66 Impact Evaluation Needs Specific Design
Evaluating program impact; interested in attribution/causality Experimental (with randomization) or quasi-experimental design Program already implemented; no baseline Post-test only design (then multivariate analysis) Program not yet implemented Pretest-post-test design; time series design Impact evaluation means attributing change to a particular program or intervention by ruling out all other possible explanations. Impact evaluation requires a specific study design. This slide presents several different types of study design and how to decide which is appropriate. Whenever possible, it is best to use an experimental design with a randomly assigned control group. When random assignment is not possible, the next best thing is to find a comparison group that is as similar to the intervention group as possible. This is what is called a quasi-experimental design. If the intervention has already happened and no baseline information is available, you should use a post-test only design and consider using multivariate analytic techniques. If the intervention has not yet happened and you cannot use an experimental or quasi-experimental design, then it would be best to use a time series design. If a time series design cannot be used, at least try to obtain baseline (pretest) information that can be compared against post-test information (that is, a pretest-post-test design). Always keep in mind the issue of validity. Are the measurements true? Do the indicators measure what they are supposed to measure? Are there other factors that might explain the program results? Real-world conditions such as available funds, time, and resources obviously affect the type of study design that is selected to measure program impact. The study design also depends on what question a program needs to answer.

67 When monitoring and evaluating GBV programs, the confidentiality, privacy, safety, and well-being of GBV survivors must be top priority at all times.

68 References Bloom SS. Violence Against Women and Girls: A Compendium of Monitoring and Evaluation Indicators (MS-08-30). Chapel Hill, NC: MEASURE Evaluation, Carolina Population Center, University of North Carolina at Chapel Hill; 2008. Bott S, Guedes A, Claramunt MC, Guezmes A. Improving the Health Sector Response to Gender-Based Violence: A Resource Manual for Health Care Professionals in Developing Countries. New York: International Planned Parenthood Federation, Western Hemisphere Region; 2004. Interagency Gender Working Group. Addressing Gender-Based Violence from the Reproductive Health/HIV Sector: A Literature Review. Washington: The Population Technical Assistance Project (POPTECH); 2004.

69 References (continued)
Soul City. Soul City 4 Impact Evaluation: Violence Against Women, Vols. I and II. World Health Organization (WHO). WHO Ethical and Safety Recommendations for Researching, Documenting and Monitoring Sexual Violence in Emergencies. Geneva: World Health Organization; 2007.

