2 Practical Approaches to Evidence-Based Evaluation Practice in Public Health Joseph Telfair, DrPH, MSW/MPH, Professor, Department of Public Health, School of Health and Human Performance, University of North Carolina at Greensboro, Greensboro, NC (USA) j_telfai@uncg.edu ♦ (336) 334-3240

3 OVERVIEW OF PRESENTATION
Best Practices/Evidence: MCHB Perspective
Setting the Stage: Why Important, Definitions and Key Concepts
Performance Measurement: Selecting and Constructing Measures
Process Monitoring: Developing a Monitoring System
Concluding Remarks
Questions and Discussion

4 Tell me... I forget. Show me... I remember. Involve me... I understand. (Chinese proverb)

5 Of Relevance The MCHB has developed key strategies that are the broad, cross-cutting approaches the Bureau uses in order to reach its five-year (and beyond) goals in the Bureau Strategic Plan. Goal 4 of the Strategic Plan is: “Improve the Health Infrastructure and Systems of Care.” One key strategy used to support this goal is: “Using the best available evidence, develop and promote guidelines and practices that improve services and systems of care.”

6 Best Practices/Evidence: MCHB Perspective (http://mchb.hrsa.gov/about/stratplan03-07.htm)

7 Best Practices/Evidence (1) MCHB/AMCHP defines “best practices” as a continuum of practices, programs and policies ranging from promising to evidence-based to science-based. EVALUATION of best practices requires the identification and establishment of evidence

8 Evaluating Evidence Evidence can be evaluated in four categories:
Research
Expert Opinion
Field Lessons
Theoretical Rationale
All best practice approaches reported have a strong conceptual/theoretical rationale; however, the strength of evidence from research, expert opinion and field lessons falls within a spectrum

9 Strength of Evidence Spectrum
Promising Best Practice Approaches: Research +, Expert Opinion +, Field Lessons +, Theoretical Rationale +++
Proven Best Practice Approaches: Research +++, Expert Opinion +++, Field Lessons +++, Theoretical Rationale +++

10 Strength of Evidence Spectrum
Promising Best Practice Approaches: little research; a beginning of agreement in expert opinion; very few field lessons evaluating effectiveness
Proven Best Practice Approaches: supported by strong research; extensive expert opinion from multiple authoritative sources; solid field lessons evaluating effectiveness

11 Grading Evidence (1) Research/Evaluation
+ A few studies in public health reporting effectiveness (Promising)
++ Descriptive review of scientific literature supporting effectiveness (Promising/Proven)
+++ Systematic review of scientific literature supporting effectiveness (Proven)

12 Grading Evidence (2) Expert Opinion
+ An expert group or general professional opinion supporting the practice (Promising)
++ One authoritative source (such as a national organization or agency) supporting the practice (Promising/Proven)
+++ Multiple authoritative sources (including national organizations, agencies or initiatives) supporting the practice (Proven)

13 Grading Evidence (3) Field Lessons/Promising Practices
+ Successes in state practices reported without evaluation documenting effectiveness (Promising)
++ Evaluation by a few states separately documenting effectiveness (Promising/Proven)
+++ Cluster evaluation of several states (group evaluation) documenting effectiveness (Proven)

14 Grading Evidence (4) Practice-based Conceptual/Theoretical Rationale
+++ Only practices which are linked by strong causal reasoning to the desired outcome of improving health and total well-being of priority populations will be reported (Proven)
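
To show how the grading rubric and spectrum on slides 9-14 might be applied, here is a minimal sketch assuming a simple dictionary representation of the four evidence categories and their +/++/+++ grades; the function and variable names are illustrative and not part of the MCHB material.

```python
# Illustrative sketch only: encode the four evidence categories (slides 8-14)
# and map graded evidence onto the promising-to-proven spectrum (slides 9-10).

GRADES = {"+": 1, "++": 2, "+++": 3}

def classify_practice(evidence: dict) -> str:
    """Place a candidate practice on the strength-of-evidence spectrum.

    `evidence` maps each category (Research/Evaluation, Expert Opinion,
    Field Lessons, Theoretical Rationale) to a grade of "+", "++", or "+++".
    """
    scores = [GRADES[grade] for grade in evidence.values()]
    if all(score == 3 for score in scores):
        return "Proven best practice approach"
    if all(score >= 2 for score in scores):
        return "Promising/Proven"
    return "Promising best practice approach"

example = {
    "Research/Evaluation": "+",
    "Expert Opinion": "+",
    "Field Lessons": "+",
    "Theoretical Rationale": "+++",  # all reported practices have a strong rationale
}
print(classify_practice(example))  # -> "Promising best practice approach"
```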

15 Best Practices/Evidence (2)
MCHB has established that family and community participation and engagement are key to the development of effective, quality health systems and services
Testing of best practices to build evidence follows a repeating cycle of deduction to verification to induction
This requires a practical approach to evaluation

16 Setting the Stage

17 WHY? (1) Four Primary reasons:
To develop and maintain an effective program and service delivery process at the state and local level
To enhance staff's understanding of the factors that contribute to the extent to which, and in what ways, the specific aims, program service targets, and evaluation objectives are being followed

18 WHY? (2) Four Primary reasons (cont): To assure staff and stakeholders by putting in place a process for determining whether or not the program and service delivery activities are succeeding as planned

19 WHY? (3) Four Primary reasons (cont):
To build best-practices data by assessing the application of ‘the best available evidence’ (MCHB modified) at four levels:
Existing Research/Evaluation
Expert Opinion
Field Lessons/Promising Practices
Practice-based Conceptual/Theoretical Rationale

20 Definitions and Key Concepts (1) Definition: “Evaluation or program measurement (PM) is a systematic process for staff and institutions to obtain information on the service delivery process, its outcomes, and the effectiveness of its work, so that they can improve the process and describe its accomplishments” (Mattessich, 2003, p. 3) [modified]

21 Definitions and Key Concepts (2) Definition: Program monitoring is the process of assessing progress toward achievement of a service delivery process's objectives to determine whether the process was implemented as planned (Peoples-Sheps & Telfair, 2005; see handout)

22 Definitions and Key Concepts (3) Evaluation or PM involves a comparison of the staff's planned processes and outcomes with selected standards in order to assess accomplishments. Evaluation or PM involves the application of social science methods to determine whether assessed efforts are the cause of observed results

23 Definitions and Key Concepts (4) Evaluation or PM relies on both qualitative and quantitative methods, and often a triangulation of the two, to produce informative results

24 Definitions and Key Concepts (5) Program monitoring is carried out by assessing the extent to which a program is implemented as designed, which involves tracking progress toward achievement of the program's objectives (Peoples-Sheps & Telfair, 2005). It is a very traditional form of assessment that is generally considered an administrative function and integral to the ongoing operations of every program (Kettner et al., 1999).

25 Definitions and Key Concepts (6) Definition: A performance measure is a specific, quantitative or qualitative representation (measure) of a capacity, process, or outcome deemed relevant to the assessment of program performance (Peoples-Sheps & Telfair, 2005)

26 Definitions and Key Concepts (7) Both program monitoring and performance measures depend on strong, meaningful measures of program and service delivery process performance

27 PRACTICE EXERCISE Questions 1-4

28 Performance Measurement

29 Selecting or Constructing Measures (1) Deciding what to measure is an essential first step. The aspects of the service delivery process that are measured attract attention and generate action (Hatry, 1999). Conversely, aspects not measured may go unnoticed until a crisis brings them to the surface (e.g., discovery of inadequate data collection efforts that did not allow for population or service targets to be met)

30 Selecting or Constructing Measures (2) If the staff takes the time to think through what is needed, they are much less likely to miss something important. To cover all of the bases, start with the monitoring and evaluation's specific aim(s) or hypothesis(es) to identify the main program and service delivery efforts and expected outcomes

31 Selecting or Constructing Measures (3) To construct performance measures, three tasks must be undertaken:
identifying concepts to be measured
selecting or constructing measures
locating or developing data sources
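
As an illustrative aside, the three tasks above can be captured in a single record so that each concept, its measure, and its data source are documented together. This is a minimal sketch; the class, field names, and example values are assumptions, not part of the presentation.

```python
# Illustrative sketch only: a simple record tying together the three tasks
# named on slide 31 (concept, measure, data source). Names are assumed.
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    concept: str       # what is to be measured
    measure: str       # how it is expressed (number, rate, percentage, ...)
    data_source: str   # where the data to construct it will come from

example = PerformanceMeasure(
    concept="TB mortality in the service area",
    measure="TB mortality rate per 100,000 population per year",
    data_source="State vital statistics death records",
)
```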

32 Selecting or Constructing Measures (4) Performance measures can be formulated in many different ways. They may be:
numbers (number of TB deaths)
rates (TB mortality rate)
proportions or percentages (percentage of days missed at work among persons with TB)

33 Selecting or Constructing Measures (5) Performance measures can be formulated in many different ways. They may be (cont):
averages (average number of emergency department visits per person 18 to 44 years of age in a given year)
categories (team meetings held)
Numbers, percentages, and rates are the most frequently used in MCH
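
To make the quantitative formulations on slides 32-33 concrete, the following minimal sketch computes a number, a rate, a percentage, and an average; all of the figures are hypothetical and used only for illustration.

```python
# Hypothetical counts for illustration only; none of these figures come from
# the presentation. They show the four quantitative formulations named above.

tb_deaths = 12                       # number (simple count)
population = 250_000
tb_mortality_rate = tb_deaths / population * 100_000  # rate per 100,000 population

workdays_missed = 340
workdays_scheduled = 4_000
pct_days_missed = workdays_missed / workdays_scheduled * 100  # percentage

ed_visits = [0, 1, 3, 0, 2]          # emergency department visits per adult 18-44
avg_ed_visits = sum(ed_visits) / len(ed_visits)       # average per person

print(f"Number of TB deaths: {tb_deaths}")
print(f"TB mortality rate: {tb_mortality_rate:.1f} per 100,000")
print(f"Percentage of workdays missed: {pct_days_missed:.1f}%")
print(f"Average ED visits per person: {avg_ed_visits:.1f}")
```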

34 Selecting or Constructing Measures (6) Numbers, percentages, and rates are the most frequently used in MCH. Least used, but often just as critical, are qualitative indicators such as consensus measures, aggregated (agreement/disagreement) statements, and archival text-based descriptors (e.g., policy statements and group opinions from advisory or consumer groups)

35 Selecting or Constructing Measures (7) It is often helpful to include numbers and qualitative indicators along with rates and percentages so that the latter measures can be understood in the context of the type of service focus for which they were derived. To select or develop high-quality performance measures, candidate measures are generally assessed according to criteria that represent both scientific rigor and practical relevance

36 Selecting or Constructing Measures (8) Responsive measures are able to detect a change. Measures need to be understandable to the audience to whom they will be presented. Regardless of how it is formulated, a measure should have very precise wording, a specific timeframe, and a clearly defined research population (e.g., persons with TB, for quantitative measures) or set of tasks (e.g., steps for securing the needed sample, for qualitative measures)

37 Selecting or Constructing Measures (9) A performance measure should be meaningful, valid, reliable, responsive, and understandable and should allow for risk adjustments (errors)

38 Selecting or Constructing Measures (10) A valid measure is one that measures what it intends to measure. Validity, like all of the qualities in this list, is measured on a continuum, meaning that some measures have greater validity than others

39 Selecting or Constructing Measures (11) Reliable performance measures can be reproduced regardless of who collects the data or when they are collected (assuming the true results have not changed). Like validity, reliability is viewed as a continuum

40 Selecting or Constructing Measures (12) The selection of measures is closely tied to the data or research project information available to construct them. Data or information sources should:
be of high quality, with standardized definitions (as defined and agreed upon by the research team) and data collection methods, and
have acceptable levels of validity and reliability on the items of interest

41 Selecting or Constructing Measures (13) Data or information sources should (cont):
be available within the program service delivery timeframe (e.g., 3 years)
have costs that conform to the budgetary constraints of the program
It is more efficient, but not essential, to construct measures from existing, or secondary, data sources rather than to collect new data specifically for a given set of performance measures

42 PROCESS MONITORING

43 [Figure] Source: Mattessich, P. W. (2003), p. 10

44 Developing a Monitoring System (1) Development of a monitoring system is an essential component of a program and service delivery process measurement plan. The monitoring process described in this presentation identifies the program's objectives as the base from which formulas to measure progress are developed

45 Developing a Monitoring System (2) The monitoring process described in this presentation (cont):
assigns the relative strength or emphasis of a measure as necessary
develops data collection plans
calculates achievement scores at predetermined intervals

46 Developing a Monitoring System (3) Start with the Aim-linked Objectives
The objectives of a Specific Aim, each of which consists of a performance measure and a target, serve as the foundation for project monitoring
Fully developed, measurable objectives must correspond with the program or service purpose
Performance measures must be developed as the program is being planned

47 Developing a Monitoring System (4) Each objective should have an explicit date by which the target is to be achieved (see example on the next slide). With objectives clearly and precisely stated, the next challenge is to develop a system through which progress towards meeting the program's targets can be monitored

48 Performance Measure | Target
Percentage of adults in Village 2, by desired gender and age, within normal range | A 7% increase over baseline (estimated at 80%)
Average amount of time spent collecting staff comments per week by program assistants | Four hours
Number of adults from Village 2 in the project shuttled to and from the city for the purpose of data gathering | Thirty adults sampled 80% of the allocated study days per month

49 Developing a Monitoring System (5) The information derived from monitoring shows which program objectives need more attention in the future and whether any of them require less intensive work. If the process has fallen short on some objectives, this information should trigger an in-depth search for the reasons expected targets were not achieved

50 Developing a Monitoring System (6) The table on the next slide shows the components of a monitoring system. The first two columns are identical to those in the previous slide showing performance measures and targets

51 Developing a Monitoring System (7) The remaining three columns represent the basic elements of a monitoring system, as it builds on the program's Specific Aims linked objectives. See the Expanded Matrix (Handout)

52 Performance Measure | Target | Formula to Measure Progress | Results at End of Year 1 | Achievement Score
Percentage of adults in Village 2, by desired gender and age, within normal range | A 7% increase over baseline (estimated at 80%) | Percentage over baseline with BMI within normal range ÷ 7 | 1.75% | 0.25
Number of adults from Village 2 in the project shuttled to and from the city for the purpose of data gathering | Thirty adults sampled 80% of the allocated study days per month | Number of adults sampled 80% of study days ÷ 30 | 24 | 0.80
Average amount of time spent collecting comments per week by program assistants | Four hours | Number of hours spent collecting comments ÷ 4 | 3.2 hours | 0.80

53 Developing a Monitoring System (8) Formulas The first step in developing a monitoring system is to construct formulas to reflect progress toward achievement of the objectives’ targets. The formula is based on the principle that a score of 1.00 is complete accomplishment

54 Developing a Monitoring System (9) Formulas (cont) For example, a score of 0.99 or lower signifies that the performance measure fell short of the target; a score that exceeds 1.00 indicates greater than expected achievement

55 Developing a Monitoring System (10) Formulas (cont) Three types of formulas can serve this purpose. When the target is a percentage, proportion, or a simple count, the most informative and frequently used formula involves dividing the level of actual achievement at a specified time by the level given in the target: Achievement score = Actual value ÷ Targeted value
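
As a minimal sketch of the ratio formula above, the following reproduces the end-of-Year-1 achievement scores from the slide 52 table; the function name and list structure are illustrative only.

```python
# Achievement score = actual value / targeted value (slide 55).
# The three objectives and Year 1 results come from the slide 52 table;
# the function and variable names are illustrative only.

def achievement_score(actual: float, target: float) -> float:
    """Return the ratio of actual achievement to the targeted value.

    1.00 means the target was fully met; below 1.00 signals a shortfall;
    above 1.00 indicates greater-than-expected achievement.
    """
    return actual / target

objectives = [
    ("Percentage-point increase over baseline in adults within normal range", 1.75, 7.0),
    ("Adults sampled on 80% of study days", 24, 30),
    ("Hours per week spent collecting comments", 3.2, 4.0),
]

for name, actual, target in objectives:
    print(f"{name}: {achievement_score(actual, target):.2f}")
# Prints 0.25, 0.80, and 0.80, matching the Year 1 achievement scores on slide 52.
```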

56 Developing a Monitoring System (11) Data collection plan The first three columns of the previous Table should be completed with the project’s initial plan. To create a fully operational monitoring system, one more step is required: data items and sources necessary to construct performance measures should be identified

57 Developing a Monitoring System (12) This step should not be missed even if some data sources seem obvious, since it is far too common to discover that researchers had incorrectly assumed the necessary data would be available and accessible when needed

58 Developing a Monitoring System (13) Analyses/Interpretation of Results The information derived from monitoring shows which objectives need more attention in subsequent years and whether any of them require less intensive work. Adjustments in resource allocations can be based on the needs of specific objectives for more or less effort

59 Developing a Monitoring System (14) Analyses/Interpretation of Results (cont) Careful assessment of the reasons for shortfalls on objectives should be conducted before any reallocation decisions are made. A review of end of year achievement scores provides helpful information for further investigation and subsequent adjustments to the process

60 PRACTICE EXERCISE Questions 5-9

61 IN CONCLUSION

62 IN CONCLUSION (1) Service programs may not reach their targets for a number of reasons. A primary reason is inadequate resources, which may take the form of insufficient funds across the board or misallocation of funds across Specific Aims linked objectives. It may be possible to detect misallocation if some targets are overachieved, whereas others fall short

63 IN CONCLUSION (2) Other commonly cited reasons why programs may fall short in achieving objectives include:
a lack of adequate knowledge about feasible target levels
external factors that make it difficult or impossible to reach the target (e.g., inability to find or retain clients that meet the program criteria)
inaccurate measurement of the objective
a conceptual error in the program purpose

64 IN CONCLUSION (3) As an evaluation strategy, monitoring has three important shortcomings. First, it does not produce evidence of cause–effect relationships; only evaluation research can do that. Second, the results of monitoring are limited to a single program; they cannot be extrapolated from one program to another

65 IN CONCLUSION (4) As an evaluation strategy, monitoring has three important shortcomings (cont). Finally, there are no firm guidelines for interpretation of the scores. Although a score of 0.70 might be considered good and 0.90 might be superior, the most useful interpretations depend on the program's context and purpose (Peoples-Sheps, Rogers, & Finerty, 2002).

66 IN CONCLUSION (5) Advantages and Disadvantages of Monitoring as an Evaluation Effort
Program monitoring is a valuable tool for building and establishing evidence
Program monitoring is a valuable tool for planning and management decisions
The process is inexpensive and can be applied readily by anyone with entry-level training or experience

67 IN CONCLUSION (6) Advantages and Disadvantages of Monitoring as an Evaluation Effort (cont)
It includes a flexible set of methods that can be modified to accommodate the needs of each service program at both the state and local level
Monitoring requires staff to develop objectives that serve as the basis of the service delivery process and then to plan for necessary data so that the capability for tracking progress is assured

68 IN CONCLUSION (7) Another important advantage is that it encourages the production of information for critical management decisions, identifying and assessing Promising/Best Practices in both short- and long-term time frames and across all levels of the service delivery process. Thus, it is compatible with most governmental programmatic guidelines

69 “Just because you can quantify something, doesn’t mean you understand it” (Aubel, 1993, p. 10). “Not everything that counts can be counted, and not everything that can be counted counts” (Anonymous)

70 References (1) Peoples-Sheps, M. D., Byars, E., Rogers, M. M., Finerty, E. J., & Farel, A. (2001). Setting objectives (revised). Chapel Hill, NC: The University of North Carolina at Chapel Hill. Peoples-Sheps, M. D., & Telfair, J. (2005). Maternal and child health program monitoring and performance appraisal. In J. Kotch (Ed.), Maternal and Child Health: Programs, Problems, and Policies in Public Health (2nd ed., Chapter 16). Boston, MA: Jones & Bartlett Publishers.

71 References (2) Grembowski, D. (2001). The practice of health program evaluation. Thousand Oaks, CA: Sage Publications. Hatry, H. P. (1999). Performance measurement: Getting results. Washington, DC: The Urban Institute Press. Kettner, P. M., Moroney, R. M., & Martin, L. L. (1999). Designing and managing programs: An effectiveness-based approach (2nd ed.). Thousand Oaks, CA: Sage Publications.

72 References (3) Durch, J. S., Bailey, L. A., & Stoto, M. A. (Eds.). (1997). Improving health in the community: A role for performance monitoring. Washington, DC: National Academy Press. Mattessich, PW (2003). The Manager’s Guide to Program Evaluation. Saint Paul, MN: Wilder Publishing Center

73 References (4) Roberts, A. R., & Yeager, K. (2004). Evidence-based Practice Manual: Research and Outcome Measures in Health and Human Services. Oxford University Press. Aubel, J. (1993). Participatory Program Evaluation: A Manual for Involving Program Stakeholders in the Evaluation Process. The Gambia: Catholic Relief Services – USCC.

74 References (5) Telfair, J., & Mulvihill, B.A. (2000), Bridging science and practice: The integrated model of community-based evaluation. Journal of Community Practice, 7(3), 37-65.

75 Questions?

