Strategic Planning Session David Rudder, Ph.D. Rudder Consultants, LLC. May 17, 2006.

Presentation transcript:

Strategic Planning Session David Rudder, Ph.D. Rudder Consultants, LLC. May 17, 2006

EVALUATING THE EFFECTIVENESS OF NONPROFIT ORGANIZATIONS Chapter 14 Vic Murray

What is Evaluation and why is it important?
• Evaluation is the process of gathering information on the results of past activities for the purpose of making decisions about them.
• Organizational effectiveness evaluation (OEE) occurs when this process is applied to assessing the state of the organization as a whole. Typically, this refers to how well the organization is achieving its stated mission and involves looking at goal attainment as well as how efficiently it has performed.

What is Evaluation and why is it important?
• The focus of most evaluations is on the organization, but much still boils down to looking at the results of the actions of many units and individuals.
• Increasing "accountability" refers to the belief that nonprofits, and the people who run them, should be more accountable to those they are created to serve and to those who provide the money to operate them. This belief is the primary cause of the growing interest in organizational evaluation.

Politics
• Politics is inevitable in evaluation because there is so much room for subjectivity that differences can easily arise between the parties involved. The key parties are:
  – Evaluators
  – Those being evaluated
  – Other interested stakeholders

Stages of the Evaluation Process
• The Design Stage – What is the purpose of the organizational evaluation? What exactly will be measured: inputs, activities or processes, outputs, or outcomes?
• The Implementation Stage – How will the information be gathered?
• The Interpretation Stage – What will be considered a "success" or a "failure"? If an evaluation measure reveals problems, conclusions must be drawn about why they occurred in order to make decisions about the future.
• The Application Stage – How will the information be used in subsequent decision making?

Problems with Evaluations
• Evaluation works best when the measurements can be compared to clearly stated goals, objectives, or standards that a given organization is trying to achieve.
• When goals are vague or ambiguous, developing valid measures of them is technically challenging, costly, and subjective.
• A design may measure one level of an organization yet generalize its conclusions to another.

Problems with Evaluations
• Even when everyone focuses on outcomes and agrees on what should be evaluated, there are inevitable difficulties over the extent to which outcome measures really capture the goals they are intended to measure.
• Most evaluation systems are unable to provide conclusive analyses of why the results came out as they did. Most outcomes have multiple causes, and opinions can easily differ over which are the most important.

The United Way Approach
• The outcome information is intended to be used by UW to help member agencies improve program performance, to identify and achieve UW priorities (funding allocation criteria), and to broaden the base of financial and volunteer support.
• Implementation of the outcome measurement system is divided into six stages:
1) Building agency commitment and clarifying expectations
2) Building agency capacity to measure outcomes
3) Identifying outcomes, indicators, and data collection methods
4) Collecting and analyzing outcome data
5) Improving the outcome measurement system
6) Using and communicating outcome information

United Way Continued
• Agencies are not expected to establish targets until they have at least one year of baseline data.
• The system discourages the use of benchmark-based relative standards, or standards based on comparison with similar programs considered exemplary, until accurate outcome data are available.
• It is generally understood that in the first few years of an outcome measurement system, the data often say more about what is wrong with the evaluation system than about what is taking place in the programs.
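To make the baseline rule concrete, here is a minimal Python sketch of an indicator that refuses a target until a full year of baseline data exists. All names and figures are hypothetical illustrations, not drawn from the United Way materials:

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeIndicator:
    """One measurable program outcome, e.g. monthly job placements."""
    name: str
    monthly_observations: list[float] = field(default_factory=list)
    target: float | None = None  # stays unset until a baseline exists

    def has_baseline(self) -> bool:
        # The rule sketched above: at least one year of baseline data first.
        return len(self.monthly_observations) >= 12

    def baseline(self) -> float:
        if not self.has_baseline():
            raise ValueError("Need >= 12 months of data before setting a baseline.")
        first_year = self.monthly_observations[:12]
        return sum(first_year) / len(first_year)

    def set_target(self, improvement: float = 0.10) -> None:
        """Set a target only once a baseline exists (here: baseline + 10%)."""
        self.target = self.baseline() * (1 + improvement)

# Usage: an agency records a year of data, then sets a target.
placements = OutcomeIndicator("monthly job placements")
placements.monthly_observations = [14, 11, 16, 12, 15, 13, 17, 14, 12, 16, 15, 13]
placements.set_target()  # baseline = 14.0, so target = 15.4
```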

The Balanced Scorecard
• A multiattribute system for conceptualizing and measuring performance. It assumes that the primary goal of a business is long-run profit maximization, and argues that this will be achieved through a "balanced scorecard of performance attributes" grouped around four perspectives:
1) The funder's perspective, measuring various financial performance indicators of primary interest to shareholders
2) The client perspective, comprising measures of client satisfaction
3) The internal business perspective, measuring internal efficiency and quality
4) The innovation and learning perspective, which attempts to measure the organization's ability to adapt to the changes required by a changing environment
• In the case of nonprofit organizations, the mission statement becomes the endpoint to be reached through these perspectives. The process starts with defining what that mission is and identifying outcome indicators that will reveal the extent to which it is being achieved.
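As an illustration only, a scorecard along these lines could be modeled as a small data structure grouping measures under the four perspectives. Every measure name and number below is a hypothetical example, not drawn from the chapter:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    """One performance indicator with a target and an observed value."""
    name: str
    target: float
    actual: float

    def attainment(self) -> float:
        # Fraction of target achieved; assumes higher values are better.
        return self.actual / self.target

# The four perspectives named on the slide, each with an illustrative
# nonprofit measure.
scorecard: dict[str, list[Measure]] = {
    "funder's perspective":    [Measure("grant renewal rate (%)", 80, 72)],
    "client perspective":      [Measure("client satisfaction (1-5)", 4.5, 4.2)],
    "internal business":       [Measure("programs delivered on time (%)", 95, 88)],
    "innovation and learning": [Measure("staff trained on new methods (%)", 90, 85)],
}

# Mission-level reading: how far each perspective falls short of its targets.
for perspective, measures in scorecard.items():
    for m in measures:
        print(f"{perspective:25s} {m.name:32s} {m.attainment():.0%}")
```

Note that the attainment ratio assumes higher values are better; a measure where lower is better (such as cost per client) would need an inverted calculation.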

CCAF/FCVI Framework
• Puts forward 12 "attributes of effectiveness," suggesting that organizations can be audited in terms of how well they manifest these attributes. They are:
1) Management direction
2) Relevance
3) Appropriateness
4) Achievement of intended results
5) Acceptance
6) Secondary impacts
7) Costs and productivity
8) Responsiveness
9) Financial results
10) Working environment
11) Protection of assets
12) Monitoring and reporting
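One way an audit team might operationalize such an audit is to rate each attribute and flag the weak ones; the sketch below does this in Python, where the five-point scale and the flagging threshold are assumptions, not part of the CCAF/FCVI framework:

```python
# The 12 CCAF/FCVI attributes of effectiveness, as listed above.
ATTRIBUTES = [
    "management direction", "relevance", "appropriateness",
    "achievement of intended results", "acceptance", "secondary impacts",
    "costs and productivity", "responsiveness", "financial results",
    "working environment", "protection of assets", "monitoring and reporting",
]

def flag_weak_attributes(ratings: dict[str, int], threshold: int = 3) -> list[str]:
    """Return attributes rated below the threshold on an assumed 1-5 scale."""
    missing = set(ATTRIBUTES) - set(ratings)
    if missing:
        raise ValueError(f"Unrated attributes: {sorted(missing)}")
    return [a for a in ATTRIBUTES if ratings[a] < threshold]

# Usage: ratings gathered from an audit workshop (illustrative values).
ratings = {a: 4 for a in ATTRIBUTES}
ratings["monitoring and reporting"] = 2
print(flag_weak_attributes(ratings))  # ['monitoring and reporting']
```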

Conclusion
• There is no tried and tested evaluation system that can be applied by most nonprofit organizations to provide a valid picture of how well the organization is performing.
• We should focus instead on trying to improve the dialogue around the evaluation process.
• If a relationship of trust does not exist before evaluation begins, it must consciously be built as the process is developed, particularly with those who are to be evaluated.

Conclusion Continued
All parties involved (evaluators, those being evaluated, and other stakeholders) should have a voice in deciding the following six questions:
1) What is the purpose of the evaluation?
2) What should be measured?
3) What evaluation methods should be used?
4) What standards or criteria should be applied to the analysis of the information obtained?
5) How should the data be interpreted?
6) How will the evaluation be used?