Overview of evaluation of SME policy – Why and How

Why evaluate?
Evaluation provides:
– valuable evidence for developing schemes so as to make the most of any investment
– validation of the benefits of a scheme and of its value – good news
– hard evidence which can be used to make the case for continuing or developing a scheme in budget discussions
– a basis for inter-scheme, inter-regional and international comparisons

Types of SME policy evaluation
1. Measuring take-up – provides an indication of popularity but no real idea of outcomes or impact. Data needed: scheme management data.
2. Recipient evaluation – popularity and recipients' idea of the usefulness or value of the scheme; very often informal or unstructured. Data needed: recipient data.
3. Subjective assessment – subjective assessments of the scheme by recipients; often categorical, less frequently numeric. Data needed: recipient data.
4. Control group – impacts measured relative to a 'typical' control group of potential recipients of the scheme. Data needed: recipient and control group data.
5. Matched control group – impacts measured relative to a 'matched' control group similar to recipients in terms of some characteristics (e.g. size, sector). Data needed: recipient and control group data.
6. Econometric studies – impacts estimated using multivariate econometric or statistical approaches, allowing for sample selection. Data needed: survey data on recipients and non-recipients.
7. Experimental approaches – impacts estimated using random allocation to treatment and control groups, or to alternative treatment groups. Data needed: control and treatment group data.

Type 2 – Quantitative monitoring
Aim: profile the operational aspects of the scheme and provide a list of firms for later impact analysis.
Key questions:
– Who applied for the scheme?
– How many were funded? How many went ahead?
– How long did approval take?
– Were all the funds allocated?
Data collection is the responsibility of the delivering agent, and data management should be part of the service contract. A key evaluation failure is the lack of good administrative data on the scheme.
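
As an illustration, a minimal sketch of how such monitoring indicators might be computed from an administrative extract. The file name and columns (funded, project_started, grant_amount, applied, approved) are assumptions, not fields of any actual scheme database:

```python
# Monitoring indicators from a hypothetical admin extract (one row per application).
import pandas as pd

apps = pd.read_csv("scheme_applications.csv", parse_dates=["applied", "approved"])
report = {
    "applications": len(apps),
    "funded": int(apps["funded"].sum()),
    "went_ahead": int(apps["project_started"].sum()),
    "median_approval_days": (apps["approved"] - apps["applied"]).dt.days.median(),
    "funds_allocated": float(apps.loc[apps["funded"] == 1, "grant_amount"].sum()),
}
print(report)
```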

Type 3 – Qualitative monitoring
Aim: interpret and analyse the admin data, perhaps supplemented by some 'key informant' interviews.
Possible questions:
– Did projects finish as expected?
– Did support go to the expected types of firms?
– Where are these firms located? In what industries?
– Were applications processed fast enough?
Might be seen as an 'interim evaluation'; best if done independently.
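
The admin-data side of these questions can be profiled quite simply; a small hypothetical sketch, with file and column names assumed:

```python
# Profile where support went, by industry and region (assumed columns).
import pandas as pd

apps = pd.read_csv("scheme_applications.csv")
funded = apps[apps["funded"] == 1]
print(pd.crosstab(funded["industry"], funded["region"], margins=True))
```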

Types 4 and 5 – Control group comparisons
Compares the performance of recipients with a group of similar firms/individuals matched on some quantitative dimension (e.g. growth, exporting). The difference in performance is attributed to the effect of the scheme. Data could be survey data or business register data.
Advantages:
– Relatively simple methodology to apply
– Provides 'hard' data on impacts rather than self-assessment
Disadvantages:
– Often difficult to construct a relevant control group
– Requires information on both participants and controls – sometimes costly
– Does not control for self-selection into the recipient group (see example below)
Operational issues:
– Analysis should be undertaken independently, commissioned by the sponsoring department
– The key issue is the design of the control group, which needs careful thought
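
A hedged sketch of the matching step itself (Type 5), assuming a register extract with treated, sector, employment, turnover and growth columns; nearest-neighbour matching within sector is one common choice, not the only one:

```python
# Type 5 sketch: nearest-neighbour matching within sector (hypothetical data).
import pandas as pd
from sklearn.neighbors import NearestNeighbors

firms = pd.read_csv("business_register.csv")
recipients = firms[firms["treated"] == 1]
pool = firms[firms["treated"] == 0]

matched_r, matched_c = [], []
for sector, grp in recipients.groupby("sector"):
    candidates = pool[pool["sector"] == sector]
    if candidates.empty:
        continue                              # no control available in this sector
    nn = NearestNeighbors(n_neighbors=1).fit(candidates[["employment", "turnover"]])
    _, idx = nn.kneighbors(grp[["employment", "turnover"]])
    matched_r.append(grp)
    matched_c.append(candidates.iloc[idx.ravel()])

gap = pd.concat(matched_r)["growth"].mean() - pd.concat(matched_c)["growth"].mean()
print(f"Raw matched impact estimate: {gap:.3f}")  # does NOT correct self-selection
```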

Type 6 – Econometric approaches
Typically based on a survey of recipients and non-recipients: how did performance changes among recipients compare with those of similar firms, allowing for firm characteristics and selection bias? Uses regression models to identify the policy impact on performance while controlling for selection bias.
Advantages:
– Seen as 'best practice' methodology
– Can control for firm/individual characteristics
– Can control for selection bias
Disadvantages:
– Costly, complicated and difficult to understand
Operational aspects:
– Analysis should be undertaken independently, commissioned by the sponsoring department
– Requires a survey of recipients and non-recipients, so costs are at least double those of self-assessment (Type 1)
– Computer-assisted telephone interviews (CATI) can be very cost-effective
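
A hedged sketch in the spirit of this approach (not necessarily the estimator any given study used): a two-step 'treatment effects' correction with a probit for participation followed by an outcome regression including a selection hazard term. All variable names are assumptions, and in practice the selection equation should include at least one variable excluded from the outcome equation:

```python
# Two-step 'treatment effects' sketch (hypothetical data and names).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("survey.csv")               # recipients and non-recipients

# Step 1: probit for selection into the scheme.
Z = sm.add_constant(df[["size", "age", "exporter"]])
probit = sm.Probit(df["treated"], Z).fit()
xb = probit.fittedvalues                     # linear predictor Z'gamma

# Step 2: hazard (inverse Mills ratio) term, then outcome OLS.
lam = np.where(df["treated"] == 1,
               norm.pdf(xb) / norm.cdf(xb),
               -norm.pdf(xb) / (1.0 - norm.cdf(xb)))
X = sm.add_constant(df[["treated", "size", "age", "exporter"]])
X["hazard"] = lam
print(sm.OLS(df["perf_growth"], X).fit().summary())
```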

Comments on the LDC schemes 1/6

KORET
An impressive scheme; the key effects are the social and economic value for assisted women.
Level 2 evaluation (self-assessment before and after) is an appropriate approach:
– an appropriate control group is difficult to identify and approach
– the counterfactual is the status quo, so a control group approach adds little value
Information from the existing evaluation could be used to develop an NPV-type evaluation of scheme value (sketched below), but this would underestimate the programme's effect, given its strong social impact and local multipliers.
Consider strategic added value.
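
For the NPV-type evaluation mentioned above, a minimal sketch; the discount rate and the benefit stream are placeholders, not KORET figures:

```python
# NPV of the scheme from a stream of yearly net benefits (placeholder figures).
def npv(net_benefits, rate=0.05):
    """Discount yearly net benefits (year 0 first) back to the present."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits))

# e.g. programme cost in year 0, then estimated yearly gains for assisted women
print(npv([-100_000, 30_000, 40_000, 40_000, 40_000]))
```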

Business Centres
Two alternative treatments:
– virtual incubator (networking)
– physical incubator
Compare the relative growth of the two groups of entrepreneurs, controlling for firm capabilities (see the sketch below). Small samples suggest analysing all 5 centres together. Compare the contribution of the different types of support activities of the LDCs.
A complex evaluation with an external control group is not worth doing.
Outcome: also consider investment.
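
A hedged sketch of the two-treatment comparison, pooling all five centres and controlling for capabilities; the column names (physical, capabilities, centre, growth) are assumptions:

```python
# Virtual vs physical incubator, pooling the five centres (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("incubator_firms.csv")      # one row per entrepreneur
# 'physical' = 1 for the physical incubator, 0 for the virtual (networking) one.
model = smf.ols("growth ~ physical + capabilities + C(centre)", data=df).fit()
print(model.params["physical"])              # relative effect of physical incubation
```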

Initiating a Business
The current evaluation is a good approach:
– matched control group of applicants who decided not to go forward (Level 4 evaluation)
To better control for selection bias and for unobserved characteristics relevant to the decision to start a business, and to the business's success:
– collect a bit more information on the applicants: aspirations; qualifications/skills; family/community background; prior employment record
– use a selection-control model (Level 6 evaluation)
– outcome variables: survival, profitability, entry into employment (a sketch follows below)
– impact period: 18 or 24 months
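
A minimal sketch of the outcome side of such an analysis, here reduced to a logit for survival at 24 months on funding plus the extra applicant information suggested above; all variable names are illustrative:

```python
# Logit for survival at 24 months (illustrative variable names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("applicants.csv")           # funded starters and matched non-starters
logit = smf.logit(
    "survived_24m ~ funded + aspiration + skills"
    " + family_background + prior_employment", data=df).fit()
print(logit.summary())
```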

Ex-Post Evaluation of the Consulting Programme?
It is not clear whether it is worth conducting an evaluation: the focus on firms in crisis makes it difficult to identify a sensible control group.
Option 1: effects of alternative treatments
– exploit existing monitoring data
– estimate the effects of different types of consulting support while controlling for firms' initial status (= result of the diagnosis?)
Option 2: test for performance specificities of the target group
– identify the sector/size/age/regional distribution of firms receiving consulting
– compare performance
– differentiate by the type of consulting support provided
Sketches of both options follow below.
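
A hedged sketch of both options against a hypothetical monitoring extract; the file and column names are assumptions:

```python
# Options 1 and 2 against a hypothetical monitoring extract.
import pandas as pd
import statsmodels.formula.api as smf

mon = pd.read_csv("consulting_monitoring.csv")

# Option 1: effects of consulting types, controlling for initial (diagnosis) status.
opt1 = smf.ols("perf_change ~ C(consulting_type) + initial_status", data=mon).fit()
print(opt1.summary())

# Option 2: profile the target group and compare performance by consulting type.
print(mon.groupby(["sector", "consulting_type"])["perf_change"].agg(["count", "mean"]))
```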

Evaluating the Consultancy Programme
Develop logic models for each initiative:
– output of funding activities
– short-term, medium-term and long-term outcomes
– contingency factors
Establish an integrated monitoring system (sketched below):
– firms approaching MAOF: motivation, source of information
– results of the diagnosis
– survey firms on attitudes, behaviour and output at three points in time: at diagnosis, post-consultancy, and after 2 years (potentially as part of another diagnosis)
– put it all into a database
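
One possible shape for such a database, sketched in SQLite; the tables and fields are assumptions about what MAOF might record, not an existing system:

```python
# Sketch of an integrated monitoring database in SQLite (assumed schema).
import sqlite3

con = sqlite3.connect("consultancy_monitoring.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS firms (
    firm_id     INTEGER PRIMARY KEY,
    motivation  TEXT,    -- why the firm approached MAOF
    info_source TEXT,    -- how it heard about the programme
    diagnosis   TEXT     -- result of the initial diagnosis
);
CREATE TABLE IF NOT EXISTS survey_waves (
    firm_id   INTEGER REFERENCES firms(firm_id),
    wave      TEXT CHECK (wave IN ('diagnosis', 'post_consultancy', 'two_years')),
    attitudes TEXT,
    behaviour TEXT,
    output    REAL       -- e.g. turnover or employment
);
""")
con.commit()
```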

Evaluating the Consultancy Programme
Early, formative evaluation:
– sustainability effects of the new 2-year approach
– effects of different types of consultancy on attitudes and behaviour
Impact evaluation:
– establish a control group: match several 'twins' for each funded firm (e.g. from D&B or the business register)
– short surveys of the control group at the beginning and end of the 2-year period (past performance, information about MAOF)
– displacement and multiplier effects (through a second control group survey)
– net effect: NPV based on employment differences relative to the control group (survival and growth); see the sketch below
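
A minimal difference-in-differences sketch of the net-effect step, assuming a two-wave panel of funded firms and their twins with funded, post and employment columns; the resulting per-firm employment effect could then feed the NPV calculation:

```python
# Difference-in-differences on employment (hypothetical two-wave panel).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firms_panel.csv")       # funded firms + register twins, 2 waves
did = smf.ols("employment ~ funded * post", data=panel).fit()
print(did.params["funded:post"])             # net employment effect per firm
```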

Evaluating the Consultancy Programme
Now something speculative: randomise the allocation of consulting support.
a) Firms can choose between:
– a consulting service based on the diagnosis (and having to pay part of the consulting costs)
– being assigned a consulting service randomly (and not having to pay)
b) Some firms get a second consultancy:
– firms may choose the subject
– they will have to pay
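
Purely as a sketch of design (a), with placeholder service names and a hypothetical assign_service helper:

```python
# Sketch of design (a): chosen-and-paid vs randomly assigned consulting.
import random

SERVICES = ["marketing", "finance", "operations", "strategy"]  # placeholders

def assign_service(firm_id, chooses_and_pays, preferred=None):
    """Return the consulting service a firm receives under design (a)."""
    if chooses_and_pays:
        return preferred                     # diagnosis-based choice, with co-payment
    rng = random.Random(firm_id)             # reproducible draw per firm
    return rng.choice(SERVICES)              # random assignment, no co-payment
```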