
1 Cohesion Policy Support to Innovation in Lithuania: Lessons Learnt for Evaluation
Presentation for DIRECTORATE-GENERAL REGIONAL POLICY "EVALUATION NETWORK MEETING", Brussels, 14 April 2011. Agnė Paliokaitė, Senior Policy Researcher, Public Policy and Management Institute, Lithuania. When we speak of innovation (or RTDI) policy in Lithuania, we know that, similarly to other new EU Member States, its implementation is 99 percent funded from EU structural assistance. This has a strong impact on how the policy is designed, implemented and even monitored and evaluated.

2 ROADMAP Aims and challenges Approach Conclusions
Added value, strengths and limitations. Insights based on four studies carried out by PPMI in 2010: two system-level evaluations for the Knowledge Economy Forum and the Prime Minister's Office; an ERAWATCH country report; and an evaluation of the SF indicators' system for the Ministry of Finance (RTDI measures case study). The insights come from a series of small-scale qualitative evaluations that PPMI carried out last year for various clients. They are not based on careful modelling or other primary data; they are the results of interviews with experts and target groups, meta-analysis of previous studies, and analysis of monitoring and financial data, but they allow a look through the keyhole at the national innovation system and what is going on there.

3 WHY ‘SYSTEM’ EVALUATION?
System (portfolio) evaluation – evaluating policy portfolios, not individual programmes. Retrospective: new political will put innovation high on the political agenda in 2009/2010. New ideas – need for revisiting the incrementally developed policy mix. Prospective: need for rethinking future priorities in the context of the 'Progress Strategy Lithuania 2030' and the new Structural Funds period. At lower levels, evaluations tend to deal with actions which implement policies, and therefore place more effort on impacts and effectiveness than on the fundamental question of appropriateness. First of all, innovation is an increasingly hot topic on the political agenda – not only at the European level, as many have noted today, but also on the national scale. Recent years have seen a strong shift in political will towards prioritising innovation policy. A broad-based, horizontal Innovation Strategy was approved last year. While it has its flaws, it also marks a certain shift in the definition of innovation itself. However, implementation of the Strategy is bounded by measures that were planned some time ago. Similar trends towards creativity and an entrepreneurial economy are reflected in the Vision for Lithuania 2030 developed by the Progress Council – a roundtable of noted representatives of society. The main challenge in the forthcoming 2-3 years is to implement effectively the policy measures already planned, while at the same time preparing for the next period and taking into account knowledge accumulated at the national level as well as insights from the worldwide theoretical debate.

4 AIMS OF SYSTEM EVALUATION
To analyse the extent to which the SF-funded innovation policy portfolio/mix reflects the specific conditions and levels of the National Innovation System (NIS). To analyse how the financial proportions fit the policy agenda (the preferred 'routes'). To present preliminary insights on effectiveness in achieving set targets. To draw conclusions on the governance and monitoring system. ... without a portfolio manager ... as long as budgets keep expanding. The end of the catching-up process is in sight. Attention may shift again from "how much we spend" to "how we spend". There might be quite some room for increasing the effectiveness of the funding system. Mechanical transfer of policy models vs National Innovation System specific solutions.

5 EVALUATION FRAMEWORK Relevance (Are we doing the right things?) and Effectiveness (Are we doing things right?). Hypotheses about bottlenecks and conclusions in three areas: 1. Innovation system 'health': market, capability, institutional, network, system, and governance failures. 2. Intervention logic and policy mixes. 3. Extent to which outputs and results are achieved, and critical factors. Innovation policy and governance development. First of all, I want to note that in our evaluations we used the systemic failures approach, the assumption being that innovation policy should respond to the existing gaps in the innovation system. While cohesion policy is mostly designed as a market-failure-addressing policy, there are also other failures in the NIS – such as networking failures and institutional failures. The insights – or hypotheses – are presented in these broad question areas: relevance (are we doing the right things), effectiveness and efficiency (are we doing things right). Over recent years, European policy-makers have increasingly begun to use the language of 'innovation systems' and to refer, at least implicitly, to 'system failures' as a rationale for public sector intervention in innovation activities. The shift in thinking recognises that the concept of market failure is not a sufficient explanation of why innovation systems under-perform and why governments should intervene. Market failure occurs when market mechanisms are unable to secure long-term investments in innovation due to uncertainty, indivisibility and non-appropriability of the innovation process (Arrow 1962). Typically, a market failure manifests itself in an insufficient allocation of funding for risky and innovative investments. In the field of innovation policy, the response to a perceived market failure is generally to provide 'direct' funding (grants, etc.) to enterprises in order to lessen the risk of longer-term investments, or to provide support for venture capital funds.
Apart from the market perspective, analysis of the innovation process also has to take into account key deficiencies of companies and failures in systems (Smith 2000, Arnold 2004). Arnold (2004) differentiates four systemic failures: • capability failures – inadequacies in companies' ability to act in their own best interests, for example through managerial deficits, lack of technological understanding, learning ability or 'absorptive capacity'; • failures in institutions – inadequacies in other relevant NIS actors such as universities, research institutes, patent offices and so on; rigid disciplinary orientation in universities and a consequent inability to adapt to changes in the environment is an example of such a failure; • network failures – problems in the interaction among actors in the innovation system, such as inadequate amounts and quality of links, 'transition failures' and 'lock-in' failures (Smith 2000), as well as problems in industry structure such as too intense competition or monopoly; • framework failures – gaps and shortcomings of regulatory frameworks, intellectual property rights, health and safety rules, etc., as well as other background conditions such as consumer demand, culture and social values (Smith 2000). Tsipouri, Reid and Miedzinski (2009) argued that deficiencies in the 'governance system' (policy-making, evaluation and learning processes) need to be recognised as a fifth form of system failure, termed 'policy failures'. Hence, differences in the capacities and effectiveness of governance in a country can be expected to influence innovation system performance positively or negatively. These failures justify state intervention. A key role of state policymaking is then bottleneck analysis, acting as a change agent, and developing absorptive capacity and technological development. Based on: Arnold E., Evaluating research and innovation policy: a systems world needs systems evaluations, Research Evaluation, 13(1), 2004.

6 CHALLENGES AND LIMITATIONS
Timing: low absorption of funds at the time of evaluation (most measures had only recently started operation). Small-scale evaluations. Hence, inability to apply a quantitative approach. A 'moving object': innovation policy and governance reform (LIS, SITA); changes in the system of SF objectives. Inability to rely on the system of quantitative indicators. Innovation policy specific: M&E is exceptionally difficult for innovation programmes, given the inherently qualitative and diffuse nature of innovation benefits and the long cause-effect chain. There are both theoretical and practical constraints on what can be done. The complexity of the phenomena means that we cannot treat them in as much detail when we operate at a smaller scale. From systemic to micro issues. Innovation is complex and uncertain; there is no 'guarantee' that public resources can deliver innovation. • Innovation can only be reached in the long run, but policy and society ask for short-term efficiency. • Results of evaluations cannot be compared to each other due to a great variety of methods. • Absence of exact objectives against which to measure the effects of a programme (the verbalisation of objectives needs a consensus between all actors). • Objectives of a programme are often political objectives; these are often kept consciously low in order to be achieved anyway. • In spite of progress in the availability of programme-related data, main indicators are often not available. • Choice between qualitative and quantitative methods. • Disconnection of actual effects and dead-weight. • Innovation causes complex and multiple effects that do not evolve linearly; their evaluation is therefore delicate and makes a linear impact analysis impossible.

7 QUALITATIVE APPROACH DATA COLLECTION ANALYTICAL TOOLS
Semi-structured interview programme with stakeholders and target groups (~30 in total); Desk research: literature review, secondary and administrative data; Expert panels (focus groups); Triangulation principle applied to avoid subjectivity and partiality of the data and to guarantee impartial conclusions. Assessment of the innovation system and the RTDI policy mix using the 'system failures' framework; Logical models and reconstruction of the policy intervention logic; Meta-analysis of previously carried out studies and analysis of trends in the theoretical debate; Data-integrating methods: scenarios and road-mapping; Comparative analysis / benchmarking of other countries' experience; Risk analysis, critical factors and analysis of policy options. A qualitative approach was taken due to the previously defined limitations.

8 RESULTS 1: INTERVENTION LOGIC 2007-2013
[Logic model diagram linking ESF and ERDF inputs to outputs and outcomes: younger researchers; more researchers; public R&D infrastructure quality and access for business; higher R&D collaboration between public and private sectors; higher public R&D potential and capacity; higher researcher mobility; higher private sector R&D capacity and potential; higher private R&D investments; more and better researchers in the public sector; better qualified researchers; better innovation support services; stronger clusters; more researchers in business; better private R&D infrastructure; more business R&D projects; leading to higher value added in the economy.] Logic models are most useful when sketching and planning programmes or projects. • Methodological flaws relate to the lack of understanding of the social mechanisms that produce outcomes and of the contextual factors that work under certain circumstances but might not produce the desired effect under others (see e.g. Hedström & Swedberg 1998). • Practical problems have to do with difficulties in finding reliable and valid monitoring and evaluation indicators, aggregating data from outputs to outcomes and long-term impacts, proving the attribution and net effect of particular programme interventions, and finally utilising evaluation findings in reformulating policies. Policy mix: mostly reflects a 'linear' innovation model, but demonstrates progress: non-R&D innovations, networking, e-business. A typical policy mix for an innovation 'follower', with no essential theory of change.
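An intervention logic like the one on this slide can be treated as a directed graph from funds to outputs to outcomes. The node names below are taken from the slide; the specific edges are illustrative assumptions, since the original diagram's arrows are not recoverable from the transcript.

```python
# Minimal sketch of the 2007-2013 intervention logic as a directed graph.
# Node labels come from the slide; the edge structure is an assumption.
from collections import deque

EDGES = {
    "ESF": ["More and better researchers in public sector",
            "More researchers in business"],
    "ERDF": ["Public R&D infrastructure quality and access to business",
             "Better private R&D infrastructure",
             "Better innovation support services"],
    "More and better researchers in public sector":
        ["Higher public R&D potential and capacity"],
    "More researchers in business":
        ["Higher private sector R&D capacity and potential"],
    "Public R&D infrastructure quality and access to business":
        ["Higher R&D collaboration between public and private sectors"],
    "Better private R&D infrastructure": ["Higher private R&D investments"],
    "Better innovation support services": ["More business R&D projects"],
    "Higher public R&D potential and capacity":
        ["Higher R&D collaboration between public and private sectors"],
    "Higher private sector R&D capacity and potential":
        ["Higher private R&D investments"],
    "Higher R&D collaboration between public and private sectors":
        ["Higher value added in the economy"],
    "Higher private R&D investments": ["Higher value added in the economy"],
    "More business R&D projects": ["Higher value added in the economy"],
}

def downstream(node):
    """Return every effect reachable from `node` via breadth-first search."""
    seen, queue = set(), deque(EDGES.get(node, []))
    while queue:
        n = queue.popleft()
        if n not in seen:
            seen.add(n)
            queue.extend(EDGES.get(n, []))
    return seen
```

Walking the graph this way makes the slide's point concrete: every funded output is assumed to chain up to the single impact-level outcome, which is exactly the long cause-effect chain that makes attribution hard.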

9 RESULTS 2: NIS ‘HEALTH’ ASSESSMENT
Dimensions assessed: market failure (productive sector); institutional failure (knowledge infrastructure); capability failure; networking failure; framework conditions; governance failure; demand side (absorptive capacity). Experts and previous studies note that the major structural gap is the lack of innovators. On the other hand, the innovation system suffers from both supply and demand gaps. WEAKEST ELEMENTS: framework conditions, innovation support services, entrepreneurship, demand side, governance.

10 RESULTS 3: POLICY MIX ‘ROUTES’
Heavily expanding and versatile, but a 'linear' logic persists. The mix mainly follows two routes: (1) strengthening the public R&D base, and (2) investing in R&D in R&D-performing firms. Lack of critical mass to implement some objectives placed high on the political agenda (e.g. R&D collaboration). Actors: firms, NGOs, HEIs, PRIs. Measures and allocations: Strengthening the public R&D system – "Valleys", national complex programmes, €678.6m (MoES); Public-private R&D collaboration – RTDI networks, €6.23m; Clusters & innovation support services, €92.7m; Researchers in business, €9.3m; Investments in the private R&D base – R&D in business, €162.2m; Investments into productivity – direct support to companies, €732.4m (MoE): access to capital (€415m), process novelties (€118.2m), e-business, investments into production technologies. Mostly reflects a 'linear' innovation model, but demonstrates progress: non-R&D innovations, networking, e-business. A typical policy mix for an innovation 'follower', with no essential theory of change. Source: PPMI, Knowledge Economy Forum, 2010.
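The "lack of critical mass" claim can be checked with back-of-the-envelope arithmetic on the allocations quoted on the slide (EUR million). Whether the smaller lines overlap with the larger ministry envelopes is not clear from the slide, so the shares below are indicative only.

```python
# Allocations as quoted on the slide, in EUR million.
# Grouping overlaps are unknown, so treat shares as rough indicators.
ALLOCATIONS_EUR_M = {
    '"Valleys", national complex programmes (MoES)': 678.6,
    "Clusters & innovation support services": 92.7,
    "RTDI networks": 6.23,
    "Direct support to companies (MoE)": 732.4,
    "Researchers in business": 9.3,
    "R&D in business": 162.2,
}

def share(measure):
    """Percentage share of one measure in the listed total."""
    total = sum(ALLOCATIONS_EUR_M.values())
    return 100 * ALLOCATIONS_EUR_M[measure] / total
```

On these figures the RTDI networks line comes out below half a percent of the listed total, which is consistent with the slide's conclusion that collaboration objectives lack critical mass relative to the two dominant routes.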

11 EVALUATION OF THE SF MONITORING SYSTEM
'Horizontal' evaluation: ~1000 indicators. 'Vertical' evaluation: ~150 indicators. Quantitative (statistical analysis) as well as qualitative (logical models and consensus-building activities). SMART framework (specific, measurable, achievable, relevant, time-bound).
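A SMART screening of indicators, as mentioned on the slide, can be sketched as a simple checklist pass. The criteria names follow the SMART mnemonic; the example indicator and its scores are invented for illustration, not taken from the actual evaluation.

```python
# Hedged sketch of a SMART screening pass over monitoring indicators.
# The example indicator and its scores are hypothetical.
SMART_CRITERIA = ("specific", "measurable", "achievable", "relevant", "time-bound")

def smart_gaps(indicator):
    """Return the SMART criteria an indicator fails to meet."""
    return [c for c in SMART_CRITERIA if not indicator.get(c, False)]

example = {
    "name": "Number of cooperation contracts signed",  # illustrative only
    "specific": True, "measurable": True,
    "achievable": True, "relevant": True, "time-bound": False,
}
```

Running such a pass over ~1000 'horizontal' indicators would quickly surface the pattern the evaluation reports: targets that are formally measurable yet fail the relevance or timing tests that link them to qualitative objectives.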

12 LEVEL OF ACHIEVEMENT BEFORE 2015
Indicators by objective (R – result indicator; P – product indicator), with level of achievement before 2015 and remarks:
Objective 1: to strengthen the private and public R&D base.
– Private investments (million EUR) (R): LOW. Obstacles to private investments; this indicator can only be applied at the impact level.
– R&D centres created and functional (R): HIGH. No threats.
– Number of R&D base development projects (P): MEDIUM. The achieved value will be half the planned one; however, this does not reflect the real decrease of allocated resources (–€300 million from the "Valleys" to financial engineering measures).
Objective 2: to increase public sector R&D effectiveness and accessibility to companies.
– Number of general work places created in the R&D sector (R): new measures are being created.
– Number of cooperation contracts signed between public and private sector institutions (R).
– Number of R&D projects (P): new measures to ensure achievement of the indicator value are being created.
Objective 3: to increase R&D activity in the private sector.
– Number of R&D projects (R&D activity in companies) (P).
Objective 4: to increase business-science collaboration and intensify knowledge flows.
– New-born technology-intensive companies (R): this is an impact-level indicator.
– R&D and innovation environment improvement projects (P).
Insufficient links between priority indicators and measures, and between funded activities and measures. Separate systems of indicators for accountability purposes (EC) and for policy improvement. In some cases the ratio of value added to administrative costs is unjustified. Quantitative targets will be met in most cases; however, this does not mean that qualitative objectives are met. The system of indicators does not show the real policy value added. Moreover, quantitative indicators do say something about the achievements of the programmes carried out, even if far from everything. They at least provide evidence that the projects supported had tangible outputs and produced results which, according to economic theory, can be expected to have contributed to regional development and the pursuit of the wider objectives of cohesion policy.

13 KEY CONCLUSIONS
Structural gap: lack of innovation absorptive capacity in business and society; a limited local market is the key barrier to knowledge-intensive firms. Policy myopia 1: excessive focus on supply-side measures and on 'supporting the winners' can contradict the systemic characteristics of the NIS. Policy myopia 2: quantitative targets will be 99 percent met, but this does not mean achievement of qualitative objectives. Risk-averse approach to implementation due to limited capacity to evaluate innovation projects. Hypothesis: only a minor part of the economy benefits from innovation measures. Financially marginal "soft" measures are important for behavioural additionality: project pipeline building, innovation brokering. A strong impact on the definition of policy priorities and of what counts as 'legitimate' issues may lead to the mechanical transfer of policy models that may not be the most relevant for the CEECs. An outcome: policy myopia – the importance of local problems and the search for local solutions is ignored. A key lesson from the past: the lack of capacity to design and implement RTD projects; innovative firms make more use of innovation policy instruments. It is in line with national policy to invest in existing strengths ("supporting the champions"), and it is in line with the Structural Fund framework regarding absorption capacity, but this line of reasoning contradicts the systemic characteristics of the NIS. Economic and financial downturn: applicants' own resources; competition between measures; reallocation of SF resources: €145m from the "Valleys" to financial engineering measures after integration of the Economy Recovery Plan; legal and administrative difficulties (e.g. for private investments in the "Valleys"; employment of researchers in firms); risk-averse approach; overly bureaucratic procedures (most complaints from beneficiaries). High administrative costs of innovation measures. Inability to evaluate the content of proposals ('missing the target'?). High administrative costs reduce effectiveness.

14 RECOMMENDATIONS FOR INNOVATION POLICY
Governance allowing quality ideas to enter the 'market': boosting the capability to develop RTDI policy; strengthening project- and programme-level intelligence; novel approaches to funding; a stronger involvement of users in evaluation and funding. Policy as a discovery process: promoting innovative, risky, flexible, "bottom-up" approaches; project pipeline building. Empowering people to innovate ('bottom-up'), and the demand side: procurement, regulation, clusters along the value chain, networks around societal problems. Behavioural additionality is still a rather novel, but already a key, topic for evaluations. The concept has enlarged our thinking about the effects of innovation policy to include, more systematically, learning as a key outcome in itself, enabling further, broader and more sustainable innovation. The term "behavioural additionality" describes the changes in practices that participation in innovation programmes has induced. It recognises that it is the ways in which the innovation process has been transformed that are most significant (new skills, new contacts, more collaborative work). In the academic literature, the term is understood in at least four different conceptualisations, namely: i) as an extension of input additionality; ii) as change in non-persistent behaviour related to R&D and innovation activities; iii) as change in persistent behaviour related to R&D and innovation activities; and iv) as change in the general conduct of the firm, with substantial reference to the building blocks of behaviour. As yet, the applied methodologies most often do not fully capture behavioural additionality. The cases, however, show that it is possible to differentiate behavioural additionality and to define building blocks of behaviour as well as chains of effects. This can be done with a mix of deductive and inductive approaches, with a focus on interaction with the beneficiaries.
But there is also a delicate balance between exploring the concept to its full potential, through all sorts of differentiation and methodologies, on the one hand, and pragmatic considerations and limits of absorptive capacity on the other. Thus, more experiments with sophisticated methodologies are called for. Those experiments should then enable us to define sets of meaningful, simplified methodologies that are more effective and efficient than the existing approaches but do not overburden the process. To that end, there seems to be huge potential in improving the monitoring of programmes so that it can be used much more thoroughly for evaluations.
The evaluation of the effectiveness of innovation policies poses two kinds of specific evaluation challenges: the assessment of the behavioural additionality of instruments on the interactions in the system, and the assessment of the behavioural additionality on social return. This is because we consider the mutual reference of knowledge production (creation) and knowledge consumption (diffusion and use) to be the main engine of cumulative growth of the knowledge economy. The first item – interaction additionality – concerns the opening of the 'black box' of the interaction between agency and firms, in particular the effects of instruments on decision processes and private strategies about R&D (one-time and enduring effects). The different types of behavioural effects of an instrument need to be identified and assessed for their learning leverage, preferably in quantitative form so that they can be related to innovation performance. But this evaluation is also an end in itself, supporting policy design to improve the additionality-enhancing qualities of the instrument in question and the policy mix. This can be done through institutional learning in the form of real-life 'experiments' and studies based on hypothesis testing (thought experiments in policy debate or econometric tests). The use of a control group and of appropriate ways of 'revealing' 'true' behaviour are important items for this assessment.

15 STRENGTHS AND LIMITS OF THE APPROACH
Strengths: focus on NIS bottlenecks, as opposed to the mechanical transfer of policy models that may not be the most relevant for the NIS. Allows for internal coherence and looking beyond the quantitative input/output indicators. From macro- to micro-level analysis (focus on important details). Limitations of the qualitative approach: lack of 'hard' data and evidence (e.g. as opposed to counterfactual analysis) for tracing real change and explaining obtained effects – an object for following evaluations. Recommendation for following evaluations: look for behavioural additionality (knowledge spillovers, changes in innovation-process-related behavioural patterns, interaction additionality, etc.), quantifying the impact of networks. Additionality is a key concept for evaluating the 'effectiveness' of policy instruments for stimulating R&D and innovation. It is historically linked with the conceptual framework of 'market failure' as a rationale for government intervention in R&D, or in knowledge creation and diffusion in general. It has a theoretical foundation in welfare economics: when the 'optimal' level of R&D investment is not attained, public incentives can 'make a difference'. Because the socially optimal level of R&D cannot be calculated, evaluations try to measure whether government incentives (subsidies) are crowding out private investments in R&D. This is done without referring to an optimal level, taking the existing market conditions as given. These evaluations are rather narrow: additionality equals 'not substituting' – a negative motivation – and they are only interested in the financial effects. But the concept of additionality has more potential for policy evaluation than that, as it is linked with the idea of 'compensating' the spillovers of knowledge creation, generated by the special character of knowledge as a semi-public good. Knowledge can be used without wearing out and thus contributes to cumulative growth. Dynamic spillovers are in fact the source of productivity gains that are a multiple of the private productivity gains for the private investor. This means that government has an interest in stimulating private R&D because it generates social benefits that go beyond the simple under-investment hypothesis. Additionality has a much broader meaning in this context. Here we have to extend the reference framework of additionality to the conceptual model of the innovation system, because this is grounded in the understanding of innovation as an interactive and cumulative process. The existence of 'system failures' provides a rationale for policy to 'make a difference' in unleashing the synergy potential of innovation systems that are not performing well because of imbalances, lock-ins, weak connectivity and other systemic dysfunctions related to the interaction pattern of actors. Therefore we can use the concept of additionality as a 'bridging concept' between the older and the newer models of innovation and government intervention. It has to be extended from the linear model, in which 'input' and 'output' additionality are supposed to be closely correlated, to a process-oriented, non-linear model in which 'behavioural additionality' is the key of government policy. Learning takes place in all circumstances, but it is now up to policy evaluation to continuously improve the leverage of policy instruments to increase learning capabilities at the actor and institutional levels and the knowledge distribution capacity of the system. Assessment of the effectiveness of individual instruments (such as subsidies) is therefore part of managing innovation systems. Evaluating instruments from the perspective of their (comparative) behavioural additionality is a step towards new system-based evaluation practices, but much work still needs to be done.

16 THANK YOU FOR YOUR ATTENTION!
Agnė Paliokaitė Senior Policy Researcher Public Policy and Management Institute

