1 Defining and Measuring Success in Technology-based Economic Development
Catherine Searle Renault, Ph.D., RTI International Center for Technology Applications
Science, Technology and Economic Growth: A Practicum for States
March 23, 2004

2 Overview
Principles of Evaluation and Measurement
Theory of Technology-based Economic Development
Indicators
Ways to Collect Data
Example – Evaluation of Maine's Public Investment in R&D
Challenges and Opportunities
Lessons Learned

Assuming that universities want to increase their technology transfer productivity, these results suggest that open discussion and more information on campus about the issues surrounding academic capitalism, among both faculty and graduate students and especially in engineering and the life sciences, would be helpful. There is a great deal of misinformation out there, and increased knowledge would improve the level of the conversation, if nothing else. More important, however, is the issue of mixed messages: if universities want to further their technology transfer missions, they also need to reflect this in their tenure and promotion policies as well as their conflict of interest policies.

3 Why Measure and Evaluate?
Evaluation is the collection, analysis, interpretation and communication of information about the effectiveness of programs undertaken for the public good. It:
Aids in decisions about whether a program should be expanded, continued, improved or curtailed
Increases the effectiveness of program management
Satisfies calls for accountability
Measures the impact on the core problem

4 Key Concepts
Evaluation is a process, not an event.
Evaluation is for practical use, not to sit on the shelf.
The questions to be answered are derived from the program itself.
Compares “What is” with “What would have been” and “What should be”
Takes place in a setting where work/programs are ongoing.

5 Definitions
Program: a complete initiative, e.g. the Advanced Technology Program
Project: one interaction with a client, e.g. a single ATP award
Inputs: resources used to produce outputs and outcomes
Outputs: products and services delivered; the completed products of internal activities
Outcomes: events, occurrences or conditions outside the program itself that are important
Intermediate outcomes: important outcomes, but not ends in themselves
End outcome: the sought-after result

6 General Logic Model
INPUTS: Personnel, Facilities, Funding
OUTPUTS: Activities, Clients Served, Awards Made
INTERMEDIATE OUTCOMES: Mid-Point Events
END OUTCOMES: Ending Events

7 Definitions
Measurement/Monitoring: What are the outcomes of the program/project?
Impact Measurement: Calculate the economic impact of the outcomes
Program/Project Evaluation: Involves causality

8 Causality and Attribution
To prove causality, you need four conditions:
That the outcomes exist
That the inputs precede the outcomes
That the inputs, outputs and outcomes are related
That all other explanations are accounted for
Attribution is weaker, but easier to establish: clients say (attribute) that their results are due to the program/project.

9 Principles for Evaluation
If there is more than one program, establish a consistent approach for all programs
Ensure clear articulation of goals, in as concrete terms as possible
Be as rigorous as possible in design and analysis to increase validity and credibility, but make tradeoffs that reflect operational issues
Obtain evaluation at the state level as well as data for individual program management

10 Program Theory-based Evaluation
Use the theory behind the intervention to design appropriate indicators of intermediate and end outcomes.
Identify the goals and objectives of the program
Construct a model of what the program is supposed to accomplish
Collect data to compare: the goals, the actual observed outcomes, and what would have happened otherwise, i.e. without the intervention
Analyze and interpret the results

11 Goals and Objectives of Technology-based Economic Development
Improve citizens’ quality of life by:
Creating and retaining high-quality jobs (defined as higher-paying), generally in technology-based businesses
Creating and retaining (and in some cases, recruiting) high-quality companies (defined as high-growth, high-paying), generally in technology-based industries
Improving the stability and/or competitiveness of the local and regional economy through innovation

12 Logic Model for Technology-Based Economic Development
[Flattened flow diagram linking research institutions and the technology transfer office to R&D-driven industry and the innovation economy. Recoverable labels: Basic Research, Applied Research, Government R&D Grants, Government Funding Competition, Foundation Funding, Research Funding, Joint Research, Students, Workforce, Market, Debt & Equity Funding, Cost of Doing Business, Develop Marketing Opportunity.]

13 Product/Company Life Cycle Model
Basic Research → Applied Research → Product Launch → Enhance Product → Product Maturity

14 Interventions to Build an Innovation Economy
[Flattened matrix mapping interventions to the company/product life cycle stages (Basic/Applied Research, Design for Manufacturing, Product Launch, Enhance Product, Product Maturity):
Build Research Capacity
Technical Assistance: Centers of Excellence, Advanced Manufacturing Centers, MEP, Sea Grant, CREES
Business Assistance: Incubators, Business Development, Science Parks
Funding: EPSCoR, Federal Funding, ATP, SBIR, STTR, State Research Grants, State-Sponsored Seed Funds, SBA Loans]

15 Intermediate Indicators
Research institutions:
Researchers
S&E graduate students
Federal R&D grants
R&D expenditures
Patents
Publications
New sponsored R&D with local companies
Companies:
Patents
Venture capital raised
SBIR and STTR awards won
Other federal programs (e.g. ATP) won
M&A activity
IPO activity

16 End Outcome Indicators
Average annual earnings of employees
Number of high-technology companies in the state/region
Number of scientists and engineers employed in the state/region
Number of company births, especially high-technology
Percent of revenue from outside the state
Revenue per employee (productivity)

17 Collecting Data
Three possible methods; use one or all:
An annual survey of all recipients of (all) programs, used with a control group to assess causality; potentially split companies and research institutions
Indicator data for the state and benchmark states, to assess changes in competitiveness
Case studies, to understand detailed trends

18 Key Decisions for Annual Survey
Who to survey: the universe of companies and researchers; develop a single list; sample or all?
What is the unit of analysis: the company or the project?
How frequently: annually? Keep respondents on the list for 5 years
When to survey: July-August is a good match for government reporting, but poor for companies
When to analyze data and report: driven by state budget cycles
What methods to use: develop innovative, low-cost methods to collect data, e.g. mail and web
How to assess causality: establish a control group for statistical comparison purposes (see the sketch below)
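The following is a minimal sketch of the control-group comparison mentioned above, assuming the annual survey yields a simple growth measure (here, year-over-year employment growth) for both program clients and a matched group of non-client firms. All figures and variable names are hypothetical, and a two-sample t-test is only one of several reasonable ways to compare the groups.

    # Minimal sketch: compare survey results for program clients against a control
    # group of non-client firms. All values below are hypothetical, for illustration.
    from scipy import stats

    client_growth = [0.12, 0.08, 0.15, 0.05, 0.20, 0.10, 0.07]   # program recipients
    control_growth = [0.04, 0.06, 0.02, 0.09, 0.01, 0.05, 0.03]  # comparison (non-client) firms

    # Welch's two-sample t-test: do clients grow faster, on average, than the controls?
    t_stat, p_value = stats.ttest_ind(client_growth, control_growth, equal_var=False)

    print(f"client mean growth:  {sum(client_growth) / len(client_growth):.3f}")
    print(f"control mean growth: {sum(control_growth) / len(control_growth):.3f}")
    print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3f}")

A small p-value would support (not prove) causality; the other conditions listed on the causality slide still have to hold.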

19 Issues for Indicator Analysis
Linkage with Innovation Index activities: same or related indicators?
Degree of analysis
Sources that will be consistent over time
Data availability: not always available by state, region, county or locality
Timeliness of data
What are appropriate comparison states/regions?

20 Decisions to Make about Case Studies
How to choose which ones to do
Who to interview: we suggest program managers, clients, and other stakeholders, e.g. board members, trade associations, related programs
How to ensure reliability and replicability of data: a protocol based on the indicators, maintain a database, consistency of process

21 Analysis and Interpretation
Be descriptive
Note trends; it is especially useful to note a benchmark year, e.g. before the beginning of the program
Norm by population, gross state product, etc. (see the sketch below)
Graph for easy interpretation
Acknowledge the limitations of the data and the research design
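As a small illustration of norming and benchmarking, the sketch below divides a hypothetical indicator (academic R&D spending) by state population and indexes it to a benchmark year before the program began. The figures are invented for illustration and are not drawn from the Maine evaluation.

    # Minimal sketch: norm an indicator by population and index it to a benchmark year.
    # All figures below are hypothetical.
    rd_spending = {1996: 45.0, 1999: 58.0, 2003: 92.0}                 # $ millions (hypothetical)
    population = {1996: 1_240_000, 1999: 1_255_000, 2003: 1_305_000}   # hypothetical

    benchmark_year = 1996  # e.g. the year before the program began

    # Norm by population, then index each year to the benchmark year (= 100)
    per_capita = {yr: rd_spending[yr] * 1e6 / population[yr] for yr in rd_spending}
    indexed = {yr: 100 * per_capita[yr] / per_capita[benchmark_year] for yr in per_capita}

    for yr in sorted(rd_spending):
        print(f"{yr}: ${per_capita[yr]:,.0f} per capita (index {indexed[yr]:.0f}, {benchmark_year} = 100)")

The same indexing can be applied to comparison states so that trends, rather than absolute levels, are graphed.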

22 Example – Maine Evaluation of Public Investments in R&D
Context of the evaluation:
Maine has substantially increased its ongoing investments in R&D, starting in 1996
The evaluation is legislatively mandated and funded by a "tax" on the R&D investments
Outside experts are required to perform the evaluation of all public R&D investments
It is an ongoing process: initial evaluation and process design in 2001, annual data collection, and a five-year evaluation in 2006

23 The Three Questions
1. How competitive is Maine's sponsored R&D and has it improved over time?
2. What is the impact of Maine's R&D investment on the development of Maine's R&D industry?
3. What is the impact of Maine's R&D investment on the level of innovation and innovation-based economic development?

24 Results Reported for 2003
How competitive is Maine's sponsored R&D and has it improved over time?
Maine started from a lagging position and is making some gains … but generally just keeping up, since other states are also investing heavily. Maine appears to be gaining on other EPSCoR states in SBIR/STTR awards and in venture capital investments.

25 Results Reported for 2003
What is the impact of Maine's R&D investment on the development of Maine's R&D industry?
Maine made sizeable investments in research capacity in the late 1990s and the intermediate outcomes are evident: more faculty, more research equipment and facilities, more proposals submitted, more publications. However, there is little change in intellectual property, joint research with industry, or commercial outcomes.

26 Results Reported for 2003
What is the impact of Maine's R&D investment on the level of innovation and innovation-based economic development?
The state's R&D investments are reaching the appropriate targets: the clients are overwhelmingly small R&D companies (fewer than 10 employees, revenues of less than $1 million, less than five years old). The companies are reporting better-than-average results in employment growth, revenue growth, per capita income, and productivity. We detect many elements of causality for gains in SBIR/STTR awards, intellectual property, and venture capital investments.

27 Challenges and Opportunities
Faces at the table change constantly over a six-year period
General distrust of the evaluation process
Many programs don't keep good records and/or contacts with past clients
Research design for technology-based economic development is challenging because of the long lead times for outcomes to develop, the difficulty of assessing causality, and the lack of good measures for innovation per se

28 Lessons Learned
Doing evaluation correctly is not cheap … surveying, in particular, is time-consuming.
Evaluation is an excellent tool for program management; it is less effective, but may be required, for accountability.
Credibility is linked to your program's overall positioning; a good evaluation can help, but not necessarily. A bad evaluation is a bad evaluation.
The work we are doing in technology-based economic development pays off in the mid- to long run.

