
1 RESEARCH EVALUATION: chasing indicators of excellence and impact
JONATHAN ADAMS, Director Research Evaluation, Evidence, Thomson Reuters
UHMLG Preparing for research assessment, Royal Society of Medicine, 07 MARCH 2011

2 WE HAVE TO RESPOND TO GLOBAL CHALLENGES

3 RESEARCH ASSESSMENT PROVIDES US WITH INFORMATION TO DO THAT
Global challenges and dynamism
Economic turbulence and threats to public resourcing in all sectors
Scarce resources
– must be distributed selectively
– in a manner that is equitable
– and maintains academic confidence
But what are our criteria?
– What is research quality?
– What is excellence?
– What is impact?

4 WE CANNOT DIRECTLY ASSESS WHAT WE WANT TO KNOW
Conventionally, this problem is addressed by expert and experienced peer review.
Peer review is not without its problems.
Peer review of academic research tends to focus on academic impact, so other forms of impact require merit review.
Expert review may be opaque to other stakeholders.
Objectivity is addressed by introducing quantitative indicators.

5 INDICATORS, NOT METRICS
It’s like taking bearings from your yacht
A single indicator is not enough
Good combinations of indicators take distinctive bearings, or differing perspectives across the research landscape
They are unlikely to agree completely, which gives us an estimate of our uncertainty
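The bearings metaphor can be sketched numerically: combine several indicators, each already normalised so that 1.0 means the world average, and treat their spread as a rough estimate of uncertainty. All figures and indicator names below are invented for illustration.

```python
# A minimal sketch of the "bearings" idea: several normalised indicators
# rarely agree exactly, and their spread is a rough uncertainty estimate.
# All values here are hypothetical, not real evaluation data.
from statistics import mean, stdev

# Hypothetical indicator values for one research unit,
# each rescaled so that 1.0 = world average for that indicator.
indicators = {
    "normalised citation impact": 1.4,
    "share of highly cited papers": 1.2,
    "external income per head": 1.6,
    "peer-review grade (rescaled)": 1.3,
}

values = list(indicators.values())
estimate = mean(values)       # the central "position fix"
uncertainty = stdev(values)   # disagreement between the bearings

print(f"estimate: {estimate:.2f} +/- {uncertainty:.2f}")
```

A single bearing gives a point with no error bar; only by triangulating several imperfect indicators do we learn how much to trust the result.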

6 PRINCIPLES OF QUANTITATIVE RESEARCH EVALUATION
First, note that there are no absolutes; it’s all relative
Impact may be local, national or international; we need benchmarks to make any sense of a number
Are the proposed data relevant to the question?
Can the available data address the question?
What data do we have that we can use...?

7 RESEARCH PERFORMANCE INDICATORS COME FROM THE RESEARCH PROCESS

8 WE CAN EXTEND THIS OVER THE WHOLE CYCLE (activities are then not synchronous)

9 WE HAVE A WIDE RANGE OF DATA AND POTENTIAL INDICATORS
Note that all these data points are characterised by:
– Location – where the activity took place
– Time – when the activity took place
– Discipline – the subject matter of the activity
All these should be taken into account in evaluation

10 PRINCIPLES OF QUANTITATIVE RESEARCH EVALUATION
Are the proposed data relevant to the question?
Can the available data address the question?
Are we comparing ‘like-with-like’?
Can we test outcomes by using multiple indicators?
Have we ‘normalised’ our data?
– Consider relative values, not absolute values
Do we understand the characteristics of the data?
Are there artefacts in the data that require editing?
Do the results appear reasonable?

11 HOW CAN WE JUDGE POSSIBLE INDICATORS?
Relevant and appropriate
– Are indicators correlated with other performance estimates?
– Do indicators really distinguish ‘excellence’ as we see it?
– Are these the indicators the researchers would use?
Cost effective
– Data accessibility, coverage, cost and validation
Transparent, equitable and stable
– Are the characteristics and dynamics of the indicators clear?
– Are all institutions, staff and subjects treated equitably?
– How do people respond? Can they manipulate indicator outcomes?
“Once an indicator is made a target for policy, it starts to lose the information content that initially qualified it to play such a role” (Goodhart’s Law)

12 COMMUNITY BEHAVIOUR HAS RESPONDED TO EVALUATION
[Table: counts and percentages of submitted outputs – books and chapters, conference proceedings, journal articles, and other – by discipline group (science, engineering, social sciences, humanities and arts) for RAE1996, RAE2001 and RAE2008]

13 WHY BIBLIOMETRICS ARE A POPULAR SOURCE OF RESEARCH INDICATORS
Publication is a universal characteristic of academic research and provides a standard ‘currency’
Citations are a natural part of academic behaviour
Citation counts are associated with academic ‘impact’
– Impact is arguably a proxy for quality
Data are accessible, affordable and increasingly international – though there is subject imbalance
Data characteristics are well understood and widely explored
– Citation counts grow over time
– Citation behaviour is a cultural characteristic, which varies between fields
– Citation behaviour may vary between countries

14 CITATION COUNTS GROW OVER TIME AND RATES VARY BETWEEN FIELDS

15 PAPERS ARE MORE LIKELY TO BE CITED OVER TIME

16 RAW CITATION COUNTS MUST BE ADJUSTED USING A BENCHMARK
First, we need to separate articles and reviews
Then ‘normalise’ the raw count by using a global reference benchmark
Take year of publication into account
Take field into account
But how do we define field?
– Projects funded by a Research Council
– Departments which host a group of researchers
– Journal set linked by citations
– Granularity: Physiology – Life science – Natural sciences
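The normalisation steps above can be sketched in a few lines: divide each paper's citation count by the world average for papers of the same document type, field and year, then average the ratios. The baseline figures and papers below are invented for illustration; real baselines come from a full citation database.

```python
# A minimal sketch of benchmark normalisation, with invented data.
from statistics import mean

# Hypothetical world baselines: mean citations per paper, keyed by
# (document type, field, publication year).
baseline = {
    ("article", "physiology", 2008): 10.0,
    ("article", "physiology", 2009): 6.0,
    ("review",  "physiology", 2008): 25.0,
}

# A small, hypothetical set of papers for the unit being evaluated.
papers = [
    {"type": "article", "field": "physiology", "year": 2008, "cites": 15},
    {"type": "article", "field": "physiology", "year": 2009, "cites": 9},
    {"type": "review",  "field": "physiology", "year": 2008, "cites": 25},
]

def normalised_impact(papers, baseline):
    # Each ratio compares a paper only with comparable papers:
    # same document type, same field, same year.
    ratios = [p["cites"] / baseline[(p["type"], p["field"], p["year"])]
              for p in papers]
    return mean(ratios)  # 1.0 = world average for comparable papers

print(normalised_impact(papers, baseline))
```

Note how the granularity question bites here: keying the baseline on "physiology" rather than "life science" or "natural sciences" changes every denominator, and hence the final figure.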

17 NORMALISED CITATION IMPACT CORRELATES WITH PEER REVIEW (Chemistry data)
Methodology affects the detail but not the sense of the outcome
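A standard way to test such agreement is a rank correlation: if peer review and normalised impact order the units the same way, Spearman's coefficient is close to 1. The sketch below uses invented department figures, not the Chemistry dataset from the slide.

```python
# A minimal sketch of Spearman's rank correlation (no-ties formula),
# applied to hypothetical departments. Invented data for illustration.
def ranks(xs):
    # Rank positions 1..n by ascending value (assumes no ties).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical departments: normalised citation impact vs peer grade.
impact = [0.8, 1.0, 1.3, 1.7, 2.4]
grades = [2, 3, 3.5, 4, 5]

print(spearman(impact, grades))  # identical orderings -> 1.0
```

Rank correlation suits this comparison because peer grades are ordinal: it asks only whether the two methods tell the same broad story, which is the sense of the slide's conclusion.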

18 THIS IS MOSTLY ABOUT EXCELLENCE: WHAT IS IMPACT?
Research excellence might be termed ‘academic impact’
Other forms of impact for which we may legitimately seek evaluation are
– Economic impact
– Social impact
Quantitative research evaluation traces its origins back to the 1980s
The DTI spent much money in the 1990s failing to index economic impact
It is difficult to track many research innovations through to a new product or process, or vice versa
– Links are many-to-many and time-delayed
Social impact is difficult to define or capture

19 CHASING IMPACT
Eugene Garfield originally talked about citation counts as an index of ‘impact’ fifty years ago
Current focus on economic and social impact should be seen as a serious engagement with other modes of recognising and demonstrating the value of original and applied research
Of course
– The objectives are undefined, which undermines any evaluation
– It is easier to do this in some subjects than others
– Much of the current material is anecdotal
– It is difficult to validate without indicators
But a start has been made
– The principles should follow those of research evaluation
– There must be ownership by the disciplinary communities


