
1 RAE Review: Interim feedback from facilitated workshops
Jonathan Grant and Steven Wooding

2 Plan
- Introduction
  - Who we are and what we do
  - Where we went and who we spoke to
  - What we did and why we did it
- Results from 6 of the 9 workshops
  - Task 1: What is quality research? How should it be assessed?
  - Task 2: Assessing model systems
  - Task 3: Building a better system
  - Task 4: Implementation and implications
- Some emerging observations
  - Preference for expert review, but it needs refining
  - Increased transparency and clarity of process
  - Tension between comparability and appropriateness
  - Need structures to support change

3 Who we are and what we do
- RAND Europe: an independent, not-for-profit public policy think-tank
- Cousin of RAND, the US-based independent think-tank employing 1,600 researchers
- RAND Europe established in Leiden (NL) in 1992 and in Cambridge in 2001
- UK programmes include Transport, Information Society, Health and R&D Policy
- Current projects include a value-for-money (VFM) study of government department research for the NAO and a study of scientific mobility for the WHO

4 Where we went and who we spoke to
- Based on 6 of the 9 workshops*
- 93 people
- Positions:
  - 26 Administrators
  - 38 Senior academics
  - 13 Academics
  - 8 Research Fellows
  - 8 Unclassified
- Fields:
  - 32 Medicine, science & engineering
  - 16 Social science
  - 15 Arts and humanities
  - 29 Not research active
  - 1 Unclassified
* Excludes the Cambridge, Reading & Belfast workshops (n = c. 50)

5 What we did and why we did it
- Facilitated workshops
  - Provide a framework for structured thinking
  - Capture outputs in a standard, comparable form
  - Allow comparison between mixed and like groups of people (e.g., administrators only vs. a mix of HoDs, fellows, and research officers)
- The purpose is to listen, not to evaluate

6 Agenda
- Task 1: What is quality research? How should it be assessed?
- Task 2: Assessing model systems
- Task 3: Building a better system
- Task 4: Implementation and implications

7 Task 1: What is high-quality research? How should it be assessed?
- Purpose
  - Stimulate wide-ranging thinking on the most important aspects marking out high-quality research and research assessment systems
- Task 1
  - Introductions
  - Identify 5 characteristics of high-quality research
  - Identify 5 characteristics of a research assessment system
  - Vote (5 votes for each, allocated as seen fit)

8 Top 10 characteristics of high-quality research

9 Top 10 characteristics of research assessment systems

10 Task 2: Assessing model systems
- Purpose
  - Evaluate the strengths and weaknesses of 4 model systems: expert review; algorithms/metrics; self-assessment; and historical ratings
- Task 2
  - Split into 4 groups of c. 4-5
  - 2 groups look at good aspects of 2 systems each
  - 2 groups look at bad aspects of 2 systems each
  - Also identify questions that need answering

11 Algorithms/metrics
Selected questions
- How do you recognise novelty?
- What do you count?
- How do you ensure comparability?
Good features
- Transparent (4)
- Objective (3)
- Cheap (3)
- Simple (2)
Bad features
- Not suitable for all subjects (5)
- Spurious objectivity (3)
- Open to game playing (2)
- Metrics are proxy measures (2) (see the sketch below)
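As a minimal sketch of the last two points, consider a deliberately naive citation metric. Everything here (the metric, the function, the numbers) is a hypothetical illustration, not something recorded at the workshops: the metric stands in for quality only by proxy, and padding it with self-citations is enough to shift the result.

    # Hypothetical illustration (not from the workshops): a naive
    # bibliometric metric, mean citations per paper.

    def citations_per_paper(citation_counts):
        """Proxy measure of quality: mean citations per paper."""
        return sum(citation_counts) / len(citation_counts)

    honest = [10, 12, 8]             # citation counts for three papers
    gamed = [c + 4 for c in honest]  # same papers plus 4 self-citations each

    print(citations_per_paper(honest))  # 10.0
    print(citations_per_paper(gamed))   # 14.0 -- looks 40% "better",
                                        # though the research is unchanged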

12 Expert review
Questions
- Who are the experts and how are they selected?
- How do you build in an appeals mechanism?
- How do you recognise innovation?
Good features
- Acceptable to community (5)
- Based on specialised knowledge (3)
Bad features
- Not comprehensive (3)
- Not transparent (3)
- Perceived bias (3)
- Inconsistent (2)
- Expensive (2)

13 Historical ratings
Questions
- How do you take account of changing performance?
- Who makes the judgement? (HEI or individual)
- How far back?
Good features
- Light touch (3)
- Cheap (2)
- Ability to plan (2)
Bad features
- Inhibits change (3)
- Low credibility (3)
- Perpetuates silos (2)

14 Self-assessment
Questions
- How would you police it?
- Who sets the goals?
- How do you penalise inflated results?
Good features
- Sensitive to discipline (2)
- Formative: considers self (2)
- Ownership & trust (2)
Bad features
- No cross-discipline comparability (3)
- Open to game playing (3)
- No confidence in system (2)
- Effort could be large (2)

15 Task 3: Building a better system
- Purpose
  - Design an ideal research assessment system
- Task 3
  - Split into 3 or 4 different groups of c. 4-7
  - Select a seed system from Task 2
  - Build on this using aspects of the other systems
  - Present back to plenary

16 Identifying a base system
- Starting point
  - Expert review (16/18 breakout groups)
  - Self-assessment (1/18 breakout groups)
  - 1 group failed to reach a decision
- (Remember: based on 6 of the 9 workshops)

17 Refining the expert system
- Transparency and clarity of process
  - Establish rules at the outset
  - Early (i.e., up to 5 years before assessment)
  - Don't change the rules during the process
  - Provide feedback
  - Legal contract between funding councils (FCs) and HEIs (rules won't change, in return for a guarantee of no challenge)
  - Clarity of funding outcome

18 Refining the expert system
- Longer time period between assessments
  - Review every 8-10 years
  - Triggering mechanisms for interim review at 4-5 years:
    - Self-declaration for new or emerging areas
    - Metrics for declining areas
    - Sampling (selective or random) with other departments

19 Refining the expert system
- Broader panels
  - Mirror the Research Councils/AHRB
  - Broad or supra-panels have broad freedom to establish discipline-specific rules
  - Sub-panels operate to those rules, reporting to the supra-panel
  - Aims to resolve the tension between comparability and appropriateness

20 Refining the expert system
- Continuous rating scale
  - More grades
  - Summation of individuals' scores
  - Possibly based on ranking
- Continuous funding scale
  - No (or reduced) step changes in funding (see the sketch below)
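A minimal sketch of how a continuous rating and funding scale might fit together, assuming a hypothetical per-researcher scoring scale and funding rate (neither comes from the workshops):

    # Hypothetical sketch: department rating as the sum of individual
    # scores, with funding a continuous function of the rating.

    def department_rating(individual_scores):
        """Sum each researcher's score into a department rating."""
        return sum(individual_scores)

    def funding(rating, rate_per_point=10_000):
        """Continuous funding scale: a small change in rating gives a
        proportionally small change in funding, with no grade cliffs."""
        return rating * rate_per_point

    dept_a = department_rating([3.5, 4.0, 2.5, 3.0])   # rating 13.0
    dept_b = department_rating([3.75, 4.0, 2.5, 3.0])  # rating 13.25

    # Under a step-graded system these two could fall either side of a
    # grade boundary and receive very different funding; here the
    # difference is proportional to the 0.25-point gap in rating.
    print(funding(dept_a), funding(dept_b))  # 130000.0 132500.0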

21 Refining the expert system
- Conceptual (non-subject) AoAs
  - Based on research outputs:
    - Formal sciences: theorems
    - Explanatory sciences: laws
    - Design sciences: technological rules
    - Human sciences: artifacts and knowledge
  - Users from the relevant AoA

22 Refining the expert system
- Lay panel members
  - Possibly experienced lay members (such as judges) as panel chairs
- Rolling review
  - One discipline every year
- Transfer fees
  - Reward departments that nurture future high flyers
- Return all staff

23 Task 4: Implementation and implications
- Purpose
  - Examine how the system would be put into practice and evaluate the repercussions
- Task 4
  - Working in the same groups, with a devil's advocate
  - Identify 5 steps for implementation
  - Identify 5 things that could go wrong with the system
  - Identify 5 changes to UK research as a result of the new research assessment system

24 Implications of change
- Some changes to UK research:
  - Ensure research capacity is activated wherever it is found
  - Better support and funding for younger researchers
  - Good support for emerging areas of research
  - Equal opportunities are improved
  - Better recognition of interdisciplinary research
  - Funding directly following excellence
  - Less obsession with the RAE

25 Emerging observations
- Preference for expert review, but it needs refining
- Increased transparency and clarity of process
- Tension between comparability and appropriateness
- Need structures to support change
But …
- Excludes the Cambridge, Reading and Belfast workshops
- Not analysed by discipline and/or profession

