The Independent Review of the Role of Metrics in Research Assessment and Management was set up in April 2014 to investigate the current and potential future roles that quantitative indicators can play in the assessment and management of research. Its report, ‘The Metric Tide’, was published in July 2015 and is available below.
The review was chaired by James Wilsdon, professor of science and democracy at the University of Sussex, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and research administration. Through 15 months of consultation and evidence-gathering, the review looked in detail at the potential uses and limitations of research metrics and indicators, exploring the use of metrics within institutions and across disciplines.
The main findings of the review include the following:
- There is considerable scepticism among researchers, universities, representative bodies and learned societies about the broader use of metrics in research assessment and management.
- Peer review, despite its flaws, continues to command widespread support as the primary basis for evaluating research outputs, proposals and individuals. However, a significant minority are enthusiastic about greater use of metrics, provided appropriate care is taken.
- Carefully selected indicators can complement decision-making, but a ‘variable geometry’ of expert judgement, quantitative indicators and qualitative measures that respect research diversity will be required.
- There is legitimate concern that some indicators can be misused or ‘gamed’: journal impact factors, university rankings and citation counts being three prominent examples.
- The data infrastructure that underpins the use of metrics and information about research remains fragmented, with insufficient interoperability between systems.
- Analysis concluded that no metric can currently provide a like-for-like replacement for REF peer review.
- In assessing research outputs in the REF, it is not currently feasible to rely on quantitative indicators alone.
- In assessing impact in the REF, it is not currently feasible to use quantitative indicators in place of narrative case studies. However, there is scope to enhance the use of data in assessing research environments.
The review made 20 recommendations for further work and action by stakeholders across the UK research system. These propose action in the following areas: supporting the effective leadership, governance and management of research cultures; improving the data infrastructure that supports research information management; increasing the usefulness of existing data and information sources; using metrics in the next REF; and coordinating activity and building evidence.
These recommendations are underpinned by the notion of ‘responsible metrics’ as a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research. Responsible metrics can be understood in terms of the following dimensions:
- Robustness: basing metrics on the best possible data in terms of accuracy and scope
- Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment
- Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results
- Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system
- Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.