Looking back at your own research experience, where have you come up against metrics?
I joined academia after working in industry for a short time post-PhD. Metrics were undoubtedly used in industry, but I worked for a Norwegian company in Norway, which valued more informal, personal, dialogue-driven assessments of my contributions. Also, many of the things we did in industry, such as the teamwork required to get a job completed, were hard to measure with cold, hard numbers.
On joining academia, I became vaguely aware of the use of metrics in research assessment, not only in terms of personal assessment in my yearly appraisal, but also at the institutional level in the form of RAE (now REF). Fortunately, I had mentors who instilled in me a basic joy for the act of research, without thinking too hard about where we publish it and/or how it would be measured.
This has stayed with me until now, as I have progressed through various academic stages and as I’ve become increasingly aware of the use, abuse, and impact of metrics. I should also say I joined academia in 2004, at a time when metrics were not used so intensely and indiscriminately. For example, I had one published paper to my name, I had raised no grant income, I had 2.5 years in industry, and I had, I’m assuming, a decent reference or two. There is no way I’d get a job now.
Much of my research has been and continues to be funded by industry; as far as I am aware, they do not assess our work using metrics, but instead informally evaluate the usefulness of it, and how it impacts their bottom line or internal work practice. Furthermore, many journals we publish in have relatively low journal impact factors (i.e. <5); Science and Nature are not on our radar, so we typically concern ourselves solely with publishing in journals that are most suitable for the research we undertake.
This was initially a major concern for me going forward for promotion, surrounded as I was by colleagues within and outside my immediate department who regularly published in such journals. How was I to compete? Fortunately, journal impact factors do not drive everything – research income, measures of esteem and broader contributions to the community are also valued.
Such a ‘portfolio-based’ approach to assessing research – which can include a so-called ‘biosketch’ penned by the appraisee and expressing a view of their key contributions – is, I think, a fairer way of doing things. It certainly benefited me because, like many people, I was strong in some areas and weak in others.
The event included a panel discussion on the Researcher’s Perspective. What did you think of the session?
The comments of Kyra Sedransk Campbell, the Royal Society/EPSRC Dorothy Hodgkin Research Fellow at Imperial College, have stuck with me. She rightly called for institutions to be honest with candidates about what they want and how they will be measured; in essence, transparency builds trust.
She also indicated that, although the pressure on early career researchers is often intense, there may be relatively little support, which can lead to poor mental health, a problem that extends across all levels of academia.
Penny Andrews, PhD at the University of Sheffield and Post-Doctoral Researcher at the University of Leeds, made me think about how metrics struggle when attempting to ‘measure’ the achievements of people who have had less-than-conventional career paths, or whose research outputs are somewhat different from (but equally important and high quality as) those typically highlighted in job adverts or in promotion documentation. She stressed that actually talking to human beings, rather than simply looking at their documentation and ‘numbers’, is critical.
What one thing did you take away from the event?
That senior academics like me have to help lead the change, but that engagement from early career researchers will be hugely beneficial. We need to listen to them and to recall our own positive and negative experiences, to help make progress.
Oh, and if I can have one more take-away point, it would relate to transparency of process. For example, for people to have faith in a system of assessment, targets should be clear. To this end, why not simply make documentation related to successful promotion or hiring applications public? They could of course have sensitive material redacted, but they would still give people a sense as to what a ‘successful portfolio’ looks like.
Anything else you want to reflect on following the event?
We can sometimes feel that we exist within an echo chamber, especially when limited to discussions on social media or with colleagues in our own institutions. At the event I met and saw lots of people, of different ages, with different roles, in different institutions, who all saw value in trying to develop a more responsible approach to the use of metrics. Because of this, the event was hugely inspiring, and I only wish more academics had been there, to feel the positive messages coming not only from other academics, but also from HEFCE (and its successor on these issues, Research England) and publishers.