James Wilsdon is a Professor of Research Policy in the Department of Politics, Director of Impact and Engagement for the Faculty of Social Sciences, and Associate Director in the Grantham Centre for Sustainable Futures at the University of Sheffield in the UK. He has been involved in many policy and think tank initiatives. Of particular interest here, he chaired an independent review of the role of metrics in the management of the UK’s research system, publishing its final report, The Metric Tide, in 2015. More recently he has chaired an expert panel on Next Generation Metrics for the European Commission.

In this conversation we talk about his experiences working in both policy think tanks and academia, about the increasing focus on research impact for academics, and about how the UK has created some culture change in this direction. He also discusses issues around metric-based systems of assessment for academics and calls on us not to indulge processes of evaluation that we know empirically are bad science.

“Impact is a team sport.”

“A new breed of brokers and boundary spanners … placing a premium on a skillset that is not the traditional academic skillset.”

“Metrics are a technology and there is nothing intrinsically good or evil in them, it’s all about how they are used.”

“It is incumbent on us not to indulge processes of evaluation that we know empirically are bad science.”

He talks about (times approximate) …

01:40 Introduction: his background as professor of research policy and the politics of science and research, and director of research impact for the faculty of social sciences; his work outside academia as director of science policy at the Royal Society

03:40 Moving out of academia into policy jobs while keeping a foot in through his PhD and collaborations, then coming back into the academic system proper; not being strategic about the PhD and future plans while at the think tank; bridging and brokering skills becoming more valued as academia grows more concerned with impact

06:55 Moving from think tank to university – pluses and minuses of both; the pace and speed of a think tank, with shorter cycles, but the risk of being too swayed by the pressures of speaking to think tank audiences; in a university, time for longer, deeper research when you get the funding; just different; a think tank is more proximate to power, with more potential to influence policy debates, while in a university setting it is harder to earn that seat at the table

10:30 Having impact as an academic: his role is facilitating academics having impact, part networks, part credibility; for the faculty, supporting academics at different career stages to strengthen their approach; also, in the UK, the Research Excellence Framework (REF) puts 20% of its weighting on impact, so impact case studies need thinking about now for the next REF cycle; there is an industry of box ticking around the REF just as much as anywhere else; he argues the reason to do impact is not the REF but to have real impact, so start with the substance

14:30 Describing the REF – an institutional assessment done at disciplinary or departmental level; a university makes subject-based submissions to a particular panel, e.g., politics, which assesses the research outputs of all the politics departments in the country over a 6–7-year period and scores them accordingly; 65% of the weighting is on research outputs, where the primary unit is the journal article, 15% on research environment, and 20% on impact, assessed through narrative case studies. Not all academics are expected to have an impact case study, usually about 1 in 10. Real money is attached, as research funding is allocated to universities on the basis of the scores, and this strategic research funding is very valuable to institutions.
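
As a rough illustration of the 65/15/20 weighting just described, here is a minimal sketch in Python. The scores are hypothetical, and the real exercise grades quality profiles per submission and feeds them into a separate funding formula; this only shows how the component weights combine.

```python
# Illustrative only: combine per-component REF scores using the
# 65% outputs / 15% environment / 20% impact weighting described above.
REF_WEIGHTS = {"outputs": 0.65, "environment": 0.15, "impact": 0.20}

def overall_score(scores: dict) -> float:
    """Combine per-component scores (0-100 scale) using the REF weights."""
    return sum(REF_WEIGHTS[component] * value for component, value in scores.items())

# Hypothetical submission: strong on outputs, weaker on impact.
print(overall_score({"outputs": 80, "environment": 70, "impact": 60}))
# 0.65*80 + 0.15*70 + 0.20*60 = 74.5
```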

17:42 At what cost? A huge debate, and a considerable amount of effort. The exercise has just been through a government review, led by Lord Stern, which concluded it was working effectively and was valuable: a trusted, accepted mechanism on both sides that provides accountability for the allocation of substantial money. On the university side, while cumbersome and a lot of work, it is a self-governed process. A lot of the debate rests on what its purpose is and whether it is good value for money; if the purpose were just to allocate that grant, it could be done with a lighter touch or on a purely metric basis – the reason for the Metric Tide review.

20:24 The REF as it has evolved: it has been through successive cycles since the mid-1980s and has taken on a range of purposes: allocating funds; an accountability mechanism; a benchmarking function; and driving culture and behaviour change through the university system, affecting wholesale change. In the Thatcher era the focus was on improving the productivity of universities, and this still has a pronounced effect, e.g., the UK has the most productive research system in the world measured as pounds in, papers out – now in part driven by the REF. Productivity is part of that, but in terms of behaviour change, the introduction in 2014 of impact as a focus alongside outputs has had a massive cultural effect: positive in creating an incentive structure/economy, enabling a more strategic and professional approach to impact, and supporting more diverse career paths in the system. China is an alternative example – cash bonuses for publications, personal profit – but it led to huge problems. The British system had a focus on outputs and now has a focus on impacts as well, which is by and large a good thing.

25:24 How it now shapes appraisal discussions with staff: these now cover research, teaching, and impact. A good thing – good research will have impact, though he accepts some areas of research have much longer-term impacts, e.g., particle physics. Impact is valued as part of the portfolio of what academics do, and there is now a system in place to support academics doing it, doing it better, and being rewarded for it. There is now also a body of case studies from the exercise, organised by topic, institution, and discipline – a great resource that means we can be much more strategic in understanding how impacts arise. Most impact case studies were based on some kind of multi- or interdisciplinary research, and were often collaborative. Impact is a team sport.

29:20 Funding in the UK to support that interdisciplinary emphasis? The UK is on the cusp of the biggest shake-up of its funding system. Since the mid-1960s there has been a set of discipline-based research councils (see links below); all are about to be drawn under the umbrella of a new mega funding agency, UK Research and Innovation, which comes into being in April 2018. The existing councils will still exist as committees under that body, but the goal is now to better support and enable cross-disciplinary work – that is the ambition, and it is a big shift. Alongside this, two big new strategic funding sources further incentivise greater interdisciplinarity: the Global Challenges Research Fund, drawn from the aid budget, for development-relevant research addressing the needs of the developing world, done in collaboration with partners in eligible countries, starting with their problems and aiming for more global impact; and another around industrial strategy, pump-priming commercial realisation – the UK has not been as good as, e.g., Germany at translational funding with more immediate commercial impact alongside industry partners.

34:45 This is also opening new career paths. He talks about a new breed of brokers and boundary spanners that the system now demands, placing a premium on a skillset that is not the traditional academic skillset. This has flow-on effects for how we think about doctoral training and early career research. But how does a boundary spanner submit, e.g., to the politics panel? There is an inbuilt tension in the system over time: if you push the system towards more interdisciplinary work, should you still come back and evaluate people within the politics department? A question for the REF in 2027, and now is the time to start thinking about it. If you push all the incentives in the system towards new ways of working, how do we design the assessment system in 10 years’ time? Incentives drive behaviour, so how do we build complementary incentive systems? There are two schools of thought on the new mega structure: the negative view is that it is terribly monolithic and will inhibit diversity in the system; the positive is that it allows us to be more strategic and lets more collective intelligence arise. By and large he is focussed on the positive.

38:00 What are the issues around metrics? The Metric Tide report was commissioned by the minister to examine the role of metrics in the management of the university system. The REF is done by peer review over a year – labour intensive, not metric driven – and the review looked at the whole system around it. The committee had a mix of great people and ran consultations, workshops, etc. – a big process. Its conclusion was that, in the narrow context of the REF, there were more negatives than positives in going hard to a metric-based system: yes, you might remove some of the burden of the exercise, but you shake off a lot of what is good about the REF. The current approach allows for a diversity of outputs – journal articles are a part, but you can also submit books or arts-based outputs – whereas metrics tend to cover only journal articles. In politics, about a quarter of the outputs were books and monographs, and you don’t get metrics for those. Another reason is concern for diversity, e.g., the gendered nature of citation practices. And on impact, which is currently recorded through narrative case studies, you can’t easily convert that to a metric. New metrics are coming up, e.g., social media measures, but these could unleash perverse behavioural consequences, like Twitter bots, if included in the REF.

42:55 The review interpreted its mission more broadly, though, and on the wider question of how metrics are interpreted and used in the university context, it expressed serious concern about the rising pressure of quantification on academic culture and how to manage it sensibly. It argued there is scope to govern and manage systems of measurement much more sensibly, intelligently, and humanely in terms of their effects. A lot of that is about being responsible in the way you design and use metrics: metrics are a technology, and there is nothing intrinsically good or evil in them – it’s all about how they are used. The review came up with a set of principles for how metrics should be used, e.g., a diversity of indicators. There is more awareness now than 2–3 years ago – not just their review but a growing chorus of voices gathering in volume and intensity internationally, e.g., the San Francisco Declaration on Research Assessment, which came out in 2012 or 2013 and pushes hard against the emphasis on journal impact factors, and the Leiden Manifesto for Research Metrics, which was closely aligned with what they were doing.

In the UK, more universities are adopting policies and statements of good practice on how they will use bibliometrics and altmetrics. This is also having an impact on the REF, in its decision not to move to bibliometrics.

46:50 Impact on his own CV and how he presents his academic persona? He would never use journal impact factors and h-indices to make decisions – it would look very bad – and wouldn’t use them on a panel, because he thinks there are better ways of filtering applicants. “I think to simply look and say they’ve published in Cell therefore they’re better than this person… is the worst kind of sloppy practice. And we know this is statistically illiterate…. A very hard-edged reason why this is bad practice. It is incumbent on us not to indulge processes of evaluation that we know empirically are bad science.” There are all sorts of subtle signifiers we use, and academia is full of these. “All we can do if you’re on an interview panel or evaluating stuff at a departmental level is try to be very conscious of what you’re doing, being quite reflexive about it, and do stamp out explicit bad practices.” He hasn’t experienced resistance to this where he is. “It’s my friends who are the hard-core scientists and who have looked at this and realised what utter bullshit it is.”
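
On the “statistically illiterate” point: citation counts within a journal are heavily skewed, so the impact factor – a mean – is dominated by a few highly cited papers and says little about any individual article. A minimal sketch with synthetic data (not real citation counts) illustrates this:

```python
# Synthetic illustration of why a journal impact factor is a poor guide to
# individual papers: with skewed citation distributions, the mean (what an
# impact factor averages) sits far above what a typical paper receives.
import random
import statistics

random.seed(42)
# Log-normal-ish synthetic citation counts for one "journal".
citations = [int(random.lognormvariate(1.0, 1.2)) for _ in range(1000)]

mean_citations = statistics.mean(citations)      # roughly what an impact factor reflects
median_citations = statistics.median(citations)  # what a typical paper actually gets
print(f"mean = {mean_citations:.1f}, median = {median_citations}")
# The mean is pulled well above the median by a handful of highly cited
# papers, so "published in a high-JIF journal" tells you little about the
# citations of any one paper or person.
```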

50:33 End

Related Links

James Wilsdon - https://www.sheffield.ac.uk/politics/people/academic/james-wilsdon

UK research funding councils – Higher Education Funding Council - http://www.hefce.ac.uk

Research Excellence Framework (REF) - http://www.ref.ac.uk/2014/

The Metric Tide report – https://responsiblemetrics.org/the-metric-tide/

San Francisco Declaration on Research Assessment - http://www.ascb.org/dora/

Leiden Manifesto for Research Metrics - http://www.leidenmanifesto.org


