Dr Kate Charles, Head of Research & Commercialisation at the University of Portsmouth – my initial thoughts on Lord Stern’s REF Review recommendations. Comments and alternative perspectives most welcome.

Stern Review – An independent review of the REF

The Stern Review was published on 28th July, following a call for evidence early in 2016. This independent review makes recommendations on the future operation of the Research Excellence Framework (REF). It examines the purpose and benefits of the REF and the problems and issues arising from REF2014, and sets out principles and recommendations for shaping future research assessment exercises.

The full report ‘Research Excellence Framework (REF) review: Building on success and learning from experience’ is available here:

Research Excellence Framework Review – Full Report

With further articles and comment available from a number of sources, including the Times Higher and The Conversation:

Stern review: submit all researchers to next REF

Stern review: response from the sector

Stress put on academics by the REF recognised by Stern Review

Recommendations outlined in the report, and of most interest to researchers, are set out in bold below, with my comments and further details from the report added as commentary. It is useful to bear in mind that the Stern Review was commissioned principally to look at efficiency and cost savings in future REF exercises, and it was recognised that much of this burden is currently borne by institutions – one would hope that this key principle is not lost in the detailed planning and implementation of the recommendations!

That the broad structure of the REF should not change, and that an interval of 5-7 years between exercises is reasonable.

Staff selection and outputs

1: All research active staff should be returned in the REF.

2: Outputs should be submitted at Unit of Assessment level with a set average number per FTE but with flexibility for some faculty members to submit more and others less than the average.

3: Outputs should not be portable.

The report states that ‘all academic staff who have any significant responsibility to undertake research’ are to be returned to the REF and allocated to a Unit of Assessment, with ‘a common dataset’ to ‘support the accurate description of university research and teaching staff assessed in the REF and TEF’. This could mean that a proportion of academic staff time is attributed to teaching versus research activities and made transparent through HESA reporting, or alternatively that anyone with a [contractual] responsibility or allowance for research will need to be returned to the REF in future. Clearly, more detail is needed on how ‘research active’ will be interpreted by Hefce.

A concern for non-research-intensive universities will be whether the funding formula will then take further account of volume measures if selection is removed and all academic staff are to be submitted. Some institutions moved towards teaching-only contracts prior to REF2014; how, or whether, this will affect the balance between REF and TEF performance needs to be carefully considered by institutions. This recommendation, together with the decoupling of outputs from individuals (see below), may in fact counter this trend towards teaching-only contracts – or it may accelerate it, depending on the all-important details of the REF and TEF measures and how ‘research active’ is to be defined.

Another major change proposed is to ‘decouple’ outputs from individuals. In REF2014, four outputs were required per FTE, with reductions available for ‘special circumstances’. This put the onus on institutions and individuals to demonstrate specific individual circumstances, such as sickness absence, which may have reduced productivity during the REF period. This was clearly a very sensitive area for staff, so I believe that this move, which would allow a unit to focus on the highest-quality outputs across the unit as a whole, will generally be welcomed. The suggested number of outputs is the equivalent of two per FTE submitted on average, within a suggested range of zero to six per FTE.

To prevent a REF ‘transfer window’, the suggestion is that outputs should remain with the institution where they were produced. This would be consistent with impact, which remains with the institution in which the research was done. However, the suggestion may introduce further complications where individuals or institutions are required to prove acceptance dates for research outputs when a member of staff has moved institution during the assessed period. The date of acceptance is already required by the REF Open Access requirements for journal articles and conference proceedings (articles must be deposited in an institutional repository within three months of acceptance), but how this will extend to other works such as monographs, books and book chapters remains to be seen. Further, whilst the recommendation that ‘outputs should be submitted only by the institution where the output was demonstrably generated’ and by the ‘HEI where [the author] was based when the output was accepted’ is generally fair, the two may not be entirely congruent: it assumes that outputs are written up immediately after the work is produced, and that there is no time lag between submitting an article and having it accepted for publication, during which an author may move to another institution. Whilst I agree with the spirit of this recommendation, implementation would appear to add complexity rather than reduce the administrative burden of the REF on institutions.

Assessment of outputs

4: Panels should continue to assess on the basis of peer review. However, metrics should be provided to support panel members in their assessment, and panels should be transparent about their use.

In line with the Metric Tide recommendations, Stern has recognised that metrics alone cannot replace robust peer review, but is supportive of the ‘appropriate use of bibliometric data’ by panels. The recommendation is that the weighting for outputs remains at 65%.

Impact

5: Institutions should be given more flexibility to showcase their interdisciplinary and collaborative impacts by submitting ‘institutional’ level impact case studies, part of a new institutional level assessment.

6: Impact must be based on research of demonstrable quality. However, case studies could be linked to a research activity and a body of work as well as to a broad range of research outputs.

7: Guidance on the REF should make it clear that impact case studies should not be narrowly interpreted, need not solely focus on socio-economic impacts but should also include impact on government policy, on public engagement and understanding, on cultural life, on academic impacts outside the field, and impacts on teaching.

On impact, there are suggestions for broadening the range of impacts, decoupling impacts from outputs, and reducing the number of impact case studies required, with a ‘minimum of one impact case study in each Unit of Assessment (down from a minimum of two in REF2014)’. The main change here is the introduction of institutional level case studies, which could reflect a strategic cross-faculty approach to impact and showcase interdisciplinary work. At Portsmouth, the development of our interdisciplinary R&I themes could be key to developing these institutional level case studies. As regards broadening impact types, noting the suggestions for greater inclusion of cultural impacts and public engagement, the devil will be in the detail – these types of impact are generally thought to be harder to evidence robustly. So it remains to be seen whether the sector will fully embrace these suggestions for impact, most of which were already permitted in REF2014, and whether, in practice, institutions will still favour impact case studies for which harder evidence can more easily be provided, e.g. economic impacts.

As anticipated, the recommendation is that the ‘Impact Template’ from REF2014 is absorbed into the Environment Template in future, with the overall weighting for impact being no less than 20%. The weighting of individual impact case studies would therefore be marginally increased through the removal of the impact template – and for smaller units submitting a single case study, that study would be high currency indeed!

Environment

8: A new, institutional level Environment assessment should include an account of the institution’s future research environment strategy, a statement of how it supports high quality research and research-related activities, including its support for interdisciplinary and cross-institutional initiatives and impact. It should form part of the institutional assessment and should be assessed by a specialist, cross-disciplinary panel.

9: That individual Unit of Assessment environment statements are condensed, made complementary to the institutional level environment statement and include those key metrics on research intensity specific to the Unit of Assessment.

This ‘institutional level Environment assessment’ is, I believe, generally a nod to the research-intensives, which during the call for evidence proposed aggregated, volume-based measures for research assessment. It is perhaps at odds both with ‘supporting excellence wherever it is found’, which is retained as a key principle for the REF, and with the RAND Europe/KCL report on high-performing research units, which identified that such units were largely autonomous from their institutions in terms of culture and environment! In terms of reducing the burden, however, there should be economies in producing such institutional statements, and retention of the UoA statements should allow pockets of excellence to be sufficiently demonstrated. By a process of deduction (and simple mathematics) it appears that the Environment weighting will remain at 15%.

 

Of course, it is important to note that these are recommendations at this stage and that a full consultation on implementation will need to take place. Indeed, Hefce have stated:

“The UK government and the devolved administrations will now need to consider the report and make their views known on next steps. The UK higher education funding bodies will also carefully consider the recommendations. Subject to the views of our respective ministers, we intend to launch a consultation as soon as possible before the end of 2016.”

Next Steps

In order to fulfil the Government’s ambition and complete the next REF by 2021, the following timetable is proposed in the Stern report:

  • Through the summer and autumn, the UK Governments and funding councils should work together to translate the principles outlined in this report into the structures and formulae for submissions. Work will be required to test the impact of our proposals on scope for game playing and to mitigate against unintended consequences. Further work might also be required to model or pilot new ideas.
  • By the end of the year a formal consultation should be issued so that the community can offer their views on the proposed process and the future REF formula. The decisions arising from this consultation should be published in the summer of 2017.
  • As the shape of later years of the TEF exercise are revealed, Government should check the consistency of the two exercises and work to promote complementarity and mutual support and to alleviate tensions and burdens.
  • The proposed process will allow sufficient time for universities to prepare for submissions to be collected in 2020 and the assessment to take place in 2021, with the final outcome of the next REF exercise to be published by the end of 2021 for implementation of the funding settlement in 2022.

 

Comments and discussion of this post and the Stern Review are welcomed – please add them below. Alternatively, if you would like to contact anyone from Research and Innovation Services regarding this post, please email ris@port.ac.uk.