
Judging Research Papers for Research Excellence

Tymms, P.; Higgins, S.

The UK’s Research Excellence Framework of 2014 was an expensive, high-stakes evaluation which had a range of impacts on higher education institutions across the country. One component was an assessment of the quality of research outputs, a major feature of which was a series of panels organised to read and rate the outputs of their peers. Quality control was strengthened after the Research Assessment Exercise of 2008, but questions still remain about how fair it is to rate all papers on the same scale by raters who may vary in both their reliability and their severity/leniency. This paper takes data from a large department in which 23 senior staff rated the outputs from 42 academics. In total, 710 ratings were recorded. The analyses, using the Rasch model, showed that: a single scale described the data well; most raters were reliable, although two were idiosyncratic; there was, however, a noticeable variation in the severity/leniency of the raters, which should be taken into account in the overall assessment. Suggestions for future exercises include a pre-appointment procedure for panel members and statistical adjustments for the severity/leniency of raters.
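The idea of statistically adjusting for rater severity can be illustrated with a minimal sketch of a two-facet Rasch-style model. This is not the authors' analysis: it assumes a simplified binary outcome (a paper receives the top rating or not), a fully crossed simulated design, and joint maximum-likelihood fitting by gradient ascent; the real study used polytomous ratings and an incomplete rating design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers, n_raters = 40, 20
theta = rng.normal(0, 1, n_papers)   # true paper quality (hypothetical)
sev = rng.normal(0, 0.8, n_raters)   # true rater severity (hypothetical)
sev -= sev.mean()                    # identify severities by sum-zero constraint

# Fully crossed design: every rater rates every paper (a simplification)
P, R = np.meshgrid(np.arange(n_papers), np.arange(n_raters), indexing="ij")
p, r = P.ravel(), R.ravel()
prob = 1 / (1 + np.exp(-(theta[p] - sev[r])))
y = rng.binomial(1, prob)            # 1 = "top quality" rating

# Joint maximum-likelihood estimation by gradient ascent on the log-likelihood
th_hat = np.zeros(n_papers)
sv_hat = np.zeros(n_raters)
lr = 0.05
for _ in range(2000):
    pr = 1 / (1 + np.exp(-(th_hat[p] - sv_hat[r])))
    resid = y - pr                   # score residuals
    th_hat += lr * np.bincount(p, weights=resid, minlength=n_papers)
    sv_hat -= lr * np.bincount(r, weights=resid, minlength=n_raters)
    sv_hat -= sv_hat.mean()          # re-impose the identification constraint

corr = np.corrcoef(sev, sv_hat)[0, 1]
print(f"severity recovery correlation: {corr:.2f}")
```

Under the model, a harsh rater's low ratings are explained by a large severity parameter rather than by low paper quality, so the adjusted quality estimates `th_hat` are comparable across raters. Real Rasch analyses of polytomous ratings would add category thresholds (a many-facet rating scale or partial credit model) rather than dichotomising.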


Tymms, P., & Higgins, S. (2018). Judging Research Papers for Research Excellence. Studies in Higher Education, 43(9), 1548-1560.

Journal Article Type: Article
Acceptance Date: Nov 21, 2016
Online Publication Date: Jan 30, 2017
Publication Date: Jan 1, 2018
Deposit Date: Nov 22, 2016
Publicly Available Date: Jul 30, 2018
Journal: Studies in Higher Education
Print ISSN: 0307-5079
Electronic ISSN: 1470-174X
Publisher: Taylor and Francis Group
Peer Reviewed: Yes
Volume: 43
Issue: 9
Pages: 1548-1560

