
Addressing Bias to Improve Reliability in Peer Review of Programming Coursework

Authors

Bradley, Steven

Abstract

Peer review has many potential pedagogical benefits, particularly in the area of programming, where it is a part of everyday professional practice. Although sometimes used for formative assessment, it is less commonly used for summative assessment, partly because of a perceived difficulty with reliability. We explore the use of a hierarchical Bayesian model to account for varying bias and precision amongst student assessors. We show that the model is sound and produces benefits in assessment reliability in real assessments. Such analyses have been used in essay subjects before but not, to our knowledge, within programming.
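The record does not include any code, but the idea sketched in the abstract can be illustrated with a toy simulation. The Python/NumPy sketch below is purely illustrative and is not the paper's actual hierarchical Bayesian model: it assumes each observed peer mark is the submission's latent quality plus an assessor-specific bias plus noise whose spread reflects the assessor's precision, and it recovers the biases with a simple alternating point-estimate procedure rather than full Bayesian inference. All names, numbers and parameters are invented for the example.

import numpy as np

# Illustrative generative assumption (not the paper's model): an observed peer
# mark = true quality of the submission + the assessor's systematic bias
# + noise whose standard deviation reflects the assessor's (lack of) precision.
rng = np.random.default_rng(0)

n_students = 30           # each student is both an author and an assessor
reviews_per_submission = 4

true_quality = rng.normal(60, 10, n_students)   # latent quality of each submission
assessor_bias = rng.normal(0, 5, n_students)    # leniency (+) or severity (-)
assessor_sd = rng.uniform(2, 8, n_students)     # larger sd = less precise assessor

# Simulate the marking: each submission is marked by several randomly chosen peers.
reviews = []  # (submission index, assessor index, observed mark)
for i in range(n_students):
    peers = rng.choice([j for j in range(n_students) if j != i],
                       size=reviews_per_submission, replace=False)
    for j in peers:
        mark = true_quality[i] + assessor_bias[j] + rng.normal(0, assessor_sd[j])
        reviews.append((i, int(j), mark))

# Crude bias correction by alternating point estimates: re-estimate each
# submission's quality from its bias-corrected marks, then each assessor's
# bias from the residuals of their marks. A hierarchical Bayesian model would
# instead place priors on the biases and precisions and infer posteriors.
est_quality = np.zeros(n_students)
est_bias = np.zeros(n_students)
for _ in range(50):
    for i in range(n_students):
        est_quality[i] = np.mean([m - est_bias[j] for s, j, m in reviews if s == i])
    for j in range(n_students):
        resid = [m - est_quality[s] for s, a, m in reviews if a == j]
        est_bias[j] = np.mean(resid) if resid else 0.0

print("correlation between simulated and estimated assessor bias:",
      round(float(np.corrcoef(assessor_bias, est_bias)[0, 1]), 2))

Running the sketch shows the estimated biases tracking the simulated ones closely. The hierarchical Bayesian treatment described in the abstract would go further, placing priors on the biases and precisions and reporting posterior distributions rather than point estimates.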

Citation

Bradley, S. (2019, November). Addressing Bias to Improve Reliability in Peer Review of Programming Coursework. Presented at Koli Calling 2019, Finland.

Presentation Conference Type: Conference Paper (published)
Conference Name: Koli Calling 2019
Acceptance Date: Sep 11, 2019
Online Publication Date: Nov 21, 2019
Publication Date: Nov 21, 2019
Deposit Date: Oct 30, 2019
Publicly Available Date: Jan 28, 2020
Pages: 1-19
Book Title: Koli Calling '19: Proceedings of the 19th Koli Calling International Conference on Computing Education Research
DOI: https://doi.org/10.1145/3364510.3364523
Public URL: https://durham-repository.worktribe.com/output/1141608

Files

Accepted Conference Proceeding (1.1 MB)
PDF

Copyright Statement
© 2019 Copyright held by the owner/author(s). This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Koli Calling '19: Proceedings of the 19th Koli Calling International Conference on Computing Education Research, https://doi.org/10.1145/3364510.3364523





