How does student accountability affect peer reviews?

In this series of blog posts we will dive into the literature around peer review and peer feedback. Each post will summarize the main findings of a different academic paper. Find all our research summaries here.

"Accountability in peer assessment: examining the effects of reviewing grades on peer ratings and peer feedback"
Authors:
Melissa M. Patchan, Learning Sciences & Human Development, West Virginia University, Morgantown, WV, USA
Christian D. Schunn, Learning Research & Development Center, University of Pittsburgh, Pittsburgh, PA, USA
Russell J. Clark, Physics & Astronomy, University of Pittsburgh, Pittsburgh, PA, USA

What was the main research question and setup?

The researchers sought to investigate the impact student “accountability” has on peer assessment. Specifically, they looked at the “rating” (the grade) and the “comments” (the general, non-grade feedback).

One course (287 undergraduate students) was studied, with a fairly standard process: students submitted an assignment and then reviewed their peers’ submissions using an online tool called SWoRD.

Students provided a rating and comments using a rubric with both scale and free-text questions. They were given 14 days to review four different peer submissions, and afterwards each student rated the helpfulness of the feedback they had received on a seven-point scale.

The students in the course were split into three groups with different “accountability” factors:

  1. Students were accountable for the consistency of their “rating” – did the grade they gave match what other students gave the same submission? (One hypothetical way to compute such a consistency score is sketched below.)
  2. Students were accountable for the quality of their “comments” – did the feedback they gave receive a high “helpfulness” score?
  3. Students were accountable for both the “rating” and the “comments”.

For all groups, 3% of their final grade in the course depended on this “accountability” factor.
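The paper measures rating consistency with its own statistics, which we don’t reproduce here. As a purely illustrative sketch – assuming a 1–7 rating scale and treating consistency as closeness to the mean of the other reviewers’ ratings for the same submission – a score could be computed like this (the function name and normalization are our own, not SWoRD’s or the paper’s):

```python
# Hypothetical illustration only -- not the metric used by SWoRD or the paper.
# Idea: a reviewer is "consistent" when their rating sits close to the mean
# of the ratings the same submission received from its other reviewers.

from statistics import mean

def consistency_score(reviewer_rating: float,
                      other_ratings: list[float],
                      scale_min: float = 1.0,
                      scale_max: float = 7.0) -> float:
    """Return a score in [0, 1]: 1.0 means the reviewer matched the other
    reviewers' mean exactly, 0.0 means maximal possible disagreement."""
    peer_mean = mean(other_ratings)
    # Normalize the absolute deviation by the widest gap the scale allows.
    return 1.0 - abs(reviewer_rating - peer_mean) / (scale_max - scale_min)

# A reviewer gives 6; the submission's other reviewers gave 5, 6 and 7
# (mean 6.0), so the reviewer is perfectly consistent.
print(consistency_score(6, [5, 6, 7]))  # 1.0
# A reviewer gives 2 against the same mean of 6.0.
print(consistency_score(2, [5, 6, 7]))  # ~0.33
```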

It’s interesting to note that, although the goal was to investigate the impact of accountability, the study did not include a control group that was not held accountable for its feedback.

Furthermore, 70% of the students forgot what they were actually accountable for. A short survey at the beginning of the review asked each student which group they believed they were in, which split the students into three new groups – their perceived accountability factors. The original groups are called the assigned accountability factors. The article’s results use only the perceived accountability factors, as these provide more meaningful results. Unfortunately, this also introduces selection bias, since the students effectively self-selected their groups.

The results presented are interesting nonetheless, although their validity can be called into question. There are some issues with the perceived accountability grouping: in the survey, 49 students indicated that they were not accountable for their review at all (neither for the “rating” nor for the “comments”). This group would effectively serve as a control group, but the paper reports that it did not differ significantly from the other groups.

“The current study’s findings are consistent with prior research that has demonstrated that constructing feedback is an important contributor to helping students learn how to write – rather than just evaluating the quality of a peer’s work (Lu and Law 2012; Wooley et al. 2008).”

What were the results?

To sum up the results concisely: the group that believed it was accountable only for the “comments” performed better on all measured parameters – amount of feedback, rating consistency, and number of localized comments.

“Moreover, producing higher quality comments may have a stronger influence on the consistency of ratings than assigning a reviewing grade that reflects the rating consistency – that is, although providing comments may have an effect on rating quality, when reviewers are held accountable for producing higher quality comments, the effect is even more distinct.”

What is interesting about these results is that they show students give more consistent ratings when they are graded on the quality of their written comments rather than on their accuracy as graders. Holding students accountable for their comments leads them not just to write better feedback, but also to become more accurate graders!

