How does student accountability affect peer reviews?

In this series of blog posts we will dive into the literature around peer review and peer feedback. Each post will summarize the main findings of a different academic paper. Find all our research summaries here.

"Accountability in peer assessment: examining the effects of reviewing grades on peer ratings and peer feedback"
Authors:
Melissa M. Patchan, Learning Sciences & Human Development, West Virginia University, Morgantown, WV, USA
Christian D. Schunn, Learning Research & Development Center, University of Pittsburgh, Pittsburgh, PA, USA
Russell J. Clark, Physics & Astronomy, University of Pittsburgh, Pittsburgh, PA, USA

What was the main research question and setup?

The researchers investigated the impact student “accountability” has on peer assessment. Specifically, they looked at the “rating” (the grade) and the “comments” (the general, non-grade feedback).

One course (287 undergraduate students) was studied with a fairly standard process: students submitted an assignment and then reviewed their peers’ submissions using an online tool called SWoRD.

Students provided a rating and comments using a rubric with scale and text questions. They were given 14 days to review four different peer submissions, and the recipients afterwards rated the feedback they received (“helpfulness” on a seven-point scale).

The students in the course were split into three groups with different “accountability” factors:

  1. Students were accountable for the consistency of the “rating” – did the grade they give match what other students gave the same submission?
  2. Students were accountable for the quality of their “comments” – did the feedback they gave get a high “helpfulness” score?
  3. Students were accountable for both the “rating” and the “comments”.

For all groups, 3% of their final grade in the course depended on this “accountability” factor.

It is interesting to note that, although the goal was to investigate the impact of accountability, there was no control group that was not held accountable for its feedback.

Furthermore, 70% of the students forgot what they were actually accountable for. At the beginning of the review, a short survey asked them which group they were in, which split the students into three new groups – their perceived accountability factors. The original groups are called the assigned accountability factors. The article reports results only for the perceived accountability factors, as these provide more meaningful results. Unfortunately, this also introduces selection bias, as the students effectively self-selected their groups.

The results presented are interesting nonetheless, although their validity can be called into question. There are some issues with the perceived accountability: 49 students indicated in the survey that they were not accountable for their review at all (neither the “rating” nor the “comments”). This group would effectively be a control group, but the paper reports that it did not differ significantly from the other groups.

“The current study’s findings are consistent with prior research that has demonstrated that constructing feedback is an important contributor to helping students learn how to write – rather than just evaluating the quality of a peer’s work (Lu and Law 2012; Wooley et al. 2008).”

What were the results?

To sum up the results concisely: the group that believed it was accountable only for the “comments” performed better on all parameters – feedback volume, rating consistency, and localized comments.

“Moreover, producing higher quality comments may have a stronger influence on the consistency of ratings than assigning a reviewing grade that reflects the rating consistency – that is, although providing comments may have an effect on rating quality, when reviewers are held accountable for producing higher quality comments, the effect is even more distinct.”

What is interesting about these results is that they show students give more accurate ratings when they are graded on the quality of their written comments rather than on their accuracy as graders. Holding students accountable for their comments leads them not just to write better comments, but also to become more accurate raters!
