How does student accountability affect peer reviews?

In this series of blog posts we will dive into the literature around peer review and peer feedback. Each post will summarize the main findings of a different academic paper. Find all our research summaries here.

"Accountability in peer assessment: examining the effects of reviewing grades on peer ratings and peer feedback"
Authors:
Melissa M. Patchan, Learning Sciences & Human Development, West Virginia University, Morgantown, WV, USA
Christian D. Schunn, Learning Research & Development Center, University of Pittsburgh, Pittsburgh, PA, USA
Russell J. Clark, Physics & Astronomy, University of Pittsburgh, Pittsburgh, PA, USA

What was the main research question and setup?

The researchers investigate the impact of student “accountability” on peer assessment. Specifically, they look at the “rating” (the grade) and the “comments” (the general, non-grade feedback).

One course (287 undergraduate students) was studied, using a fairly standard process: students submit an assignment and then review their peers’ submissions using an online tool called SWoRD.

Students provided a rating and comments using a rubric with scale and text questions. They were given 14 days to review four different peer submissions, and afterwards each student rated the feedback they had received (“helpfulness” on a seven-point scale).

The students in the course were split into three groups with different “accountability” factors:

  1. Students were accountable for the consistency of the “rating” – did the grade they give match what other students gave the same submission?
  2. Students were accountable for the quality of their “comments” – did the feedback they gave get a high “helpfulness” score?
  3. Students were accountable for both the “rating” and the “comments”.

For all groups, 3% of their final grade in the course depended on this “accountability” factor.
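
The paper does not spell out the exact formula it uses for rating consistency, but a rough way to picture the first accountability factor is to compare a reviewer’s score with the scores other reviewers gave the same submission. The short Python sketch below is purely illustrative; the function name and the mean-deviation measure are our own assumptions, not the method used in the paper or in SWoRD:

```python
from statistics import mean

def rating_consistency(reviewer_score, peer_scores):
    """Illustrative consistency measure: how far a reviewer's score sits
    from the average score other reviewers gave the same submission.
    (Assumed metric -- the paper does not publish its exact formula.)"""
    if not peer_scores:
        return None  # no other reviews to compare against
    return abs(reviewer_score - mean(peer_scores))

# Hypothetical example: a reviewer gives 5 on a 7-point rubric item,
# while the other three reviewers of the same submission gave 6, 6 and 7.
deviation = rating_consistency(5, [6, 6, 7])
print(deviation)  # ~1.33 -- a larger deviation would mean lower consistency
```

Under this reading, a small deviation from the other reviewers would translate into a better “rating” accountability score.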

It is interesting to note that, given the goal of investigating the impact of accountability, there was no control group that was not held accountable for its feedback.

Furthermore, 70% of the students forgot what they were actually accountable for. A short survey at the beginning of the review asked which group they were in, and the answers split the students into three new groups – their perceived accountability factors. The original groups are called the assigned accountability factors. The article’s results use only the perceived accountability factors, as they provide more meaningful results. Unfortunately, this also introduces selection bias, as the students effectively self-selected their group.

The results presented are interesting nonetheless, although their validity can be called into question. There are also some issues with the perceived accountability: 49 students reported in the survey that they were not accountable for their review (neither on the “rating” nor the “comments” part). This would effectively be a control group, but the paper states that this group did not differ significantly from the other groups.

“The current study’s findings are consistent with prior research that has demonstrated that constructing feedback is an important contributor to helping students learn how to write – rather than just evaluating the quality of a peer’s work (Lu and Law 2012; Wooley et al. 2008).”

What were the results?

To sum up the results concisely, the group that believed it was accountable only for the “comments” performed better on all parameters: feedback volume, rating consistency, and localized comments.

“Moreover, producing higher quality comments may have a stronger influence on the consistency of ratings than assigning a reviewing grade that reflects the rating consistency – that is, although providing comments may have an effect on rating quality, when reviewers are held accountable for producing higher quality comments, the effect is even more distinct.”
